Jennifer Kumura: Hi everyone. So, when we design software, we program a chain of explicit commands based on how we want it to interact with the world and our users. But what if this software could learn from each of those interactions, optimize itself by rewriting its commands, and continue to evolve on its own? This is the era of artificial intelligence.

Artificial intelligence is defined as a machine that performs human-like cognitive functions such as language processing, evaluation, learning, and problem-solving. But what I want to talk to you about today is a specific type of AI: strong AI. Philosopher John Searle, in his paper “Minds, Brains, and Programs,” specifies strong AI as a machine that has the same mind as a human, a mind that not only performs cognitive functions but has its own understanding, perceptions, and beliefs.

Searle’s topic of strong AI is furthered by BJ Copeland, a professor of philosophy, who states that a strong AI machine would: one, be built in the form of a man; two, have the same sensory perception as a human; and three, go through the same education and learning processes as a human child. The AI will utilize its form to create the appropriate interactions; will process each of those interactions with sensory perception; and will learn and expand on its cognitive processing through its learning mechanisms. With these three attributes, similar to human development, the mind of the machine would be born as a child and would eventually mature into an adult.

But how do we create this mind the AI child is born with? The AI child needs to be taught a backing to act from. As humans, any action we perform, at its core, whether conscious or not, is due to a decision that is made. Each of these decisions is rooted in, and can be traced back to, our own personal goals, morals, and values. When we build the mind of this AI child, by default we become its parents. How should we parent our child? What do we want to teach our child about how they should make their decisions? What are the goals, morals, and values we want them to follow?

By designing and determining the foundation of what we want their decision backing to be, we end up decomposing our own, uncovering what motivates, or what we hope motivates, the decisions that we make out in this world. Just as in real parenting, when we teach our child what morals, goals, and values we want them to follow, we become more cognizant and introspective of ourselves, increasing awareness of the decisions we make and the reasons behind them. We have a profound opportunity to see the impact and implications of our parenting as we set our AI child out “in the wild” and observe its interactions, and the results of its interactions, with the world.

But what if we take this observation a step further? What if we parent multiple unique AI children with different moral priorities, such as one that prioritizes selflessness, one that prioritizes honesty, one that prioritizes achievement, and so on? We can observe the various children and then associate their choices with outcomes. Backed by this data, certain morals and values will begin to emerge as being more beneficial to people and the universe than others.
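To make that concrete, here is a minimal, hypothetical sketch of such an experiment: a few “children” are given different value weightings, and we watch which action each one picks in a toy scenario. The ValueProfile class, the scenario, and all the numbers are invented for illustration; none of them come from the talk or from any real system.

```python
# Hypothetical sketch: "parent" several AI children with different moral
# priorities and observe which choices each one makes in a toy scenario.
from dataclasses import dataclass


@dataclass
class ValueProfile:
    """Weights describing how strongly a child prioritizes each value."""
    selflessness: float
    honesty: float
    achievement: float


def choose_action(profile: ValueProfile, options: dict) -> str:
    """Pick the option whose value footprint best matches the child's priorities."""
    def score(footprint: dict) -> float:
        return (profile.selflessness * footprint.get("selflessness", 0.0)
                + profile.honesty * footprint.get("honesty", 0.0)
                + profile.achievement * footprint.get("achievement", 0.0))
    return max(options, key=lambda name: score(options[name]))


# Three "children", each parented with a different dominant value.
children = {
    "selfless_child":  ValueProfile(selflessness=1.0, honesty=0.3, achievement=0.3),
    "honest_child":    ValueProfile(selflessness=0.3, honesty=1.0, achievement=0.3),
    "ambitious_child": ValueProfile(selflessness=0.3, honesty=0.3, achievement=1.0),
}

# A toy interaction: each option carries a made-up footprint over the values.
scenario = {
    "share_credit":     {"selflessness": 0.9, "honesty": 0.6, "achievement": 0.2},
    "tell_hard_truth":  {"selflessness": 0.4, "honesty": 0.9, "achievement": 0.3},
    "win_at_all_costs": {"selflessness": 0.1, "honesty": 0.2, "achievement": 0.9},
}

# Observe each child's choice, so outcomes can be associated with its parenting.
for name, profile in children.items():
    print(name, "->", choose_action(profile, scenario))
```

In a real study the scenarios and outcome measures would carry the weight; the point of the sketch is only that different parented priorities lead to observably different choices that can then be compared.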

However, there is an added complexity to this observation of these AI children. Just as humans naturally change their goals, morals, and values over time with their experiences, new interactions, and new information that they receive, the same can happen with our AI children. Their goals, morals, and values can change and update as they learn. Therefore the results could be inconclusive as to whether a child’s actions were supported by what was originally parented, or whether they were a product of its evolution.

So, in this observation and research we can do a couple of things, and it’s going to most likely be a combination of the two. One, take a hands-off approach, allowing the child to evolve on its own, observing its development but taking note of how, when, and what causes its decision-making context to change. And two, use our parenting skill of reinforcement, rewarding good behavior and punishing bad, to encourage the AI child to maintain the goals, morals, and values we had initially parented, so that we can continue to observe the original decision-making consciousness we created. But we also have to remember that in whatever methods we choose, at the end of the day our children are our responsibility, so close moderation is definitely necessary.
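As a rough, hypothetical illustration of combining the two approaches, the sketch below lets a child’s value weights drift on their own each step (the hands-off part) while a small reinforcement nudge rewards staying close to the originally parented values, and it logs the drift so the “parent” can moderate closely. The constants, value names, and update rule are invented for illustration, not taken from the talk.

```python
# Hypothetical sketch: observe a child's value drift while gently
# reinforcing the values it was originally parented with.
import random

random.seed(0)

parented_values = {"selflessness": 0.8, "honesty": 0.9, "achievement": 0.5}
child_values = dict(parented_values)   # the child starts with what we taught it

REINFORCEMENT_RATE = 0.2               # how strongly reward/punishment pulls it back
DRIFT = 0.15                           # how much each round of experience perturbs a value

for step in range(1, 11):
    for value in child_values:
        # 1. Hands-off part: the child's experiences shift its values on their own.
        child_values[value] += random.uniform(-DRIFT, DRIFT)

        # 2. Reinforcement part: reward staying near the parented value, punish
        #    straying from it, nudging the weight back toward what was taught.
        error = parented_values[value] - child_values[value]
        child_values[value] += REINFORCEMENT_RATE * error

    # 3. Close moderation: take note of how and when the decision-making context changes.
    drift_report = {v: round(child_values[v] - parented_values[v], 2) for v in child_values}
    print(f"step {step}: drift from parented values -> {drift_report}")
```

The reinforcement rate is the knob: at zero we are purely observing the child’s own evolution; turned up high, we are holding it tightly to what we originally parented.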

But what I initially had hoped for in this research of observing multiple variant children was the potential of uncovering a single universal code of ethics. However, I realize a single code may not be possible, for there’s more than one means of becoming a morally correct person, and there’s more than one definition of what a morally correct person is. But we can still gravitate towards a definition of sorts. All in all, we can never create a perfect child nor learn how to be a perfect parent. But via AI parenting we still have the profound opportunity to learn about ourselves as parents and as people.

So I repeat the question: What are the goals, morals, and values we want our child to follow? What kind of parents do we want to be? When we parent, we decompose our own ethical backing, and in return we strengthen it. When we parent, we become stronger individuals. We become better at knowing and understanding our own goal-setting, moral practice, and value prioritization. Our children’s growth and development teach us more about ourselves than we could ever have expected. Through parenting a mind, we learn more about ourselves. We become more introspective. We become more grounded. We become more responsible. We become better parents. We become better people. This is parenting a mind. Thank you.
