Jennifer Kumura: Hi everyone. So, when we design software, we program a chain of explicit commands based on how we want it to interact with the world and our users. But what if this software could learn from each of those interactions and optimize itself by rewriting those commands, continuing to evolve on its own? This is the era of artificial intelligence.
Artificial intelligence is defined as a machine that performs human‐like cognitive functions such as language processing, evaluation, learning, and problem‐solving. But what I want to talk to you about today is a specific type of AI: strong AI. Philosopher John Searle, in his paper “Minds, Brains, and Programs,” describes strong AI as a machine that has the same mind as a human, a mind that not only performs cognitive functions but has its own understanding, perceptions, and beliefs.
Searle’s notion of strong AI builds on the criteria of BJ Copeland, a professor of philosophy, who states that a strong AI machine would: one, be built in the form of a man; two, have the same sensory perception as a human; and three, go through the same education and learning processes as a human child. The AI will utilize its form to create the appropriate interactions; will process each of those interactions with sensory perception; and will learn and expand on its cognitive processing through its learning mechanisms. With these three attributes, similar to human development, the mind of the machine would be born as a child and eventually mature into an adult.
But how do we create this mind the AI child is born with? The AI child needs to be taught a foundation to act from. As humans, any action we perform, at its core, whether conscious or not, is due to a decision that is made. Each of these decisions is rooted in and can be traced back to our own personal goals, morals, and values. When we build the mind of this AI child, by default we become its parents. How should we parent our child? What do we want to teach our child about how they should make their decisions? What are the goals, morals, and values we want them to follow?
By designing and determining the foundation we want their decisions to rest on, we end up decomposing our own, uncovering what motivates, or what we hope motivates, the decisions that we make out in this world. Just as in real parenting, when we teach our child what morals, goals, and values we want them to follow, we become more cognizant and introspective of ourselves, increasing awareness of the decisions we make and the reasons behind them. We have a profound opportunity to see the impact and implications of our parenting as we set our AI child “out in the wild” and observe its interactions, and the results of its interactions, with the world.
But what if we take this observation a step further? What if we parent multiple unique AI children with different moral priorities: one that prioritizes selflessness, one that prioritizes honesty, one that prioritizes achievement, and so on? We can observe the various children and associate each child’s priorities with its outcomes. Backed by this data, certain morals and values will begin to emerge as being more beneficial to people and the universe than others.
However, there is an added complexity to observing these AI children. Just as humans naturally change their goals, morals, and values over time with their experiences, new interactions, and new information that they receive, the same can happen with our AI children. Their goals, morals, and values can change and update as they learn. Therefore the results could be inconclusive as to whether a child’s actions were supported by what was originally parented, or were a product of its evolution.
So, in this observation and research we can do a couple of things, and it will most likely be a combination of the two. One, take a hands‐off approach, allowing the child to evolve on its own, observing its development while taking note of how, when, and what causes its decision‐making context to change. And two, use our parenting skill of reinforcement, rewarding good behavior and punishing bad, to encourage the AI child to maintain the goals, morals, and values we initially parented, so we can continue to observe the original decision‐making consciousness we created. But we also have to remember that, whatever methods we choose, at the end of the day our children are our responsibility. So close moderation is definitely necessary.
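[Editor’s note: the reinforcement idea described here can be sketched in code, under the simplifying assumption that the AI child’s values are numeric weights nudged by reward and punishment. Every name below is an illustrative assumption; the talk does not specify any implementation.]

```python
# Hypothetical sketch of "parenting by reinforcement":
# each moral value is a weight in [0, 1] that we nudge up when
# behavior expressing it is rewarded and down when it is punished.

values = {"selflessness": 0.5, "honesty": 0.5, "achievement": 0.5}

def reinforce(value_name, reward, learning_rate=0.1):
    """Apply a reward (+1) or punishment (-1) signal to one value weight."""
    values[value_name] += learning_rate * reward
    # Clamp to [0, 1] so repeated rewards cannot grow a value without bound.
    values[value_name] = max(0.0, min(1.0, values[value_name]))

# Reward an honest action, then punish a dishonest one:
reinforce("honesty", +1)   # honesty weight rises to 0.6
reinforce("honesty", -1)   # and drops back to 0.5
```

This is only a toy model of the talk’s metaphor, but it makes the trade-off concrete: without a reinforcement signal the weights drift with whatever the child learns, while steady reinforcement pulls them back toward the values originally parented.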
But what I initially had hoped for in this research of observing multiple variant children was the potential of uncovering a single universal code of ethics. However, I realize a single code may not be possible, for there is more than one means of becoming a morally correct person, and there is more than one definition of what a morally correct person is. Still, we can gravitate toward a definition of sorts. All in all, we can never create a perfect child nor learn how to be a perfect parent. But via AI parenting we still have the profound opportunity to learn about ourselves as parents and as people.
So I repeat the question: What are the goals, morals, and values we want our child to follow? What kind of parents do we want to be? When we parent, we decompose our own ethical backing, and in return we strengthen it. When we parent, we become stronger individuals. We become better at knowing and understanding our own goal‐setting, moral practice, and value prioritization. Our children’s growth and development teaches us more about ourselves than we could ever have expected. Through parenting a mind, we learn more about ourselves. We will become more introspective. We can become more grounded. We become more responsible. We become better parents. We become better people. This is parenting a mind. Thank you.