Kate Darling: You know, increasingly we’re using automated technology in ways that kind of support humans in what they’re doing rather than just having algorithms work on their own, because they’re not smart enough to do that yet or deal with unexpected situations. So we’re interacting more and more with these systems, and so you need an interface that makes sense for a human to interact with. And so a lot of these systems have an interface that is personified and that will talk to you.
The effect that I’m so interested in, the psychological effect of us treating these systems like social actors, is actually…we achieved that in the 60s. Like in the 60s we had this chatbot called ELIZA that Joseph Weizenbaum made. It’s a really famous example of a really simple chatbot. All it did was answer everything with a question, like a psychoanalyst would. So if you were like, “Oh, I don’t like my mom,” it would be like, “Why don’t you like your mom?” And people would just open up and tell it all sorts of things even though it was very primitive in how it behaved.
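The trick ELIZA used, reflecting a statement back as a question, can be sketched in a few lines of Python. This is an illustrative reconstruction of the idea, not Weizenbaum’s original DOCTOR script; the patterns and word list here are invented for the example:

```python
import re

# Swap first- and second-person words so "my mom" reflects back as "your mom".
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your",
    "am": "are", "i'm": "you're", "mine": "yours",
}

def reflect(text: str) -> str:
    # Lowercase, strip trailing punctuation, and swap pronouns word by word.
    words = text.lower().rstrip(".!?").split()
    return " ".join(REFLECTIONS.get(w, w) for w in words)

def respond(statement: str) -> str:
    # One hypothetical pattern, matching the "I don't like my mom" example.
    m = re.match(r"(?i)i don't like (.+)", statement.strip())
    if m:
        return f"Why don't you like {reflect(m.group(1))}?"
    # Generic fallback: turn any statement into a prompting question.
    return f"Why do you say that {reflect(statement)}?"

print(respond("I don't like my mom"))  # → Why don't you like your mom?
```

The real ELIZA had a larger library of patterns and ranked keywords, but the core mechanism really is this simple: no understanding, just pattern matching and pronoun reflection, which is what makes people’s willingness to open up to it so striking.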
So it’s useful that we will engage with these systems on a social level, because it’s engaging for people and you can get people to actually use the systems more. But I do wonder whether there’s any effect on our behavior, on our long‐term behavioral development.
It really gets interesting when we start talking about kids. You have these systems like Siri and Alexa that kids are interacting with. And if these systems are simulating lifelike behavior or a real conversation, then that could actually influence kids’ behavioral development and the way that they start to converse with other people. There are some examples of this, just anecdotally, that we have so far. Like there was an article in the New York Times a few years ago about this kid, this autistic boy, who developed a relationship with Siri. And the mom was like this is the best thing ever. Like, Siri is infinitely patient, will answer all of his questions. Also the voice recognition is so shitty that he’s had to learn to articulate his words really well, and it’s made him a better communicator with other people because of Siri.
But then on the other hand you have stories like this one guy wrote an article a few months ago about how Alexa was turning his kids into assholes, because she doesn’t require any please or thank you or any of the standard politeness that you want your kids to learn in conversing with others. So you know, I think it is an interesting question. The more we’re able to simulate a real conversation with these systems, the more that might get muddled in our subconscious in some way.
One of the things that I think is really necessary is that we need to study more what impact this can have on our behavior. So we definitely need to be studying these interactions and studying the effects of these interactions. It’s a similar question to violent video games or pornography—these are questions that have come up again and again, but I think that these systems bring them to a new, more visceral level in our psychology. So we need more research, and then depending on what the answer to that specific question is, we might need to think about design, and use, and maybe even policy for regulating these agents.
Part of the problem is also that this is such an interdisciplinary problem, or all of the questions that are coming up are so interdisciplinary. You need technologists; you need policymakers; you need users; you need researchers who understand psychology, who understand technology, who understand sociology. All of this is coming together and you need people talking to each other.
I would really like us, in the next few years, to lean into the positive effects of the technology and develop structures, the way that we have in other fields like medicine, to ensure that we’re using the technology in a responsible, ethical way.