Cynthia Breazeal: I think there are countless amazing opportunities for artificial intelligence and its impact on society. One of the areas I’m truly the most excited about is education. We know that there’s tremendous inequity in the quality of education worldwide. We know that there are over a billion people who can’t even read at an adequate level, and as the Internet spreads they’re going to have limited ability to access and leverage its benefits if they can’t read. And with AI there’s the potential for deeply personalized learning experiences for people of all ages and stages—even for workforce retraining, right.
Challenges around that always go to issues of privacy. If you’re going to try to create AIs that deeply understand us in order to personalize and adapt and optimize to us, they’re going to have to learn about us. And of course that’s the double-edged sword: what should the technology be learning and adapting to in order to benefit you, and how does it protect your privacy and your security so that you’re not put at risk in unintended ways? There are many diverse voices that need to be a part of that conversation.
In the case of education, I have a project right now where we’re intentionally leveraging AI and technology to bring literacy to the world’s most under-resourced communities. And there the stakeholders are the children, the teachers, and the parents. We’re building a network of scientists and content providers who want to create technologies that help these populations, and eventually governments—anyone who would eventually become a stakeholder in that kind of technology. We are intentionally building a network of people from all of those perspectives in order to do that.
We really want to show that AI is not just for the few who can master this technology; we’re really going to apply it with a humanitarian value system. We want to understand how to design this technology, and we want to understand, from working with those stakeholders across all these different areas, how best to integrate it. We’re going to learn from that process, because you’re always going to learn from the process of integration. There are very few cases where you can drop a technology into a working human system and have it just work. It takes time to iterate and develop, and iterate, and learn and learn, and then you finally come to solutions that work.
And if we’re creating AI for human beings, we’d better appreciate the expanse of human experience—how we think, how we learn, how we make decisions, what matters to us—in order to design technology that supports those values.