Increasingly we’re using automated technology in ways that support humans in what they’re doing, rather than having algorithms work entirely on their own, because they’re not yet smart enough to do that or to handle unexpected situations.
Ethics & Governance of Artificial Intelligence
presented by Beth Altringer
I teach my students that design is ongoing risky decision-making. And what I mean by ongoing is that you never really get to stop questioning the assumptions that you’re making and that are underlying what it is that you’re creating—those fundamental premises.
presented by Chinmayi Arun
In a world of conflicting values, it’s going to be difficult to develop values for AI that are not the lowest common denominator.
presented by Barbara Grosz
I think one of the things I want to say from the start is it’s not like AI is going to appear. It’s actually out there, in some instances in ways that we never even notice.
Machine learning systems that we have today have become so powerful and are being introduced into everything from self-driving cars, to predictive policing, to assisting judges, to producing your Facebook news feed and deciding what you ought to see. And they have a lot of societal impacts. But they’re very difficult to audit.
presented by Cynthia Breazeal
I think there are countless amazing opportunities for artificial intelligence and its impact on society. I think one of the areas I’m truly the most excited about is education.
presented by Malavika Jayaram
I think developments in artificial intelligence do pose a strong challenge for humanity. At a very fundamental level, people don’t quite understand what artificial intelligence is, yet it’s used as a buzzword, as if it’s going to solve every single problem.
presented by Iyad Rahwan
Some of the long-term challenges are very hypothetical—we don’t really know if they will ever materialize in this way. But in the short term I think AI poses some regulatory challenges for society.