Malavika Jayaram: I think developments in artificial intelligence do pose a strong challenge for humanity. I think at a very fundamental level, people don’t quite understand what artificial intelligence is, yet it’s used as a buzzword that’s going to solve every single problem. You have a very binary sort of treatment: either it’s all wonderful, it’s all great, and it’s going to solve every problem, or robot armies are going to kill everyone.
I think the first challenge that we have is even the vocabulary that we use to talk about developments in AI. I see a lot of people in Asia (and also elsewhere in the world, to be fair) who use words like “algorithms,” “big data,” “analytics,” “artificial intelligence” to all mean pretty much the same thing. They use them as interchangeable synonyms, and I think that does all of these technologies a disservice because they’re not necessarily the same thing. You can have automation that is not AI-driven. You can also have AI that is not just about automation. So I think it’s a technology or a set of technologies that on some level are very very opaque and inscrutable, yet they’re being talked about as if it’s the most common, obvious, everyday, ubiquitous thing.
Really what we’re trying to do is look at the impact of AI, specifically on Asian countries. And I think even within Asia it’s not a monolithic thing where, you know, all of Asia is going to be treated the same way or is going to react the same way. Within Asia you have countries that are going to be early adopters of AI, that are very geared up for advanced technologies. So countries like Korea, Japan, Hong Kong, and Singapore are probably going to be a little better equipped. And I think a lot of the poorer, developing, emerging economies are not quite there yet. I don’t think they quite understand what’s going to hit them when it does, and I think there’s a huge role for academia to play in all of this: to make sure that as AI develops, it has an ethical backbone, that it’s implemented responsibly, and that all the right stakeholders are involved in the decision-making about how these technologies are deployed. And I think that really needs to be a very, very robust conversation. It can’t just be the technology companies setting the standards, with governments and academics and social scientists having no say in how this happens.