In a world of conflicting values, it’s going to be difficult to develop values for artificial intelligence that are not the lowest common denominator. In Asia particularly, we have a lot of countries that believe that governments are the best way to make decisions for people, and that individual human rights can be voiced only by the state. In that context, if we are trying to come up with artificial intelligence that recognizes individual human rights, that looks to empower citizens and users, and that creates transparency, it’s going to be really challenging to come up with an internationally coordinated regime that does this.
People have developed AI that is predictive. People are researching ways to make sure that AI is able to target advertisements at people depending on their preferences, the devices they use, and the routes they take. That kind of predictive AI can very easily be used for surveillance. And it’s a fact that states in Asia, including India, are investing very heavily in mass surveillance and creating large centralized databases that they haven’t yet fully worked out how to sweep.
In India we’ve also had news of the state using drones to monitor assemblies of people in public places, just to make sure that nothing goes wrong. We’ve had news that the government is developing social media labs that are supposed to monitor online social media to see what people are saying and what subjects are trending. In that context, again, the question we’re asking ourselves is: when the state chooses to use its resources to get AI to do these things, how far is AI going to be used to control and monitor citizens as opposed to enabling them? In democracies like ours, the balance of power between the citizen and the state is really delicate, and there is great potential for AI to tip that balance in favor of the state.
While it’s important to make sure that we don’t chill innovation, it’s also important to be cautious and to make sure that technology doesn’t drag us down a dark path. We’ve got examples from history, like the Manhattan Project and the way in which technology was used during the Holocaust, to remind us that if we’re not careful about what we do with technology, it can be abused in ways that we will come to deeply regret. So it’s necessary to make sure that human rights, political theory, and all the other disciplines that understand what it means to be human and how to engage with humans are involved in the design of AI.
If we don’t work out a way for citizens to ask the right questions about AI and to ensure accountability every time AI is created and used, we might be heading toward the world that Orwell predicted. That would be really unfortunate, because new technology should lead to a better world, not a more controlled or more unequal world.
As you know, technology is easily sold to people, and it moves very quickly around the world. So it’s really important to intervene in Asia at the design stage. People sometimes have the best of intentions, but because of the way in which they’re educated, or the way in which they’re taught to think, the technology they design can end up being really damaging to the world. Conversely, it could end up being really beautiful as well, and that’s why it’s so important that we engage with AI right now and help the people designing it to think of it in a way that imagines a better world.