In a world of conflicting values, it’s going to be difficult to develop values for artificial intelligence that are not the lowest common denominator. In Asia particularly, we have a lot of countries that believe that governments are the best way to make decisions for people, and that individual human rights can be voiced only by the state. In that context, if we are trying to come up with artificial intelligence that recognizes individual human rights, that looks to empower citizens and users, and that creates transparency, it’s going to be really challenging to come up with an internationally coordinated regime that does this.

People have developed AI that is predictive. People are researching ways to make sure that AI is able to target advertisements at people depending on their preferences, the devices they use, the routes they take. Now, that kind of predictive AI can very easily be used for surveillance. And it’s a fact that states in Asia, including India, are investing very heavily in mass surveillance, and they’re creating large centralized databases that they haven’t fully worked out how to sweep as yet.

In India we’ve also got news of the state using drones to monitor assemblies of people in public places, just to make sure that nothing goes wrong. We’ve got news that the government is developing social media labs that are supposed to watch online social media to see what people are saying and what kinds of subjects are trending. And in that context, again, the question we’re asking ourselves is: when the state chooses to use its resources to get AI to do these things, how far is AI going to be used to control and monitor the citizen as opposed to enabling the citizen? Because in democracies like ours, the balance of power between the citizen and the state is really delicate, and there is great potential for AI to tip that balance of power in favor of the state.

While it’s important to make sure that we don’t chill innovation, it’s also important to be cautious and to make sure that technology doesn’t drag us down a dark path. We’ve got examples from history, like the Manhattan Project and the way in which technology was used during the Holocaust, to remind us that if we’re not careful about what we do with technology, it can be abused in ways that we will come to deeply regret. So it’s necessary to make sure that we have human rights, political theory, and all the other disciplines that understand what it means to be human and how to engage with humans involved in the designing of AI.

If we don’t work out a way in which citizens are able to ask the right questions about AI to ensure accountability every time AI is created and used, we might be heading towards the world that Orwell predicted, and that would be really unfortunate, because new technology should lead to a better world and not a more controlled world, or an unequal world.

As you know, technology is easily sold to people, and it moves very quickly around the world. And so it’s really important to intervene in Asia at the stage of design. People sometimes have the best of intentions, but because of the way in which they’re educated, or the way in which they’re taught to think, the way in which they design technology can end up being really damaging to the world. Conversely, it could end up being really beautiful as well, and that’s why it’s really important that we get into AI right now and help the people that are designing it think of it in a way that imagines a better world.

