Susan Crawford: Now, Tim Hwang, a cofounder of ROFLCon, also of the Awesome Foundation

Tim Hwang: For the arts and sciences.

Susan Crawford: For the arts and sciences, that's great. Also the Institute on Higher Awesome Studies and the Web Ecology Project. He is well known to the Berkman family, and is here to give an entirely different spin on this question. Thanks, Tim.


Tim Hwang: Hi everybody. I am not here representing the Web Ecology Project or the Awesome Foundation for the Arts and Sciences or the Institute on Higher Awesome Studies or ROFLCon. I'm here representing the Pacific Social Architecting Corporation, which is a slightly ominous name for a kind of fun project that we started early last year, looking specifically at the use of bots in shaping social behavior online.

And so, this actually started initially as a competition called Socialbots. And it was a really simple idea. Basically, we identified a group of users on Twitter and we said, as a coding challenge, like social battlebots: write a bot that will embed itself in this network, and we will score you based on how well these bots are able to achieve some kind of social change, either in the pattern of connections between people or in the things that people talk about.
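
To make that concrete, here is a toy version of such a scoring rubric in Python. The point weights are invented for illustration; they are not the contest's actual rules.

```python
# Toy scoring rubric for a socialbot competition: reward a bot both for
# the connections it attracts and for the conversation it provokes.
# The weights (1 point per follow-back, 3 per reply) are illustrative
# assumptions, not the actual contest rubric.
def score(follow_backs: int, replies_received: int) -> int:
    return follow_backs + 3 * replies_received

# Example: a bot that earned 25 follow-backs and received 10 replies.
print(score(25, 10))  # 55
```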

And so we conducted this competition with three teams: one from New Zealand, one from Boston, and one from London. The teams wrote a variety of bots to go into this network and try a bunch of different things. This initial experiment was pretty easy. The idea was basically to see whether or not you could get people to connect to the bot and talk to the bot.

So the winning New Zealand bot used a really simple idea. Basically, it had a database of generic questions and generic responses. So it'd say things like, “That's so interesting, tell me more about that,” right. A statement that could be the response to anything in a conversation. And it had no AI at all. It just randomly chose these conversational units.

And so it was really fascinating. It got into these very long conversations with people online. This is a simple conversation. James M. Titus is the bot here, and you read the conversation from the bottom to the top. And so James says, “If you could bring one character to life from your favorite book, who would it be?” The person responds, “Jesus.” And then they get into a very long, kinda continuous conversation about this. This is only a few interchanges of a much longer conversation.

What's interesting is that the bot here actually has no AI. It just randomly chooses from this database to hold this conversation. Some of you may use this tactic yourselves at various parties.
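
A minimal sketch of that tactic in Python, with invented strings standing in for the team's actual database of conversational units:

```python
import random

# Minimal sketch of the winning approach: no language model, no parsing,
# just a pool of all-purpose conversational units chosen at random.
# These strings are invented stand-ins for the team's actual database.
OPENERS = [
    "If you could bring one character to life from your favorite book, who would it be?",
    "What's the best thing that happened to you this week?",
]
GENERIC_REPLIES = [
    "That's so interesting, tell me more about that.",
    "Huh, I never thought about it that way. Why?",
    "Really? What makes you say that?",
]

def start_conversation() -> str:
    return random.choice(OPENERS)

def reply_to(incoming_tweet: str) -> str:
    # The incoming tweet is ignored entirely; any reply fits anything.
    return random.choice(GENERIC_REPLIES)
```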

Another bot that we were using, which was quite interesting as a model, basically didn't use any AI at all either. What it did is it hired people on Mechanical Turk to write its content. So it said to someone on Mechanical Turk, “Here's a penny. Write something about your breakfast in 140 characters.” It takes that content and then pushes it out as its own.

The best part is this bot can beat the Turing Test, because you can ask it a direct question. You could say, “Bot, what did you have for breakfast today?” The bot will take your question, give it to a human to answer, get the response back, and then push it back at you. And so it behaves in very human-like ways.
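
The relay pattern is simple enough to sketch. Here, ask_worker is a hypothetical stand-in for posting a Mechanical Turk HIT and waiting for a worker's answer; the actual crowdsourcing API calls are omitted:

```python
# Sketch of the relay pattern: the bot generates nothing itself; every
# tweet is written by a human worker. ask_worker() is a hypothetical
# placeholder for posting a HIT and polling until a worker answers.
def ask_worker(prompt: str, reward_usd: float = 0.01) -> str:
    raise NotImplementedError("wire this up to a crowdsourcing API")

def compose_tweet() -> str:
    # Unprompted content: pay a penny for a human-written status update.
    text = ask_worker("Here's a penny. Write something about your breakfast in 140 characters.")
    return text[:140]

def answer_question(question: str) -> str:
    # Direct questions are relayed to a human verbatim, so every reply
    # really is a human response, just not the bot author's.
    return ask_worker(question)[:140]
```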

And what was most remarkable to us, actually, is that we started this competition and we ended up with a network that looked like this after two weeks. So, the colored dots here are the bots. The colored lines are the connections within this network of around 500 people. And that was surprising to us, because we realized that what was happening was we were essentially designing software that could reliably change the pattern of connections or the patterns of behavior of people online. And if we could do this, imagine all the other things we could do.

And so the project, the Pacific Social Architecting Corporation, which people say is both a really fun name and also a really scary name (which actually goes pretty well with the project), is trying to do two things. One of them is to monitor uses of bots for this purpose and design countermeasures. And that's actually a really big one, particularly because you've seen the increasing deployment of bots to try to push discussion or otherwise shape social networks.

And then the other one is to find out what we could actually do at the limits of this, because we feel that there are some really powerful and really great uses of this technology as well.

So this is a recent snapshot from an experiment that we're doing. You can't see it too well, but we currently have two groups of 10,000 people. And the idea is that the bots are actually stitching them together over a three- to six-month period. They're making introductions. They're exposing people to content that they're not usually exposed to. And the idea is that over a six-month period you actually create a social scaffold. Basically, these bots will serve the purpose of introducing these two groups to one another. And once the connections are built, you can deactivate the scaffold, right, leaving the community that you wanted to create.
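
A toy sketch of that scaffolding loop, with invented handles and introduction text:

```python
import random

# Toy sketch of the scaffold: each round, a bot picks one user from
# each group and introduces them to each other. The handles and the
# message are made up for illustration.
GROUP_A = ["@alice", "@arun", "@aki"]
GROUP_B = ["@bea", "@boris", "@bintou"]

def introduction_tweet() -> str:
    a = random.choice(GROUP_A)
    b = random.choice(GROUP_B)
    return f"{a} you and {b} keep posting about the same things. You two should talk!"

# Run a few rounds a day for three to six months; once enough
# cross-group ties exist, retire the bot. The scaffold comes down and
# the community it held together remains.
for _ in range(3):
    print(introduction_tweet())
```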

Which leads to all sorts of intriguing possibilities. If you want people to be more interested in current events, for instance. Or you could design bots to detect instances of astroturfing and call that out. Bots can be used against bots, as well.

And so something of what we're envisioning, basically, is a kind of… I've got to come up with a better name for it, but “social security,” right: like computer security, but for the social space. Unfortunately that namespace is taken up in a really big way. But the concept is basically that you treat social networks as if they're computer networks. And then you envision a future in which people are not only trying to compromise the behavior of these networks but also to protect them against undue influence as well.

So, one of the projects that we're currently working on is this idea of social penetration testing. If you're familiar with computer security, penetration testing is the practice of finding the vulnerabilities in a network. And so we're designing a swarm of bots right now that could potentially test out a network to see where the cognitive vulnerabilities are, right. Who is the most influential person? Who is the worst at evaluating the quality or the reality of information? And if you can do that, you identify a cognitive hole in that network. Potentially this is someone who could feed untrue information to the rest of the network and not be very good about countering it. And we think that's really interesting from the point of view of hardening these social spaces, potentially against influence attacks, if you want to envision it that way.

So I know I only have three to five minutes, but I figured I'd give a quick overview. Thank you very much for your time.
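
A rough illustration of the social penetration-testing idea above, on a tiny follower graph. This sketch assumes networkx, proxies influence with PageRank, and proxies gullibility with a made-up per-user score; none of these choices is the team's actual method.

```python
import networkx as nx

# Sketch of the "cognitive hole" idea on a tiny follower graph.
# Influence is proxied by PageRank; gullibility is proxied by an
# invented per-user score (say, the fraction of planted probe items
# a user passed along unchecked). Both proxies are assumptions made
# purely for illustration.
G = nx.DiGraph()
G.add_edges_from([
    ("carol", "dan"), ("erin", "dan"),
    ("dan", "frank"), ("frank", "erin"),
])  # an edge u -> v means u follows v

gullibility = {"carol": 0.1, "dan": 0.7, "erin": 0.2, "frank": 0.4}
influence = nx.pagerank(G)

# The riskiest user is both influential and bad at vetting information.
risk = {user: influence[user] * gullibility[user] for user in G}
print(max(risk, key=risk.get))  # -> "dan"
```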

