Hello. My name is Mike. I work at Falmouth University. I’m going to tell you a couple of stories today which will hopefully be a bit about Twitter bots and a bit not. And they’re going to seamlessly merge into a finale where I’m going to say something which I thoroughly encourage you to ignore.

First I want to tell you about something called The Book. I’m going to be talking about a book and then the Book, so it’s going to get a bit confusing, and I apologize. But about ten years ago, towards the end of my secondary school career, a friend of mine lent me a book. It was called The Man Who Loved Only Numbers, and it had converted her to mathematics, basically.

It was about this guy, Paul Erdős. He was an eccentric, but legendary, Hungarian mathematician who was incredibly well-known for traveling throughout America and other countries visiting his colleagues in the dead of night and just turning up. And as soon as they opened the door he would walk through and immediately start talking about a mathematical problem. He published hundreds of papers with people. He was incredibly prolific. You may have heard the term Erdős Number. It’s the number of degrees of separation you have between yourself and Paul Erdős, based on co-authorships, basically. People strive to have the lowest number possible.

“I’m not qualified to say whether or not God exists,” Erdős said. “I kind of doubt He does. Nevertheless, I’m always saying that the SF has this transfinite Book—transfinite being a concept in mathematics that is larger than infinite—that contains the best proofs of all mathematical theorems, proofs that are elegant and perfect.” The strongest compliment Erdős gave to a colleague’s work was to say, “It’s straight from the Book.”
Paul Hoffman, The Man Who Loved Only Numbers

He contributed many things to mathematical culture. This is my understanding as someone who’s certainly not a mathematician. But my favorite one is his concept of something called the Book. This is a quote from, uh, not the Book but the book about the Book. You don’t need to read the whole quote, but there’s a bit where he says he has this concept of God, which he doesn’t believe in, having this transfinite book. And in this book, all the best proofs from mathematics, all the most elegant and beautiful ones, are written. The greatest compliment he could give you was he looked at something you did and said, “That is from the Book.” As if you’d tapped into something that was so fundamental to the universe, you’d seen something that couldn’t be better, that couldn’t be more perfect, you know. In heaven there is a tome that has your proof written in it.

I love this concept. I don’t do maths, though, so I get jealous of things like this that are very romantic. And I think there’s a concept of Twitter bots having a Book. And if I have time at the end I’ll talk about another book about Twitter bots, but I realize we’re pressed for time. So, I think a lot of Twitter bots are about patterns and templates. This is Cheap Bots Done Quick; you may have heard of it. So, a lot of bot-making relies on templates. It’s really hard to write in actual language, so often we cut things up and leave holes and insert words in, which is great. I use it in almost all of my bots. And a lot of bot-making also relies on patterns. Often linguistic bots, but other bots too.
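
To make the “holes” idea concrete, here is a minimal sketch of that template-filling approach, in the spirit of the Tracery grammars behind Cheap Bots Done Quick. All the rules and vocabulary below are invented for illustration; real Cheap Bots Done Quick grammars are written in JSON, but the expansion logic works the same way.

```python
import random

# Invented example grammar: templates with #holes# naming other rules.
RULES = {
    "origin": [
        "I saw #adjective# #creature# today and it #verb#.",
        "Never trust #adjective# #creature#.",
    ],
    "adjective": ["a luminous", "a suspicious", "an ancient"],
    "creature": ["heron", "vending machine", "cartographer"],
    "verb": ["wept", "glowed", "apologized"],
}

def expand(symbol):
    """Pick a random template for `symbol` and fill each #hole# recursively."""
    text = random.choice(RULES[symbol])
    while "#" in text:
        before, name, after = text.split("#", 2)
        text = before + expand(name) + after
    return text

print(expand("origin"))  # e.g. "Never trust an ancient vending machine."
```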

Understanding patterns in data or language or the way humans use language…and the last talk by Esther was really on-topic for me, as well. Because understanding some of those patterns, even in a small way, is quite magical when you get it right. Because there are so many of them. And understanding language is really, really hard. And when you do it, there’s something really wonderful about it.

This is a bot called @wikisext by thricedotted. And I’m no Paul Erdős, but I feel like @wikisext could be said to be from the Book of Twitter bots. It understands something about the data it’s using. It understands something about the tropes that it’s trying to insert that data into. And the result is something really wonderful. I really like this interaction with the bot, but you should go and look at the bot and see its other tweets, because they’re wonderful.

As I said, @wikisext in particular takes data, and there’s a structure that thricedotted has understood and woven into this new form, and that is exciting. I get a feeling of excitement when that happens. And I feel like it’s akin to how I see mathematicians talk about number theory, which is the field Erdős worked in.

This is one of my bots replying to one of Darius’ bots. Darius wrote a bot called @MuseumBot, which posts an exhibit from the Met every six hours. And I wrote a bot called @AppreciationBot, which pretends to say something intellectual about it. There’s a tiny grain of intelligence in there, and then the rest of it is bluster and bluff, and posturing and weasel words and phrases that don’t really mean anything but they sound like they might. This is what I do when I walk around museums. I wouldn’t cast any aspersions on creators. This is definitely me and my own insecurities.

But this is actually quite a bad result from the bot, and people get disappointed when it goes wrong. So in this one, the bot’s misunderstood: it thinks that bowls exist inside grapes, but actually it’s that you would expect grapes to be in bowls, right? And what I’ve noticed is that when I try and have bots act in a very human way, especially when they’re talking in a flowery way like this, the result is a real clash when people realize that it got it wrong.
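
For a sense of how that inversion happens, here’s a toy sketch. This is not @AppreciationBot’s actual code, and the triple below is a made-up example in the style of ConceptNet’s (start, relation, end) assertions; the point is that the grain of intelligence is a directed relation, and reading it in the wrong direction flips the claim.

```python
# Hypothetical assertion: "a grape is found at the location 'bowl'".
triple = ("grape", "AtLocation", "bowl")
start, relation, end = triple  # relation is directed: start -> end

# Correct reading: the start concept belongs in/at the end concept.
print(f"I get a feeling of {start}s resting in a {end}.")  # grapes in a bowl

# Inverted reading: swap the two ends and the same flowery template
# now asserts that bowls exist inside grapes.
print(f"I get a feeling of {end}s resting in a {start}.")  # bowls in a grape
```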

Now I’m going to digress on another tangent briefly. But we’ll come back. Honestly.

The Painting Fool, “The Dancing Salesman Problem”, 2011

ANGELINA is the name of a piece of software that I made over the last six years for my PhD. I work in a field called Computational Creativity, where we’re interested in building software that engages in creative domains, either as a creator or assisting a creator. This is a piece of art painted by a piece of software.

When I started out in this field, I thought that there were lots of objective truths that we were going to find, that there was going to be some checklist of what constituted computational creativity, or creativity in humans. There was some formula that I could plug stuff into, and we’d get a good result out on the other side. And over time, obviously, I was taught by wonderful people, one of whom is in this room and who I’ll introduce you to briefly later, who made me realize that that’s not really what it’s about. Because there is no definition of creativity, really, and there’s certainly not one that we would all agree on.

Similarly, I thought that maybe there was a Turing Test vibe going on here, where we’d show people things that our software had created but not tell them that it was software, and if they couldn’t tell it apart from a human creation, then we would pull off the sheet and point and laugh at them and we’d say, “Haha! It was creative because you couldn’t tell the difference.”

And that doesn’t work either, because what computational creativity is really about is perception. It’s not about me, and it’s not even really about the software I write. It’s about what people think of the software. So once you’ve pulled the sheet off, that person still isn’t staring at your software. And if you have made them feel like there was some trickery involved or anything like that, their perception of the software, their appreciation of it, drops.

I was reading the Guardian website today when I came across a story titled “Obama to urge Afghan president Karzai to push for Taliban settlement”. It interested me because I’d read the other articles that day already, and I prefer reading new things for inspiration. I looked for images of United States landscape for the background because it was mentioned in the article. I also wanted to include some of the important people from the article. For example, I looked for photographs of Barack Obama. I searched for happy photos of the person because I like them. I also focused on Afghanistan because it was mentioned in the article a lot.
ANGELINA, descrip­tion of Hot NATO, 2012

ANGELINA was designed to make games. I am interested in games, so ANGELINA made games. And one of the really important things in making people feel like ANGELINA is engaging in a creative activity and engaging in a creative field is to get it to talk about its work. So this is some wall text that it produced to justify things that it was doing. This was template-based, like many of my Twitter bots are. And it would insert real pieces of data. So, it was using Barack Obama in this particular game, so it was telling you that so that you would hopefully believe in its creative decisions.

I also made my first Twitter bot with ANGELINA, because I thought it would be good if it engaged people in its creative process and showed that it was learning and reusing information. So ANGELINA would post images and ask people to give it word associations, so it could then use that knowledge when it wanted to use the image as a texture in a game, for instance.

And there were other parts of the perception of the software that I didn’t have any control over, and that was things like the press. So, the press really liked the narrative of ANGELINA as a headline. Things like “Angelina is an AI that loves Rupert Murdoch.” This had come out in an interview with me, and then out of context in a headline it sounds really great. And I included this slide because I wanted to show that the perception of software is not just governed by the people who write it. Once you put it out there, and I think George was getting at this with tools as well, it’s out of your hands. You can’t control it anymore.

Screenshot of several replies to ANGELINA asking for suggestions about a piece of wood, suggesting things like planks, floorboards, etc.

And that’s really important, because initially people were interacting with ANGELINA in a very standard way. Many of these people are my friends. They were behaving. They were doing very nice things. And then people said to me, “Well, when ANGELINA’s asking for help, what if I…lied to it?” And I said, “Well, you can do whatever you want. It’s okay.” They were kind of cautious about it. But the reason I said yes is because, as you know, Twitter users are unpredictable, and if this Twitter bot ever got bigger it would receive unusual and maybe malicious responses. So I said we might as well practice that now.

https://twitter.com/kadhimshubber/status/441911974004547584

So my friends kind of nervously started giving it incorrect responses. Kadhim now works at the Financial Times, but I assure you he’s a very serious gentleman.

And the thing is that I encouraged people to interact with it as naturally as they wanted. I encouraged them to make jokes with it, or to be a little playful in their responses. But in doing so, you’re encouraging them to develop a relationship with ANGELINA that it could never really live up to. Like, ANGELINA couldn’t respond in a playful way, because it didn’t understand what was going on. And that is fine. Lots of bots are like that. I’ve had conversations with @wikisext, and it doesn’t really understand me on a personal level. But the difference between @wikisext and ANGELINA is that ANGELINA was trying to present itself as a creator, and I was trying to do that. And that meant that I was trying to present it as something that people could engage with.

A human-looking but still recognizably artificial female robot.

And that kind of led to disappointment, because once you raise someone’s expectations, falling from those raised expectations is way more painful, and you fall far further than the point where you started. As soon as you start to promise things, once you can’t deliver on them, as you may have experienced in other aspects of your life or your Twitter bot making, people’s reactions end up being more negative sometimes. And before I click “next slide,” I’m going to bring everything together now, but I have to apologize, because I didn’t want to inflict this on you but it was just right for the talk. Okay?

Tay

So. Now. I looked up the definition of “hot take.” And it explicitly says “written.” And this is a talk, so it’s fine. I’m talking to you. This is no longer— It’s okay for me to do this, okay. And it’s brief, I promise you it’s brief.

Tay's avatar image (a pixelated and color-shifted photo of a young woman's face) overlaid with "Freshly microwaved hot takes from last month"

So, as you may know, and I really apologize if you don’t, Microsoft put a chatbot onto Twitter. It was a bit of a mistake. And various mistakes were made, including that one of the things it did was repeat verbatim things other people had told it, which is not great. And after the fact, there was lots of talk about whether this could’ve been prevented with better AI. And lots of people turned to the people in this room and said, “What could we have done?” We could’ve used filtered word lists, and we could’ve just not listened to humans.
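
For what it’s worth, the “filtered word list” idea is about as simple as it sounds. Here is a minimal sketch, with a placeholder blocklist, and no claim that this is what Microsoft did or should have done: the bot checks user input against a list before it is allowed to repeat any of it.

```python
# Placeholder blocklist entries: stand-ins for illustration, not a real list.
BLOCKLIST = {"badword1", "badword2"}

def safe_to_echo(user_text):
    """Return True only if no blocklisted word appears in the input."""
    words = {w.strip(".,!?").lower() for w in user_text.split()}
    return not (words & BLOCKLIST)

# The bot only parrots text that passes the filter.
for candidate in ["you seem nice", "you seem badword1"]:
    if safe_to_echo(candidate):
        print(candidate)  # prints only "you seem nice"
```

Word lists like this are easy to evade with creative spelling, so treat the sketch as the bare minimum people were asking for, not a full solution.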

And I think all of those solutions were good, but there was a bigger problem here that people didn’t really acknowledge. And that was that Tay was trying to be human. It was presenting itself on the same level as humans. It wasn’t saying, “I’m made of flesh and blood and I walk down the street,” and things like that. But it was saying, “Treat me as you would treat someone else who is human.”

The Ava robot from the movie Ex Machina touching another face hanging on a wall.

And the problem is that being human comes with a lot of strings attached. There are a lot of implications there. When @AppreciationBot starts to use phrases like “sometimes I think this” or “I get a feeling of” (that’s one of the phrases that @AppreciationBot uses), you’re promising things that you can’t back up. You’re raising people’s expectations. Once I encourage people to lie to ANGELINA or get playful with ANGELINA or read things into its games that it then can’t justify, you raise people’s expectations, which can be great, and in my field of computational creativity it’s actually very valuable to raise people’s expectations, I think. Because they engage more with the software as a creator. And often, we trust artists to justify what they’ve done. We actually trust them. We don’t press them and try and figure out if they were lying. People do that to ANGELINA a lot, and to a lot of other computationally creative software.

But the thing is, with Twitter bots and a lot of AI in pop science, it’s kind of like staying up late with your parents. Once you ask to be treated like a human being, you have to abide by a different set of rules. You have to be extra good. You have to be on your best behavior if you want to stay up late and watch the grown-up TV, like I used to get to sometimes.

Theodore from the movie Her seated glumly in front of a computer screen, captioned "People want to believe in AI, but their hearts are fragile."

And the second you misbehave, you get sent to bed. Because you didn’t play by the rules that you agreed to be judged by. And I think that sometimes we do this intentionally, like Tay did. And sometimes we do it unintentionally, like @AppreciationBot does, where you go so far in on the bluffing or the playfulness that we forget, I think, the most important thing about AI and its interaction with society, which is this: I am very fortunate to have met a lot of wonderful people over the last five years and talked to them about ANGELINA, and many of them are in this room. And the reaction to ANGELINA is wonderful, and people love ANGELINA and they love AI. And they want to believe in technology. They want to talk to bots. They want to believe in AI.

But they are also very easily heartbroken. And I think we have a duty to be very careful with what we let our bots say, what we let our bots do, because people get attached to these things, and we can’t stop them from being let down.

Don't Be Human

So, this is the lesson I want to leave with you, and I thoroughly encourage you to ignore me. But the next time you try and make your bot as human as possible, think about whether you could take another tack entirely.

Before I stop, I just want to say that I talked about The Book earlier. I am writing, with Tony Veale, who’s in the room right now, a book about AI and about Twitter bots and about computational creativity. And I’d love to speak to people in this small but lovely community about who they are and why they do what they do. So, we would love to chat sometime; if not here, maybe email me or tweet at me.

Thank you very much for having me here to talk.

Further Reference

Darius Kazemi’s home page for Bot Summit 2016.