Thank you very much for having me. I really appreciate the opportunity to come and talk.

As Alice pointed out, I'm a journalist and filmmaker. I just wanted to give a little bit of background on my own interest in this subject. I don't come from a coding background. My interest in technology really comes from my interest in popular culture, and my belief that if you want to understand popular culture, you really need to engage with technology and the questions that it poses in the world. Those questions are key for understanding not just how the world works, but our relationship with the world, our relationships with each other and ourselves, and issues of identity.

You look at any period in history, and imagery and metaphors are often drawn from popular science, and today there's no science more popular than computer science. We can see this from the pervasiveness of tools like Facebook, Google, and Twitter. And when you think about it, the algorithm is the key metaphor of the age. It interested me because of the idea of just how much of our lives can be algorithmized, or sort of routinized and automated. That's what I would like to talk about today.

One of the most famous statements made about the entertainment industry was made by the screenwriter William Goldman. Back in the 1980s, Goldman, who wrote Butch Cassidy and the Sundance Kid, and All the President's Men, and various films like that, was working on his autobiography, Adventures in the Screen Trade. One of the things that his editor was very keen for him to do was to look back on his time working in the Hollywood trenches and try and draw out a lesson that he would be able to pass on to film fans, or really anyone who wanted to follow in his footsteps.

He did, but the only lesson that he could come up with was the idea that when it comes to the entertainment industry, "nobody knows anything." He wasn't saying this necessarily to insult the people who had rejected scripts and things over the years, but rather the idea that until a film arrives on the cinema screen, nobody's able to predict whether it's going to become a hit or a flop.

As a filmmaker myself, I've always been really interested in this idea of whether or not we can predict hits. You speak to anyone who works in the entertainment industry, and everyone has their war stories of that film they were sure was going to become a hit which somehow became a miss. There are niche films which appeal to everyone, and perhaps more likely, films that are designed to appeal to everyone which somehow appeal to no one. Nobody has an unblemished record. And when you consider the facts, it's kind of difficult to blame them.

Back in the mid-2000s, there were two films which were doing the rounds in Hollywood, and I'd just like to talk you through them.

This is the first one. It was called Project 880. Project 880 was a science fiction film. It was directed by a very successful Hollywood filmmaker who'd had a string of hit films previously. In fact, the last film that he had made prior to Project 880 had become the first film in history to gross a billion dollars at the box office, and had racked up eleven Academy Awards at the Oscars. However, he hadn't really done very much for the decade prior to making Project 880. He was also asking for quite a lot of money to make it: $237 million. The film he was proposing to make didn't have any major stars in it, and it was shot in the then-experimental 3D format. Nonetheless, he was given the money to make the film, and when it arrived in the cinema it was labeled, in the words of an early reviewer, "the most expensive American film ever made and possibly the most anti-American as well."

Let's jump to another. Project X was doing the rounds at pretty much the same time as Project 880. Like Project 880, it was also a science fiction film, again from a very successful director, who had previously directed Wall-E and Finding Nemo, and had worked on all of the entries in the hugely successful Toy Story series. It was based on a classic children's story, and the script was co-written by a Pulitzer Prize-winning author. He was asking for quite a lot of money as well, a shade more than the director of Project 880: $250 million. And again, he was also shooting in the experimental 3D format. He was also given the opportunity to direct the film.

What is interesting to me about this is that on paper, both of these seem like they should be hits. In fact, they sound like they should be fairly similar hits. Both from successful directors, both mainstream Hollywood films, both science fiction movies, both 3D, etc.

However, they came to slightly different fates. Project 880, as I'm sure a number of you will have guessed if you're film fans, turned out to be James Cameron's Avatar, which became the first film in history to earn $2 billion at the box office. So, a pretty good return on investment for the people who had bankrolled it.

Project X, on the other hand, didn't become the next Avatar, but rather became the first John Carter, a film which was critically panned and lost $200 million at the box office, and actually resulted in the firing of the head of the studio that had made it, despite the fact that he had taken the job after the film was already in production.

As per William Goldman, nobody knows anything.

I was fascinated by this idea, and I started looking around for other examples of trying to predict hits and misses (presumably trying to predict hits) across the entertainment field. I found a great quote from Somerset Maugham which wasn't about moviemaking, but was actually about novel writing. He said, "There are three rules for writing a successful novel. Unfortunately, nobody knows what they are." This got me wondering: does nobody know what they are because Maugham was being facetious and they don't exist as rules, or does nobody know what they are because as humans we're not good enough at pattern recognition to be able to find the high-level patterns which dictate what's going to become a hit?

Fortunately, this is an area that computer science can help with. Here in London there's a fascinating company called Epagogix, which works with some really big movie studios in Hollywood and predicts how much films are going to make at the box office. It does this using a neural network. For those of you who don't know what a neural network is, it's a sort of vast artificial brain for exploring the relationship between cause and effect, or input and output, in situations where that relationship is complex, or unclear, or possibly unknown.
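To make that idea a little more concrete, here is a minimal sketch of such an input-to-output function in Python. Everything in it is hypothetical: the feature names, the weights, and the dollar figure are invented for illustration, and Epagogix's actual model is of course proprietary and far larger.

```python
import numpy as np

# Toy feedforward network: hypothetical script-feature scores in, a
# box-office estimate out. The weights here are random stand-ins; a real
# system would learn them from a database of past scripts and grosses.

def relu(x):
    return np.maximum(0.0, x)

def predict_gross(features, w_hidden, b_hidden, w_out, b_out):
    """Map a vector of script-feature scores to a predicted gross (in $M)."""
    hidden = relu(features @ w_hidden + b_hidden)  # hidden layer
    return float(hidden @ w_out + b_out)           # single output neuron

rng = np.random.default_rng(0)
n_features = 5                        # e.g. genre, star power, pacing...
w_hidden = rng.normal(size=(n_features, 8))
b_hidden = np.zeros(8)
w_out = rng.normal(size=8)
b_out = 50.0                          # baseline gross

script_scores = np.array([0.7, 0.2, 0.9, 0.4, 0.6])
print(predict_gross(script_scores, w_hidden, b_hidden, w_out, b_out))
```

The point is only the shape of the thing: scores go in one end, a number comes out the other, and training is a matter of adjusting the weights until past inputs predict past outputs.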

Essentially, they're given scripts by the movie studios they work with, and they also take scripts for films that they're not working on, just as a way of growing their database, and they divide each script into millions of different components: in fact, 30,073,680 unique scoring combinations. Thirty million is far more than you or I would ever be able to come up with if we were asked to write down the components of a successful film. We might come up with ten or fifteen at best, and struggle a little bit after that.

Epagogix, however, has been very successful for the studios that it works with. And interestingly, it doesn't just churn out a particular number and that's the end of it. It can also make creative decisions, because it can look at parts of the script where the yield is perhaps not where it could be, and suggest that you tighten up this moment in the fifteenth second of the fourteenth minute of the film, predicting that this will have a knock-on effect which will result in you earning X amount more at the box office. Epagogix is really a vision of a future in which machine logic can be embedded in the creative process.

So let's jump to a different field, one that from my perspective as an outsider seemed like it should be more straightforward to automate. Not because it's a less complex subject, but just because it's one that's more logical and more rule-based. In some ways, academic publishing would fall into this kind of category. But the area is law.

It stands to reason that these areas should be more predictable, and that it should be possible to come up with consistent rules that we would be able to use to predict outcomes and offer automated solutions to our everyday logical problems. It turns out, however (and this is perhaps something that you know from your own work within academic publishing), that something that seems like it should be fairly straightforward to automate doesn't always turn out that way. In fact, there was a fascinating study done last year which shows just how difficult it is to turn even the simplest of laws into an algorithm.

The task in question was to write an algorithm to determine whether drivers had broken the speed limit, and then to give them a ticket if it deemed that they had. As far as laws go, this seems like it should be a fairly straightforward one to automate. It's a fairly binary law: you either drive above a certain limit and get a ticket, or you drive under the limit and you don't get a speeding ticket. However, the study showed how difficult it is to automate even that.

For the experiment, fifty-two computer programmers were brought together, and they were all given two data sets. One data set showed the legal speed limit along a particular route, and the other showed the speed of a vehicle, a Toyota Prius, along that route, taken from an on-board computer. By comparing the two data sets, you should therefore be able to determine when the car had exceeded the speed limit. I should also say that this journey was a pretty typical commute to work; it was about a half-hour journey in total, and it was fairly uneventful. It was completed safely and without any incident.

When the fifty-two computer programmers were brought together, they were split into two groups, and each group was asked to do something slightly different. One group was asked to create an algorithm which would conform to the intent of the law, and the other group was asked to write an algorithm which would conform to the letter of the law. On the surface, both of these seem like they should be achieving the same end, but in fact they achieved vastly different results.

The "intent of the law" group issued the number of tickets that we would expect for something like this: between zero and 1.5. We would likely be slightly irritated if we got a ticket for driving to work, but at least it's within what we would expect to happen. The "letter of the law" group, however, issued a slightly more draconian 498.3 tickets.
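The gap between the two groups is easy to reproduce in miniature. The Python sketch below is my own toy reconstruction, not the study's actual code: the "letter" version treats every speed sample above the limit as an offence, while the "intent" version tickets only sustained speeding well beyond the limit. The speed trace and thresholds are invented for illustration.

```python
# Toy reconstruction of the two interpretations. Speeds are sampled once
# per second along the route; the limit is 30 throughout.

def tickets_letter(speeds, limit):
    """Letter of the law: every sample above the limit is an offence."""
    return sum(1 for s in speeds if s > limit)

def tickets_intent(speeds, limit, tolerance=5, min_duration=10):
    """Intent of the law: ticket only sustained, clearly excessive speeding."""
    tickets = run = 0
    for s in speeds:
        if s > limit + tolerance:
            run += 1
            if run == min_duration:   # one ticket per sustained episode
                tickets += 1
        else:
            run = 0
    return tickets

# A commute where the driver hovers around the limit without ever
# meaningfully speeding: roughly eight minutes of once-per-second samples.
speeds = [29, 31, 30, 32, 31, 30, 33, 31, 29, 31] * 50

print(tickets_letter(speeds, 30))   # → 300
print(tickets_intent(speeds, 30))   # → 0
```

The same drive yields hundreds of "offences" under a literal reading and none under a charitable one, which is the disparity the experiment surfaced.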

This astonishing disparity illustrates that the scenario where legal cases are decided by algorithm is perhaps not quite as close as we assume it is. But it is something that we're going to have to grapple with going forwards. Just this week, in fact, we saw Google rolling out the first mass-market (at least in name) prototype for its self-driving car. And we've also read a lot about ambient law, which is the idea that laws can be embedded within the devices and environments around us. So we might have a car which is able to determine whether its driver is over the legal alcohol limit and then decide not to start as a result. Or we could have the smart office, which regulates temperature, and if a certain limit is reached or exceeded, decides to sound an alarm or turn off computers or something similar to that.

But this shows why that is going to be such a challenging thing to achieve. Laws aren't based on hard and fast rules, as it turns out, but rather on a sort of high level of what we could call intersubjective agreement. And implementing them without understanding not necessarily the code at the heart of them, but the humanity at the heart of these laws, can result in highly problematic situations.

Perhaps the single most revealing statistic from the speed limit experiment came afterwards, when the group was reassembled and was discussing what they had learned. What was particularly fascinating was when they were asked whether they would be happy to drive on the road under the conditions that they had just been responsible for coding. Of the "letter of the law" group, who had issued the draconian number that I mentioned earlier, 94% claimed that they wouldn't be happy to drive on the road under those conditions. In fact, only one said that they would, and only on the proviso that they had a backdoor which would enable them to somehow circumvent the laws that they had coded, which probably wouldn't be happening.

So let's jump now to one last area, from man-made laws to natural laws. What about behavior, and particularly, what about love? The idea that we might be able to program our own boyfriend or girlfriend isn't a new one in science fiction. I'm sure lots of you also grew up in the 80s and remember such films as Weird Science. More recently we've had a millennial take on that in Spike Jonze's film Her, which also deals with the idea of AI, and whether or not we could fall in love with an AI, and what the implications of that would be. In fact, there have been some high-level AI proponents, or AI experts, who have investigated this area and come to some interesting and potentially worrying conclusions: as algorithms get better and robotics get better, not only do they predict that an algorithmic lover might become possible, but also that in some ways it might become preferable, because we would be able to fine-tune our other half.

This is, for the most part, either Hollywood science fiction or the kind of stuff that's tacked onto the end of a PhD thesis as a sort of hypothetical ethical issue that we will be dealing with at some point down the line. However, I met a pretty interesting programmer while I was writing my book, a guy called Sergio Parada who lives in California. Sergio Parada started out his career working as a video game designer on the Leisure Suit Larry series of games.

For those of you lucky enough not to know what the Leisure Suit Larry video games are, it was essentially a 1990s series of bawdy adult video games in which you play a sort of affable loser as he progresses through a scenario of bedding an increasing number of partners. It was while Parada was working on the last entry in the series, which was in fact never released, called Larry Explores Uranus, that he came up with the idea of creating not a girl simulator, but a relationship simulator. He called it Kari, standing for Knowledge-Acquiring Response Intelligence.

One of the interesting things about Kari is that she's a chatterbot, which is an algorithm designed to simulate an intelligent human conversation, an idea which of course goes back to Turing. The interesting thing about Kari, especially given Parada's previous work, is that unlike a regular video game, in Kari there was no set narrative. It wasn't like you would reach the end of a level and that would be it. The reward for the player was a relationship which, as with any successful relationship, grows and develops and deepens over time. Of course, to ensure that this happened, Parada ensured that you would be able to modify Kari in a way that you may not be able to modify your significant other in the real world.

This was done by way of a series of sliders. The most romantic way to do this, obviously.

Screenshot of a settings panel for the Kari software, with a range of sliders for love, ego, libido and various topics of interest.

This means that Kari can live up to the prostitute's classic sales pitch that "I can be whoever you want me to be." You can literally fine-tune Kari to be your ideal partner. For example, if you notice that she's being a bit too aloof, you might lower her independence level. Or if you notice that she jumps too quickly from topic to topic, you might raise the number of seconds between unprovoked comments.
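A slider-driven persona is straightforward to sketch. The toy Python class below is my own illustration, with hypothetical parameter names loosely echoing Kari's settings panel; it is not Parada's actual implementation. Each "slider" is just a number that biases which canned response gets selected.

```python
import random

# Toy persona-with-sliders sketch. The parameter names (affection,
# independence) are invented for illustration.

class Chatterbot:
    def __init__(self, affection=0.5, independence=0.5):
        # Each slider is a value in [0, 1] that biases response selection.
        self.affection = affection
        self.independence = independence

    def reply(self, message, rng=random):
        # High independence: sometimes brush the user off (aloofness).
        if self.independence > 0.8 and rng.random() < self.independence - 0.5:
            return "I'm busy right now."
        # High affection: warmer phrasing of the same reply.
        if self.affection > 0.7:
            return f"I missed you! You said: {message}"
        return f"You said: {message}"

bot = Chatterbot(affection=0.9)
print(bot.reply("hello"))   # → I missed you! You said: hello
bot.independence = 1.0       # nudge a slider and the persona shifts
```

The real program's panel also covered libido, ego, and topics of interest; the point is only that the whole "personality" reduces to tunable parameters.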

For me, the most interesting aspect of Kari isn't just that it challenges our idea of what constitutes a relationship, but that in a way it makes us aware of the degree to which our relationships with one another are essentially high-level social algorithms which are acted out according to a step-by-step framework.

So if something like love can be automated, then what next? The question is sort of whether everything can be subject to algorithmization, and I think all of you are probably in a better position than I am to answer this question. The answer currently is "no." There are certain things that currently can't be carried out by algorithm. For example, image recognition frequently requires far more training examples than a child would need in order to recognize particular objects. Or marking essays in a subjective area like the humanities could also be challenging to automate.

An array of job titles: policework, architecture, advertising, teaching, etc.

But things are improving, or at least they're changing very, very rapidly. There are a number of fields which previously would've seemed impossible to automate, but which today have very successful companies working on automating, or algorithmizing, them. For example, you have a field like my chosen profession of journalism. There's a very successful company in America called Narrative Science which is working on creating algorithms that can compile news stories. Currently this is only used for compiling financial reports or low-league sports match write-ups. But long-term it could perhaps do more, and I know the CEO of Narrative Science has claimed that within the next ten to fifteen years, he believes it's going to win the Pulitzer Prize for investigative journalism. Maybe a sort of PR spin or wishful thinking on his part, but I think it raises some interesting questions.

Another area is music composition. In 2012, the London Symphony Orchestra took to the stage to perform the works of Iamus, a music-composing algorithm which has composed more than a billion compositions across a wide range of genres.

But some of our concern around this is about employment. Ten, or maybe fifteen, years ago it would've seemed ridiculous that a long-distance driver could have his or her job replaced by an algorithm. Today, of course, as we mentioned earlier, there's the Google self-driving car. This is clearly an area which, if not directly threatened within the next few years, at least has the potential to be.

But there's also a bigger, deeper question about whether there are parts of life that are simply too precious to automate, or too integral to our conceptualization of what it is that makes us human. To go back to the first example, if an algorithm can make creative decisions, as with Epagogix, what does that say about art and creativity, and about our own humanity? In the case of art, an algorithm doesn't have knowledge of its own mortality and the sense of urgency that comes with it. It might be able to achieve pleasing sounds, or string together pleasing images, but are these ever going to match the emotional complexity or the creative innovation of a painter or writer or filmmaker? Ultimately, writing this book left me with far more questions than it did answers, so I probably won't be able to give you definitive answers, but I can at least make an effort to do so if you have any questions.

Thank you very much.

Further Reference

The early review of Avatar mentioned near the beginning.

There was also a Q&A session after Luke's presentation.

