Samim Winiger: Welcome to Ethical Machines. We are your hosts…

Roelof Pieters: Roelof.

Winiger: And Samim.

Pieters: Ethical Machines is a series of conversations about humans, machines, and ethics. It aims at starting a deeper, better-informed debate about the implications of intelligent systems for society and individuals.

Winiger: For the second episode, we invited Jack Clark, the world’s first neural network journalist, reporting for Bloomberg, to talk with us. Let’s dive into the interview.

So how long have you been in the US for now?

Jack Clark: I’ve been here for about three years now. I moved here to join The Register, where I wrote about AI and databases. And then I got hired by Bloomberg, so now I’m helping to cover AI as well as more traditional enterprise companies.

The way I think of it is, neural networks are going to be fundamental to a very large amount of the AI that we’ll see and experience over the next few years. So I figure if I report on any company or individual messing around with this technology, then I can get a view of a good section of the AI world as it expands. And from a story point of view it’s very fruitful, because there’s both a lot of research happening and it’s creating some fascinating experiments and products that we can write about.

Pieters: So what do you think is the best approach to explaining these complicated topics?

Clark: I think you have to read everything. I spend about one or two hours a day reading the preprints as they come onto arXiv, reading the papers and studying that. And then I try to turn it into an analogy. Because no one knows what a neural network is outside of academia, but everyone knows that when you’re an extremely young child you’ll, like, pick up a flower and stare at it for several hours. And it’s this early form of learning that children do that is analogous to what we’re trying to do with some of these systems.

Winiger: You’ve heard major figures in deep learning and beyond blaming journalists lately for overhyping the issue, for mischaracterizing it. You’ve heard a lot of ugly words being thrown around. How would you confront these highly regarded figures criticizing journalism in such a broad way?

Clark: Ninety-five percent of the people making these criticisms have spent years studying artificial intelligence. They have a technical background. They probably have a PhD. They have years of experience of looking at very complex technology and coming away with objective and sort of applied applications of it.

Most journalists don’t have PhDs in machine learning. From the journalist’s perspective, you know, I’ve had to spend several years reading a lot of literature and doing a lot of mathematics and trying to teach myself this stuff, and I’ve made that investment. It’s difficult to find the time. So for people like Demis Hassabis or Yann LeCun or Geoffrey Hinton, one of the main responsibilities for them and their public relations departments should be to spend a lot of time with journalists and make sure that they just educate the journalists about how this stuff works. It yields a media that has an understanding of it. They have to be generous with their time just as we do in our writing about their subject.

Pieters: So what are the things happening right now that are unexpected or just interesting to you? What are the most exciting things?

Clark: There are two very exciting things to me. One is memory systems. You know, in recent years everyone has started to look at long short-term memory again. The appreciation for the role the hippocampus plays in consciousness is being applied to AI as well, to give us systems that can do long-term reasoning and multi-part pattern recognition.

The second one is about reinforcement learning being combined with deep learning in robotics. I mean, you saw just this week Fanuc took a stake in Preferred Networks. Preferred Networks have been doing reinforcement learning and deep learning applied to robotics platforms. They’ve read the Neural Turing Machine paper, they’ve read the Q-learning paper. They’ve also read work out of Pieter Abbeel and Sergey Levine’s lab at Berkeley on end-to-end visuomotor policy training.

So that has already created a situation where Fanuc has invested money, ABB has put money into Vicarious, and we have some startups I’ve just heard about who are all doing this. So this is exciting, you know. Especially after we saw the DARPA Robotics Challenge, where those robots looked so kind of drunk. They were falling all over the place. They were stupid, very very slow. And I spoke to Dr. Gill Pratt, who ran that, and he’s of the opinion as well that we’re going to get a huge increase in robotics capability from the application of sensing systems from neural networks.

Pieters: But connected to that, what would you recommend to a graduate student in machine learning? Stay in academia? Join one of the established corporate research labs? Or start your own startup?

Clark: I think we’re going to get deep learning artists at some point. Now, that is the worst idea in the world if you would like to have money or a career. But it is something that will happen. We’re going to get some artists who are using this generative stuff. I’ve seen work that you’ve done—I’ve seen a lot of it. I think that the field of new aesthetics we’re seeing from this could become very, very interesting once we have a better understanding of how to use the models.

To answer your actual question, though, there are two areas of possibility. One is custom accelerators. If you can figure out how to do learning well on FPGAs, then you can start to put classifiers on low-cost drones and things like that. That would be the area I’d recommend. But FPGAs are incredibly difficult. So, you know, good luck.

Winiger: So, recently we’ve seen lots of friends of ours who are working on deep learning libraries or key pieces of the puzzle getting hired immediately by Facebook, Google, etc. And so, as a student especially, you’re confronted with this reality right now. You can either get immediately hired by one of the big players, or go and do a PhD at an institution that is sponsored by one of the big players, or start a startup. To reformulate that question, which one of these is the less evil poison, in a sense?

Clark: Well, here’s the problem. This is a poison question, right. Because from a society view, everyone should stay in academia, because it begets the largest quantity of open research and the best teaching for the next generation. I don’t know about Europe so much, but in America, with the way funding is and competitive tenure situations, and just the misery of being a post-doc if you’re not at a top-tier institution? That’s such a hard life that the rational decision for many is going to be to go and work at Google or Facebook. Because you will get excellent training, you will get some of the best data you can access in the world, and you will be paid giant amounts of money. So, go for the company. But keep in mind that everyone in this community has a responsibility to do the best research they can in the most open way.

Pieters: Yeah. Connected to this is the question of biopolitics, you know, in a post-structuralist sense, where it’s about control of the landscape. Where you have all these people creating libraries in their free time, who are then getting hired by one of the big companies. It’s a land grab for talent, that’s clear. But at the same time it’s also a land grab for control of the resources at the software level. So that’s a trend, at least. Do you see this trend?

Clark: Yes, definitely. Partly it’s that AI is a relatively small community. You know, Yann LeCun and his friends at NYU all work on Torch, whereas DeepMind has done work on other libraries. Even the languages. Some people really like Lua, others are doing more stuff with Python. Some people who are very intelligent are just writing things in C. But I am afraid of those people, because I don’t know how you can do that. You know, there’s a diversity happening here.

Now, I don’t know how it gets fixed. Someone needs to grow up and commit the community to one or two of them. If we look at the history of software, that’s not gonna happen for a few years.

Winiger: Maybe pivoting from the corporate discussion: in the previous episode, where we had Mark Riedl, we had a long chat about generative tech in story generation, that kind of thing. And we touched briefly on generative journalism. And he mentioned that this was one of the few areas where generative text creation really is being deployed in industry in a large way. Which was eye-opening to me. I knew at the edges this was happening, but to hear it firsthand was very interesting. I mean, what do you think? Is it going to be a radically new journalism quite soon, or what do you think about that?

Clark: …Yes. [laughs]

Winiger: Right.

Clark: So number one, I work at Bloomberg. We obviously do very competitive stories. When we do an earnings story, we try to have a first version out within three or four minutes. We do that by writing incredibly detailed templates. We talk to people in the days running up to it. We have a whole team of people standing around with numbers, checking every one when we push it live.

Obviously this is something that is going to be increasingly automated. Because this is a job where I am trying to be like a computer. And whenever you’re doing that kind of job, you realize, “At some point a computer will do this.” The Associated Press already uses technology from Narrative Science to generate earnings reports for companies that they don’t cover.

The only problem that these tools have is that they can’t do context. So a tool will look at all of the indicators in the earnings release. It will look at the analysts’ recommendations. And it will make a sentiment decision based on whether the company beat or didn’t beat. The problem is that the market isn’t rational, so sometimes a company can beat on all of the analyst estimates but go way down, because buried somewhere in the release is a reference to how they’re changing accounting, or they’re writing down something, or whatever.
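To make that concrete, here is a minimal sketch of the kind of template-driven earnings writing and naive beat-or-miss sentiment call Clark is describing. It is only an illustration: the company, the figures, and the template are invented, and this is not Bloomberg’s or the AP’s actual tooling.

```python
# A minimal, hypothetical sketch of template-driven earnings coverage:
# fill a pre-written template with figures and make a naive "beat/miss"
# sentiment call from analyst estimates. Names and numbers are invented.

TEMPLATE = (
    "{company} reported quarterly revenue of ${revenue:.1f}B, "
    "{direction} the average analyst estimate of ${estimate:.1f}B."
)

def earnings_story(company, revenue, estimate):
    beat = revenue >= estimate
    direction = "beating" if beat else "missing"
    sentiment = "positive" if beat else "negative"
    story = TEMPLATE.format(company=company, revenue=revenue,
                            estimate=estimate, direction=direction)
    return story, sentiment

if __name__ == "__main__":
    story, sentiment = earnings_story("ExampleCorp", 10.4, 10.1)
    print(sentiment, "-", story)
```

Exactly as Clark notes, nothing in a template like this can catch the context buried deeper in a release, such as an accounting change or a write-down.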

Generative journalism will be a reality. We’re not quite there…yet, but it’s very clear to me that it’s going to foster a sort of winner-take-all situation, where if we at Bloomberg develop some small tools, I will be able to do stories much more efficiently, and that leaves more time for investigation.

The New York Times, in a prototype version of their new CMS, is using recurrent neural networks to suggest tags based on the story. So they’re already kind of augmenting articles with some of this machine intelligence. Which is a great idea.

Pieters: Well, and going the other way around, there’s this recent work on question answering systems, where newspapers that are very, very specific in their style are used, with both the summary and the actual article, to train a long short-term memory question answering system. Which basically is to say that if it can go one way, it should also be able to go the other way.

Clark: It should be able to. You may be aware DeepMind did some of the work there, and took in all of the Daily Mail, which is a large tabloid web site. And what they found is that if you put a phrase into the learned system, you know, “Does coffee cause…” or “Does eating lobster cause…”, every time the answer would be cancer, because the Daily Mail loves writing articles about how everything’s going to give you cancer. So that shows you how, even with a very large data set, there can be some problems that are very unpredictable.
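The work being referred to builds fill-in-the-blank “cloze” questions out of article and summary pairs. As a rough, hypothetical illustration of why a skewed source produces skewed answers, here is a toy version: it blanks an entity out of a summary to form a question, and shows how a lazy baseline that just returns the most frequent training answer will say “cancer” to everything, which is the failure mode described above. This is not DeepMind’s actual pipeline, and the data is invented.

```python
# Toy illustration of cloze-style question building and a frequency-biased
# baseline answerer. All data below is made up for the example.
from collections import Counter

training_answers = ["cancer", "cancer", "heart disease", "cancer", "obesity"]

def make_cloze(summary, entity):
    """Turn a summary sentence into a fill-in-the-blank question."""
    return summary.replace(entity, "@placeholder"), entity

def frequency_baseline(question):
    """Ignore the question entirely; answer with the most frequent entity."""
    return Counter(training_answers).most_common(1)[0][0]

question, answer = make_cloze("Study finds coffee causes cancer", "cancer")
print(question)                      # "Study finds coffee causes @placeholder"
print(frequency_baseline(question))  # "cancer", whatever you ask
```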

Winiger: Yeah, I mean, this is my experience as well, working with generative systems. You really have to rethink the design process as one of choosing inputs and outputs. Choosing the Daily Mail seems like an exercise in comedy more than anything else.

Clark: It’s funny, but one of the things that Google has been talking about a lot is that… and you may know more about this. The European Commission publishes huge documents, and it publishes them concurrently in twenty-seven different languages. So there’s an idea that not only can we use that as a very good store of text to learn concepts, but we can learn concepts as they cross from one language to another, because we have that mapping. And because it’s not just a French-German dictionary, it’s French-German-Italian-English all in one thing, you can learn a very complex, rich representation across the different cultures. So that seems fruitful to me.
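One simple way to picture this is the “translation matrix” trick: if you already have word vectors for two languages and a seed dictionary extracted from parallel text, you can fit a linear map that carries one embedding space onto the other. The sketch below is a hypothetical toy version with random stand-in vectors, not Google’s system, and a real multi-way parallel corpus would allow richer, jointly trained representations than this pairwise mapping.

```python
# A simplified sketch of learning a cross-lingual mapping from aligned text:
# given embeddings for aligned word pairs (fr_i -> en_i), fit a linear map W
# so that French vectors land near their English counterparts. The vectors
# here are random stand-ins rather than real trained embeddings.
import numpy as np

rng = np.random.default_rng(0)
dim = 50
n_pairs = 200

# Pretend these rows are embeddings of aligned word pairs from a parallel corpus.
fr_vecs = rng.normal(size=(n_pairs, dim))
true_map = rng.normal(size=(dim, dim))
en_vecs = fr_vecs @ true_map + 0.01 * rng.normal(size=(n_pairs, dim))

# Least-squares fit of W minimizing ||fr_vecs @ W - en_vecs||.
W, *_ = np.linalg.lstsq(fr_vecs, en_vecs, rcond=None)

# A new French vector can now be projected into the English space and
# matched against English vocabulary by nearest neighbour.
new_fr = rng.normal(size=(1, dim))
projected = new_fr @ W
print(projected.shape)  # (1, 50)
```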

Winiger: It just brings back a discussion I had a few days ago with [Guy Acosta?], one of the developers in deep learning. And he brought up this interesting idea that the trained weights of these nets can be seen in connection with what previously was the database. So the next oracle, in a sense, will be the one wielding the most pre-trained weights, and there will be a whole set of law cases unfolding soon where people get sued for the specialized training they do on top of pre-trained nets, and things like that. Which I found interesting. It just came to mind when you were talking just there.

Clark: Well, if you think about it, what we’re doing is turning very high-dimensional mathematical representations of a sort of large knowledge space into intellectual property. Which should be the most frightening idea in the world to anyone. This is the most abstract thing you could possibly try and turn into a capitalist object, and we’re heading in that direction. I don’t think that can work. I think that if you look at the way you encode information from a trained net, the legal cases will be hugely complex. But Google has been acquiring many patents, and so has IBM, and so has Microsoft. So we might get a Cold War scenario where there are no lawsuits, because all of them have enough patents to threaten each other with, you know, nuclear-bomb lawsuits. Who knows?

Winiger: I mean, it’s a horrible scenario. Nobody wants to see that, because I suppose it would stifle innovation, really.

Clark: How do we avoid it? You know, what are things that you guys think could be done to stop it happening?

Winiger: I think step one is to support the current openness. Because we see this marketing money rushing in, and obviously they’re smart marketing people. They try to set a cultural sentiment. And I think that’s one dial that as a society we can start to turn in the other direction. Upgrading the now somewhat old-sounding notion of the public domain. Especially in the US it sounds completely out of date, but it could quite easily actually be dialed back into fashion. So that’s one approach, I suppose. It’s a really hard problem, isn’t it?

Pieters: There are Creative Commons licenses, right? I mean, why does Google, for instance, take out a patent for all these different algorithms they develop? Why not launch them with a Creative Commons license, where there may be an attribution clause: okay, you have to attribute it to Google, so Google is still protected on the attribution principle, but it would still actually be open.

Clark: Well, it really is, unfortunately, this horrible sort of mutually-assured-destruction game theory scenario, where Google may not have wanted to patent this, but what it may have done (which companies do regularly) is looked at all of the patents IBM has on AI and said, like, “Holy moly. If we don’t have some AI patents, we could be in a legally weak position with respect to IBM should there be a lawsuit.” So it creates this scenario where even if it’s not a good idea, you’re going to amass these patents, because as a corporation you have to do rational things for your investors. And any investor would kind of rightly say, “Hey, Google. By not patenting any of this stuff, you’re putting yourself at a disadvantage to your competitors in the marketplace, who are amassing the tools necessary to mount a legal attack.” It’s a bit depressing, but I think that is the rational corporate response.

Pieters: One of the big stories is also the more future-oriented concerns about the development of AI leading to mass unemployment, or to Terminators, etc. But let’s stick to the mass unemployment scenario for now. It’s being argued that if there is mass unemployment there needs to be a solution for the people who are unemployed, which might be something like a minimum income. So where do you stand on this issue?

Clark: It’s a huge problem. I’ve read a lot of research by David Autor, who is a great economist, I believe at MIT. He has written a lot about this. His analysis is that we don’t have the data to be able to project that AI could lead to large-scale unemployment. But we also don’t have the data to say that that won’t happen. And then if you look at people like Andrew Ng, who does AI at Baidu, you know, he said to me, “When the US went from 50% of people working in farming to 2% over fifty years, that was fine, because the farmer knew that their son or daughter should go to college, because farming would be mechanized and there wouldn’t be a job.”

The speed of today’s economy means that this same transition is happening within a single generation. And that’s where the problems come in: we have no system in society for retraining people when they’re midway through their lives to take on a new type of employment or job. And that will be the issue that AI brings to the table. Because if you’re a truck driver, if you’re a lawyer working in e‑discovery or data stuff, if you’re a journalist doing a lot of journalism that just requires a sort of analysis of numbers which are out there, there is huge evidence that AI is coming for you, and it is moving very rapidly.

And just from another, slightly more basic economic point of view, the thing that AI does is it takes your existing capital expenditures—you know, your warehouses, your factories—and it increases the efficiency of them and lowers the depreciation of them. So as a business operator like Amazon, you have the hugest incentive in the world to roll out Kiva Systems robots to as many of your warehouses as fast as you can. Because whenever you look at the numbers, the efficiency is so much greater than with a staffed model. This is going to be a defining issue, maybe. I expect within the next five to ten years we see the big effects. If self-driving cars come on schedule and receive the kind of uptake that people at J.P. Morgan, people at the big banks, are projecting, it’s coming, you know.

Winiger: It’s super interesting. So when it gets to the kind of hard question of how society should frame automation, etc.: in the West especially, self-worth and these more philosophical constructs are really based on full employment and so forth, right. I mean, the whole psyche in the West is built on these notions. And so in a sense we are saying it’s all going to collapse sooner or later. It’s already an election-cycle topic now.

Clark: It’s going to be challenging. I have a friend who’s actually also English. They work in New York, in finance. So they’re aware of technology and what happens. I speak to them about this issue and they say to me, “Well, but Jack, what will people do if they don’t have to work? People have to work. It’s natural.” And I’ve talked to a lot of people who have that view. So as you say, we have such a deep-set psychological association between work and self-worth that watching that change is going to be difficult. Maybe this is an area where the Europeans can take leadership, because we’ve always had an appreciation for holiday and not working. Maybe that will help us, you know.

Pieters: Maybe you could argue that for Google and Facebook and [inaudible], it would be in their interest to push this new philosophy that there will be people unemployed and that it’s fine, in the sense that otherwise they will have the [?] people working against this trend. Taking political leadership of this kind of trend would be a good thing. If it can be framed in the narrative of “this is good for business, good for the world,” then they are, at least on the corporate level, the ones who have the most to gain from this, right?

Clark: Yeah. And from a public relations standpoint, as a company you never want to be associated with the destruction of jobs and increasing inequality. And unfortunately for these AI companies like Facebook and Google, they’re already being tagged with that. Because they have a very competitive market and they give engineers free food and massages and buses. And we’re in San Francisco, where you have a huge homelessness problem and huge inequality as well. But this is an issue that they’re going to need to take a leadership role on, because otherwise they’ll risk discontent from society becoming directed at them, because they’ve become a symbol.

Winiger: Right. That actually ties it really beautifully back to the beginning of the discussion and the greater need to explain this really complex set of issues to the public much better. Because otherwise that whole debate is going to actually break down. I mean, that might end in a really nasty scenario, I guess.

Clark: Yeah. The other issue this is bound up with is: why can’t I pay for Facebook? Why can’t I pay for Twitter? Why can’t I have some situation where either I pay them and they don’t get my data, or they pay me a very small amount of money and I give them my data? Because if you taught people that their data has some value, or that they have the option of capitalizing on that value by buying a service instead of getting it for free, they would understand what AI means.

Because all the AI that affects a lot of society is the outcome of us transferring loads of very well-annotated, clean data to corporations. And that will have to become an issue, because if you’re Google and you say, “Well, you don’t pay for Gmail because we subsidize it with adverts, and you get a lot of value from it even though we take your data,” that is a very reasonable argument, but at some point people are going to ask, well, why can’t I have the other option?

And then the companies have to say, “Well, actually, your data is so valuable when we combine it with everyone else’s that we have no incentive to do this, from an AI development standpoint.” I believe that would start a conversation among people about this.

Winiger: And for the generative work I’ve been doing, we copyrighted it. I mean, when you train a net with copyrighted images and you generate outputs with that, you get into a really interesting situation with copyright law very quickly, really. And that’s just the beginning.

Clark: But I can think of really puzzling scenarios. Like, if I train a generative music system based on a CD I buy of the New York Philharmonic playing Bach, at what point am I still using copyrighted performances from the New York Philharmonic, versus at what point is it just Bach played by a generative system? It’s very, very hard to discern that borderline. Because we know that what happens as you do this generative stuff is you distort the underlying material to the point that maybe it is fair use, that maybe it isn’t the original IP anymore. There’s no way to answer this stuff simply. It’s going to be a horribly complicated time, I think.

You can imagine you train a movie system on every single action movie in history, and then you create a generative system which will go from frame to frame or scene to scene and sort of interpolate a new movie out of this. At what point are you infringing copyright? Like, how do you even judge that anymore? It’s crazy. And as I said earlier, we’re going to get artists who do this. We’re going to get people who want to contribute to the cultural discussion and the aesthetic discussion, who do this stuff, and the legal system and rightsholders will have no clear path for how to react. It’s new territory.

Pieters: Well, even more problematic, once you start charging money for the generated material, or you want to copyright that, then it becomes interesting for all those copyright holders of the original inputs. Because they might want to cash in on that as well.

Clark: You know, do you end up licensing the object itself? So say the value of a photograph of my customized vehicle is quite low. Would it be greater for me to share a data set which is a set of several hundred photos of the car from every single angle, so you can train a system to have a representation of it? And should I price my data on the richness of the AI representation you can derive from it? Again, I don’t know, but this feels like the kind of conversation which creative people are going to have to start having.

And then you get to the really outlandish scenarios when you start to combine all of this that we’re talking about with a distributed, trust-based blockchain system for the running of code and the validation of it. Again, you start to get autonomous programs that will mine the Internet for content and then sell generative art for bitcoin, anonymously. Well, what do we do when that has happened in the world?

Pieters: So there’s a precedent in Holland of someone generating language with natural language processing (I think it was not a neural network, but machine learning in any case), where it was argued that the output was hate speech or threatening tweets. And he got his door kicked in by the police. And in the end the question was, who was responsible for this content on Twitter? Was it the machine learning algorithms or the person behind it, who argued it was—

Clark: Put the server in prison.

Pieters: Yeah. At least according to the Dutch legislation, in the end it was the person who created the algorithm who was held responsible for this.

Clark: You know, Google, when they launched Google Photos, had a huge problem, which was that the system was identifying people of color as gorillas. Which is literally the most offensive thing your system could do, pretty much. Again, is the Google person responsible for not testing all of the corner cases? Is Google the corporation responsible for not doing QA? These are valid and complicated questions, because it certainly offended and hurt some people. But then, they weren’t hurt by a person; they were hurt by generative decisions of an algorithm that emerged out of a data set whose provenance we as the public aren’t told, because it comes from a private company. Where is accountability in this universe?

Winiger: It’s tricky. I mean, I suppose on the one hand, yeah, sure, generative systems do manifest a lot of autonomy, in a sense. On the other hand, it’s the perfect black box to hide nefarious human action behind. And you know, you just stand there and you raise your hand and say, “Well, you know, it was the black box. Excuse my—or its—behavior. Don’t sue me.” I mean, it’s a bit of both really, isn’t it?

Clark: I had a conversation with a hedge fund recently, who I can’t name, but it was about what they thought of deep learning and deep learning systems and how they apply to trading. These people are very concerned by deep learning, because we can’t inspect the models very easily. We can’t get very good telemetry. And the whole concept of taking an emergent generative system and plugging it into a trading environment gives these people nightmares. It is like the worst thing they could imagine.

And yet, there is going to be a huge incentive to use deep learning to pick up on some signals which are not yet being processed by technical trading firms. So we’re going to see a very interesting arms race there. You know, we already see it with satellite imagery being run through deep learning systems to look at the height of oil and gas towers to infer supply. As we get the rollout of low-cost drones, making it possible to surveil far-out commodities, we’re going to get a whole range of learning systems plugging into the market. Again, it’ll be a good thing for efficiency, but it will also open us up to horrible problems that we cannot even imagine. Which is exciting, and kind of unnerving as well.

Winiger: If you made it this far, thanks for listening.

Pieters: And also we would really love to hear your comments and any kind of feedback. So drop us a line at info@ethicalmachines.com.

Winiger: See you next time.

Pieters: Adios.

Winiger: Bye bye.