Samim Winiger: Welcome to the first episode of Ethical Machines.

Roelof Pieters: We are your hosts, Roelof…

Winiger: And Samim.

Pieters: Ethical Machines is a series of con­ver­sa­tions about humans, machines, and ethics. It aims at spark­ing a deep­er, better-informed debate about the impli­ca­tions of intel­li­gent sys­tems for soci­ety and individuals.

Winiger: For our first episode, we invit­ed Mark Riedl to come and speak with us. Let’s dive into the inter­view right now.

Welcome, Mark. I'm very pleased that you made it. It's a pleasure to have you on as our first guest. Maybe I'll use this opportunity to introduce you to the audience. Mark Riedl is an associate professor at the Georgia Tech School of Interactive Computing and director of the Entertainment Intelligence Lab. Mark's research focuses on the intersection of artificial intelligence and storytelling. You can read more about his very interesting biography on his website, which we'll link. To get started, and so we can get to know you a little bit better, could you elaborate on how you got interested in this field in the first place?

Mark Riedl: So, it was actu­al­ly a very slow pro­gres­sion. I had got­ten inter­est­ed in human-computer inter­ac­tion and human fac­tors in under­grad and ear­ly in my grad­u­ate stud­ies. And then pro­gres­sive­ly came to real­ize that sto­ry­telling is such an impor­tant part of human cog­ni­tion and is real­ly kind of miss­ing when it comes to com­pu­ta­tion­al systems.

Computers can tell stories, but they're always stories that humans have input into a computer, which are then just being regurgitated. But they don't make stories up on their own. They don't really understand the stories that we tell. They're not kind of aware of the cultural importance of stories. They can't watch the same movies or read the same books we do. And this seems like a huge missing gap between what computers can do and what humans can do, if you think about how important storytelling is to the human condition.

So we tell sto­ries dozens of times a day to relate to oth­er peo­ple, to com­mu­ni­cate, to enter­tain. And so the broad­er ques­tions are, if com­put­ers could under­stand sto­ries and make sto­ries, can they inter­face with us in more nat­ur­al sorts of ways—the ways that human-human inter­ac­tion hap­pens? So the pri­ma­ry research that I’ve been inter­est­ed in the last fif­teen years or so has been in sto­ry gen­er­a­tion, which is the cre­ation of nov­el fic­tion­al sto­ries that one might read and con­ceive as hav­ing story-like qualities. 

What I don’t work on is jour­nal­ism. So I don’t try to gen­er­ate news sto­ries, but actu­al­ly try to make up things that have nev­er exist­ed in the real world. So there’s a very strong cre­ative element.

And then the oth­er kind of major area I’m work­ing in is pro­ce­dur­al game gen­er­a­tion, so try­ing to actu­al­ly gen­er­ate com­put­er games from scratch. 

Winiger: So do you have a theory of how to judge a good story output from one of these generative systems, and what will constitute good outputs versus "bad" outputs?

Riedl: Yeah no, that's a really great question, because stories are very subjective. And part of this is because there are many different roles that stories can take. So in many ways the answer is very domain-dependent. A lot of my work more recently has been involved in telling plausible real-world stories. So for example, can a computer make up a story about a bank robbery that never happened, where no bank has actually been robbed? But when people read it, they actually think, "Yeah, this could've happened in the real world."

Now, the other work that I've done in the past in terms of fairy tale generation, that's much more difficult to evaluate, because there are no nice, objective measures of what a good fairy tale is other than did you enjoy it or not. And there what I've tried to do is to dip into psychology and to say, well, can we actually measure aspects of mental models when people read stories?

So for exam­ple are there things that are con­fus­ing because the moti­va­tions of a char­ac­ter were not well jus­ti­fied? That can actu­al­ly have an effect on how you build the men­tal mod­el of the sto­ry, how you under­stand the sto­ry. And we’ve devel­oped tech­niques for basi­cal­ly pulling the men­tal mod­el out of your head.

Pieters: So Mark, does that mean that you try to kind of personalize the story to a logic, in the sense of actual stories that are credible? Or how do you decide on these root questions?

Riedl: Yeah. So, logic might be too strong of a word, but we do know—and psychologists have studied human reading comprehension—that there are certain things that humans try to model about stories. They try to model the causal progression. They try to model the motivations of characters. They try to model the physical cause-and-effect sorts of things. And so when we do these psychological studies of readers who've read a story generated by a computer, we're then looking for these elements in their mental models. There is a logic to storytelling. It's not…purely mathematically logical. But there is a set of expectations that humans have when told stories.

Winiger: Right. From here we can look at story generation as part of [?] generation. So what does your intuition tell you about how far we are from deploying such models in industry? You look at any number of these creative industries, and they're still very much in this mode of hand-creation.

Riedl: Yeah. So, there are many industries that have a particular way of doing things and have been very successful. The computer games industry is one such example of an industry that has found a lot of really good techniques for making some really, really great games. And as you said, they do rely more on hand-crafted rules and hand-crafted kind of content and that sort of stuff.

The adop­tion of arti­fi­cial intel­li­gence real­ly is a func­tion of need and appli­ca­tion at this point. You know, there’s an argu­ment to be made about automa­tion and scal­a­bil­i­ty. So, areas in which we need to pro­duce a lot of con­tent real­ly quick­ly, or cus­tomize a lot of con­tent to individuals. 

Winiger: You remember this game Façade from a couple of years ago?

Riedl: Yeah.

Winiger: Are we talking Façade-like conversational models mixed with content generation? Or [could?] you give us some insight into what you're hitting at the borders of gaming, and what kind of success you're finding there with that option?

Riedl: Yeah, so Façade is a great example of what's called an interactive drama, where the story progression changes based on what the protagonist does. You know, sometimes computer games have branches. Choose Your Own Adventure novels are actually a really great example. You get to make decisions, and what happens next is a consequence of what you do, and there are sometimes long-term consequences.

So one of the things that arti­fi­cial intel­li­gence and sto­ry gen­er­a­tion is real­ly good at is auto­mat­i­cal­ly gen­er­at­ing branch­es. If you think about the man­u­al effort it would take to cre­ate a branch­ing sto­ry, real­ly you’re look­ing at an expo­nen­tial increase. So every time the user has a choice, you might dou­ble the amount of con­tent that has to be pro­duced. So if we have good mod­els of sto­ry gen­er­a­tion, we can auto­mat­i­cal­ly fig­ure out what the branch­es should be, lay out those branch­es, and we can have much more cus­tomized con­tent in terms of respond­ing to what the user does.

Now you know, the tradeoff is that story generators are not as good as human content creators. So if you want to create the most engaging experience, it may still be useful to hand-craft those things. Façade, for example, had a lot of manual input into its artificial intelligence.
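To make the branching arithmetic above concrete, here is a minimal Python sketch (an illustration added here, not something from the interview or from Riedl's systems): each choice point multiplies the number of complete story paths a human author would have to write out, which is the exponential increase Riedl describes, whereas a generator only needs to lay out branches as they are reached.

# Why hand-authored branching stories scale badly: every choice point
# multiplies the number of distinct story paths an author must write.

def manual_branch_count(choice_points: int, options_per_choice: int = 2) -> int:
    """Number of complete story paths to hand-author."""
    return options_per_choice ** choice_points

for n in (1, 5, 10, 20):
    print(f"{n} choice points -> {manual_branch_count(n):,} paths to author")
# 1 -> 2, 5 -> 32, 10 -> 1,024, 20 -> 1,048,576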

Winiger: So would it be fair to say that you actually see, possibly as a stepping stone or as a [?] path of research, this notion of assisted content creation, or assisted experience in a sense, where it's more of a collaborative effort between the traditional creator model and this new generative model?

Riedl: Well, we can certainly start to now envision a spectrum between fully manual and fully automated. And then the middle grounds are kind of interesting, where you might imagine more of a dialogue between a human and a computer, where the human is giving high-level guidance, saying, "I want things like this, but I don't have the time or the effort necessary to lay it all out. Can you produce things for me? Maybe I'll check it, maybe I won't." And then as your content needs become greater and greater and greater, you can push toward the autonomous side, where the system is coming up with its own rules.

Pieters: I mean, it’s also a ques­tion of scale when you talk about some­thing like more assist­ed sto­ry­telling, right. I mean, for instance you have the exam­ple of the Putin admin­is­tra­tion hav­ing nat­ur­al lan­guage pro­cess­ing bot­nets churn­ing out sto­ries in favor of the estab­lish­ment. Or China, where it’s not only bot­nets but it’s whole depart­ments of peo­ple sit­ting in their offices using assis­tive sto­ry­telling tech­niques to be able to write sto­ries on a much larg­er scale.

Riedl: Right. Well I mean it’s already hap­pen­ing in a very lim­it­ed sense if you think about tar­get­ed adver­tis­ing on the Internet. You know, we’ve seen this actu­al­ly used in pol­i­tics, where peo­ple can fig­ure out pop­u­la­tions on the Internet that are more recep­tive to cer­tain types of mes­sages and state­ments, and then tar­get those mes­sages to dif­fer­ent sub­pop­u­la­tions. So that’s an exam­ple of the tech­nol­o­gy being used to assist in sto­ry­telling, at least in the lim­it­ed, adver­tis­ing sense.

Winiger: Maybe I'll just jump into the deep end and say that all of this brings us to this question: do you have a working theory of computational creativity that guides these initiatives?

Riedl: Well, in the last few years one of the things that I've come to believe is that there's really nothing special about creativity. Which is good from a computational standpoint, because we should be able to create algorithms that can do creation. And of course we do see that there are very simple forms of creation and more complicated forms of creation. Now we have story generators and poetry generators, so on and so forth. But I do think that the underlying mechanisms that allow both humans and computers to be creative really are tied to notions of expertise and learning.

So if you study creators, the degree to which they're able to produce quality is the degree to which they have studied the medium and the culture and the society into which it's going to be deployed. And this makes sense, right. Our algorithms need knowledge. That knowledge has to be acquired from somewhere. It should be social and cultural knowledge, in addition to knowledge about other people and what other things have been created prior to the algorithm. And we can start to treat these as data sets that we can then use to train algorithms to be experts. And while I think that our notion of creating creative systems is still very simple, I do see that things are starting to move in that direction. Which is very positive.

Pieters: There are a lot of these question-and-answer systems out currently, which are strictly kind of more from that AI perspective, trained on large data sets of text and meaning and logic. But they're not creative. I mean, they just become more and more logical. They can understand syntactical and semantic structure, so negation and positional argumentation. But creativity, you don't see it, at least in this kind of [?] industry or in academia.

Riedl: Right. I’m going to speak specif­i­cal­ly about sto­ry gen­er­a­tion now at this point. A ques­tion answer­ing sys­tem and a sto­ry gen­er­a­tion sys­tem are going to share a lot of the same under­ly­ing needs. And some of those needs are what we refer to as com­mon sense rea­son­ing. So if I want to have a com­put­er tell a sto­ry about going to a restau­rant, it’s got to know a lot about restau­rants and what peo­ple do at restau­rants and the expec­ta­tions. If you don’t have that infor­ma­tion, if you don’t have that knowl­edge, you screw it up and peo­ple think the sto­ry doesn’t make any sense. So sense­mak­ing is anoth­er aspect of com­mon sense reasoning. 

But the application of the common sense reasoning is very different for a question answering system, which just needs to regurgitate facts, versus a creative system, which then has to take the same knowledge set but do something more with it. It's not enough to just spit facts back out. You actually have to make decisions about what should come next and what the communicative goal of the agent is. So I do believe that a lot of these underlying systems are going to share the same sort of needs.

Winiger: How do you actu­al­ly perceive…let’s call it an arti­fi­cial expe­ri­ence design­er in a job descrip­tion from 2020 or something—

Riedl: Sure.

Winiger: Somebody who actu­al­ly con­scious­ly designs expe­ri­ences with these sys­tems. Can you envi­sion such a job, and how do you see the impor­tance of these emerg­ing jobs?

Riedl: Well, that's an interesting question. So, there's been a lot of talk in the computational creativity community, and in particular the computer game/AI community, about whether future researchers or future users have to be capable of living in the creative domains (to be designers, to be creators) and also being knowledge engineers and computer scientists as well.

Right now it takes a very rare sort of indi­vid­ual who can exist in both of these very dif­fer­ent worlds at the same time. And there’s a big ques­tion about how can you train peo­ple to be both first-class pro­duc­ers, cre­ators, design­ers, and also sci­en­tists, engi­neers, AI experts. And do we need bet­ter cur­ricu­lum in uni­ver­si­ties, so on and so forth.

So you know, you might imag­ine a class of kind of cre­ative engi­neers in the future; that would be the ide­al. An alter­na­tive approach to this would be to look at tech­no­log­i­cal ways of mak­ing the con­sumers of cre­ative tech­nolo­gies more capa­ble of using these high­ly tech­ni­cal sorts of things. And we’re start­ing to see areas now where we’re try­ing to fig­ure out how to make machine learn­ing acces­si­ble to peo­ple who don’t have advanced com­put­er sci­ence degrees. And so you know, can we under­stand the usabil­i­ty aspects of arti­fi­cial intel­li­gence and machine learn­ing as a service?

Winiger: [inaudible] we extrapolate a little bit and we'll get these [inaudible] content creation tools at that point into the hands of many more people. And one can imagine a world where advertising as an industry will very aggressively engage with these systems. Do you have views on the ethical implications of mass distribution of such technology? Could you share some thoughts on this?

Riedl: Going back to my spe­cial­ty again in sto­ry gen­er­a­tion, there are two kind of par­tic­u­lar eth­i­cal con­cerns that come up there. One is decep­tion. So, in the sense that if we have vir­tu­al char­ac­ters who are online, who are on Twitter, Facebook or things like that, who are cre­at­ing sto­ries and telling sto­ries that appear plau­si­ble in the real world, are there issues if humans can­not tell the dif­fer­ence as to whether they’re com­mu­ni­cat­ing with real human agents? 

The second area is the persuasive nature of stories. So we know from advertising, and as you mentioned from politics especially, that stories can have a very profound effect on people's belief structures: what people believe and what they're willing to believe. There's this great study, I think probably fifteen or twenty years ago now, in which psychologists went to malls and told stories about people being abducted in malls. And they were able to change people's perceptions about how safe they were in malls. And the most fascinating thing about this is that they then replicated the study and told everyone, "I'm going to tell you a fictional story about people being abducted in malls." And people still changed their beliefs about how safe they were.

So there’s this pow­er of sto­ry­telling that is very very hard to over­ride. We’re real­ly kind of hard­wired to believe sto­ries as true even when they’re not. And now if we get com­put­ers that are now capa­ble of gen­er­at­ing sto­ries for the pur­pos­es of per­sua­sion and you can gen­er­ate mas­sive amounts of sto­ries and cus­tomize those sto­ries to have the max­i­mum effect on each indi­vid­ual, in some ways sto­ries become dangerous.

Pieters: What would you say is now the state of the art in storytelling, if you compare what is being developed in industry creating games with your research? And also maybe a bit more on the technical aspects, like what kinds of technical models are being used.

Riedl: So I'll address the research aspects first. In terms of research, we're able to generate fairy tales or more plausible real-world stories basically at the level of maybe one to two paragraphs long. So these are very simple stories. They're often at a high level, more like plot outlines than something that you'd actually kind of want to sit down and read in a book. Although the natural language is getting better, I would say that we're still exploring a lot of the basic research questions behind how stories are created by algorithms.

In indus­try we don’t see a lot of adop­tion of cre­ative arti­fi­cial intel­li­gences right now, or sto­ry­telling sys­tems in par­tic­u­lar. The one area where we are see­ing adop­tion is in news jour­nal­ism. And this is real­ly more of nat­ur­al lan­guage gen­er­a­tion than sto­ry gen­er­a­tion. So, the facts are giv­en to the sys­tem. The things that should be told are giv­en to the sys­tem as opposed to cre­at­ed in a fic­tion­al sense. And these sys­tems have got­ten very good at choos­ing the words and the struc­tur­ing of the words, to the point where they’re almost indis­tin­guish­able from human-written short jour­nal­is­tic news reports.
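To illustrate the kind of system Riedl is describing, here is a deliberately minimal template-style sketch in Python (a simplification added here; commercial systems are far more sophisticated): the facts are supplied to the program, and its only job is to choose wording and structure for them rather than to invent content.

# Toy natural language generation in the news-report style: facts in,
# sentences out. Nothing is invented; only the wording is chosen.

game_facts = {
    "home_team": "Atlanta", "away_team": "Boston",
    "home_score": 3, "away_score": 1, "scorer": "Rivera",
}

def describe_margin(diff: int) -> str:
    """Pick a verb phrase based on the score difference."""
    if diff >= 3:
        return "cruised past"
    return "edged" if diff == 1 else "beat"

def generate_report(f: dict) -> str:
    diff = abs(f["home_score"] - f["away_score"])
    return (f"{f['home_team']} {describe_margin(diff)} {f['away_team']} "
            f"{f['home_score']}-{f['away_score']}, with {f['scorer']} "
            f"scoring the decisive goal.")

print(generate_report(game_facts))
# Atlanta beat Boston 3-1, with Rivera scoring the decisive goal.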

Now, you asked about the tech­nolo­gies that go behind it. We haven’t seen the adop­tion of neur­al net­works in sto­ry gen­er­a­tion, I think because there’s still this miss­ing, kind of delib­er­a­tive com­mu­nica­tive layer—the thing that can actu­al­ly decide what should be in the sto­ry. Although, I’m fol­low­ing very close­ly how these deep nets are pro­gress­ing. Because they may get to that point. We just may need more lay­ers on the net­work? Or there may be actu­al­ly some­thing fun­da­men­tal­ly dif­fer­ent about cre­ation that requires…something else.

Pieters: Yeah, you wrote on Twitter (it's about a paper called "Skip-Thoughts"), "Skip-thought vectors are an interesting approach to semantics. My only point: stories require semantics plus something else." So as you say now, there's something missing. Do you have any kind of ideas about what is missing, and what are the challenges you have yourself?

Riedl: Well, it's missing planning. So when humans generate stories, they're not Markov processes, right, where they say, oh, this sentence is logically followed by that sentence. There are lots of sentences that can logically follow but that miss the kind of semantic structure of plot, or again the communicative goal: the fact that I might want to effect a belief change in you.

So when you talk about it in those terms you start thinking about planning: a sequence of mental state changes in the reader that you want to achieve, which then have to be grounded. So these semantic neural nets, I think, would be great at the grounding, but you first have to have this deliberative "plan out your plot" process. You know, what I don't know is whether neural nets can progress to the point where they're able to do this deliberative, communicative goal structuring as well. I think theoretically they might be able to do it, but we don't know how to do it yet.
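As a rough illustration of the distinction Riedl draws here, the following toy Python sketch (an invented story domain added here, not Riedl's system) contrasts a Markov-style generator, which only asks what can plausibly follow the previous event, with a simple planner that searches for an event sequence satisfying a communicative goal expressed as target reader beliefs.

# Markov-style generation vs. goal-directed planning over a toy story domain.
import random
from collections import deque

# Hypothetical story events, which events plausibly follow each one,
# and which reader beliefs each event establishes.
FOLLOWS = {
    "hero enters bank":       ["hero waits in line", "robber enters bank"],
    "robber enters bank":     ["robber draws gun", "hero waits in line"],
    "robber draws gun":       ["alarm is triggered", "teller hands over cash"],
    "hero waits in line":     ["hero chats with teller", "robber enters bank"],
    "hero chats with teller": ["hero leaves bank"],
    "teller hands over cash": ["robber flees"],
    "alarm is triggered":     ["police arrive"],
    "robber flees":           ["police arrive"],
    "police arrive":          [],
    "hero leaves bank":       [],
}
ESTABLISHES = {
    "robber draws gun": {"reader_knows_danger"},
    "police arrive":    {"reader_feels_resolution"},
}

def markov_story(start, steps=5):
    """Locally plausible but goal-free: each step just samples a successor."""
    story, current = [start], start
    for _ in range(steps):
        successors = FOLLOWS.get(current, [])
        if not successors:
            break
        current = random.choice(successors)
        story.append(current)
    return story

def planned_story(start, goal_beliefs):
    """Breadth-first search for an event sequence establishing the goal beliefs."""
    queue = deque([[start]])
    while queue:
        path = queue.popleft()
        achieved = set().union(*(ESTABLISHES.get(e, set()) for e in path))
        if goal_beliefs <= achieved:
            return path
        for nxt in FOLLOWS.get(path[-1], []):
            if nxt not in path:  # avoid trivial loops
                queue.append(path + [nxt])
    return None

print("Markov:", markov_story("hero enters bank"))
print("Planned:", planned_story("hero enters bank",
                                {"reader_knows_danger", "reader_feels_resolution"}))

The Markov version wanders to any locally sensible next event; the planner is the piece Riedl calls the deliberative "plan out your plot" process, and the grounding step (turning each event into fluent sentences) is where the semantic models he mentions would come in.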

Winiger: You’ve been work­ing in acad­e­mia for quite some time now, with some links I sup­pose to indus­try. What is your per­cep­tion of this appar­ent­ly grow­ing trend of cor­po­ra­tions buy­ing whole aca­d­e­m­ic teams from uni­ver­si­ties to work specif­i­cal­ly on deep learn­ing and oth­er areas of machine intelligence?

Riedl: I mean, I have several reactions. One is that it's very exciting to see that artificial intelligence—weak AI in particular—and machine learning have gotten to the point where we can see commercial adoption in actual products. You know, we often refer to this as a new golden age of artificial intelligence.

At the same time I’m a lit­tle bit con­cerned about brain drain and sus­tain­abil­i­ty of this mod­el, in par­tic­u­lar if we don’t have real­ly great peo­ple com­ing into fac­ul­ty posi­tions to teach arti­fi­cial intel­li­gence in our uni­ver­si­ties. You know, are we cre­at­ing a suc­cess­ful pipeline of future AI researchers and devel­op­ers and prac­ti­tion­ers? I think it’s not a prob­lem yet, but you can def­i­nite­ly see how the trend becomes accel­er­at­ed. We might actu­al­ly have a prob­lem where AI kind of…eats itself, right. It becomes a vic­tim of its own success.

Pieters: The opposite is happening as well, right? I mean, in Holland for instance they announced a whole new research lab being created specifically for deep learning and computer vision, between a big company, Qualcomm, and the University of Amsterdam, with I think something like twelve PhD positions and three post-doctorates. So do you see that happening more where you work as well?

Riedl: Um…yeah, I don't know everything that's happening at every university. I mean, the big story in the United States is the so-called partnership between Uber and Carnegie Mellon, which ended up, I think, ultimately decreasing the number of researchers affiliated with the university. So there's always kind of a risk there: industry and universities do have fundamentally competing goals, where industry is interested in more short-term, incremental sorts of solutions, and researchers ostensibly tend to be more focused on long-term problems. A lot of researchers get a lot of funding from industry and it's usually kind of a healthy thing. But it does change what people want to work on. So there is an effect.

Winiger: So I would like to put to you a hypothetical scenario and see what you make of it. It's the year 2025 and you're in a car—a self-driving car—driving from LA to San Francisco. Now, suddenly the car alarm goes off and you're informed that in about thirty milliseconds you're going to be involved in a massive car accident.

Now, since it's a self-driving car and everything around you is a self-driving car, the computer in there will immediately hook up to the network, calculate the likely outcome of this crash for you and the ten people around you, and make an evaluation of what is more important: to kill you and save ten other lives, or kill ten other lives and save you.

And into this consideration, one can imagine, would play not only the physicality of the crash but also your income, your social insurance, the whole social assessment that can be done in thirty milliseconds. To land this in a question: have you thought about designing objective functions for autonomous or semi-autonomous systems? And I guess that can be tied into story generation, in a sense, as well.

Riedl: Yeah, well, this brings up one of the classical ethical conundrums of the individual versus society, and the fact that individuals and societies can have different, call them, objective functions, or, thinking about it in terms of reinforcement learning, reward functions. And then what's the right thing to do? Do you kill the driver because the ten people have greater social value or something like that, or should you do what the human would have done, which is probably something more self-preserving?

You know, I think about this in a slightly different context in my own work. A lot of my work has been involved in trying to understand how humans operate in society, because I need to tell stories about people operating in society, right. So again, the easy example is how do you go to a restaurant? Well, the thing we don't do is walk into the kitchen and steal all the food because we're hungry, right. So we actually perform according to a protocol. And the protocol has been developed over a long period of time, for social harmony and so on and so forth. One of the solutions is, well, let's try to have human-like values in our agents, and that allows us to kind of avoid…or it at least gives us an answer to the societal value question, right. Do what the human would do. What is the human value set? At least we won't be any worse off than what the human would have decided in the first place.

But you know, obviously the counter side of that is, well, should society as a whole have a stronger value? You know, it's an ethical conundrum that's meant to exist to challenge our preconceived notions of what is ethical and right. I'm going to go with "as long as we do no worse than what a human would do"; then I think we probably can feel comfortable about the AIs that we're developing.
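As a deliberately oversimplified sketch of the two objective functions being contrasted (a framing added here, with invented numbers, not any real system), the snippet below shows how the same crash scenario can rank actions differently under a societal reward, which weights everyone equally, and a self-preserving reward, which counts only the driver.

# Two reward functions over the same hypothetical outcomes: expected
# survival probabilities for the driver and for ten bystanders.

OUTCOMES = {
    "swerve": {"driver": 0.1, "others": [0.9] * 10},
    "brake":  {"driver": 0.9, "others": [0.4] * 10},
}

def societal_reward(outcome):
    """Total expected lives saved, everyone weighted equally."""
    return outcome["driver"] + sum(outcome["others"])

def self_preserving_reward(outcome):
    """Roughly what an individual, self-preserving driver optimizes."""
    return outcome["driver"]

for name, reward_fn in [("societal", societal_reward),
                        ("self-preserving", self_preserving_reward)]:
    best = max(OUTCOMES, key=lambda action: reward_fn(OUTCOMES[action]))
    print(f"{name} objective picks: {best}")
# societal objective picks: swerve
# self-preserving objective picks: brake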

Winiger: It’s inter­est­ing, though, what the human would do is pro­gres­sive­ly defined by what cul­ture would do, and cul­ture varies from place to place. I guess cul­tur­al stud­ies should play a role in AI, who knows? What do you think? 

Riedl: Yeah, computers right now and computers in the future should not exist independently of our culture. So when we talk about story generation, we want computers to understand us better, because we have particular ways of thinking about and communicating and expressing ourselves that are wrapped up in culture and society. So if computers are unaware of our culture, then they're going to make decisions that are fundamentally alien to us, and that will present challenges and increase fears and uncertainty. But if we feel like they understand us, even if they're making suboptimal decisions, then we're going to be more comfortable with communicating and using these technologies.

Pieters: So if you made it this far, thanks for lis­ten­ing and we hope to see you next time.

Winiger: Bye bye.

Pieters: Adios.

