Samim Winiger: Welcome to the first episode of Ethical Machines.
Roelof Pieters: We are your hosts, Roelof…
Winiger: And Samim.
Pieters: Ethical Machines is a series of conversations about humans, machines, and ethics. It aims at sparking a deeper, better-informed debate about the implications of intelligent systems for society and individuals.
Winiger: For our first episode, we invited Mark Riedl to come and speak with us. Let’s dive into the interview right now.
Welcome, Mark. I’m very pleased that you made it. It’s a pleasure to have you on as our first guest. Maybe I’ll use this opportunity to introduce you to the audience. Mark Riedl is an associate professor at the Georgia Tech School of Interactive Computing and director of the Entertainment Intelligence Lab. Mark’s research focuses on the intersection of artificial intelligence and storytelling. You can read more about his very interesting biography on his website, which we’ll link. To get started, and so we can get to know you a little bit better, could you elaborate on how you got interested in this field in the first place?
Mark Riedl: So, it was actually a very slow progression. I had gotten interested in human-computer interaction and human factors in undergrad and early in my graduate studies. And then progressively came to realize that storytelling is such an important part of human cognition and is really kind of missing when it comes to computational systems.
Computers can tell stories, but they’re always stories that humans have input into a computer, which are then just being regurgitated. They don’t make stories up on their own. They don’t really understand the stories that we tell. They’re not really aware of the cultural importance of stories. They can’t watch the same movies or read the same books we do. And this seems like a huge missing gap between what computers can do and what humans can do, if you think about how important storytelling is to the human condition.
So we tell stories dozens of times a day to relate to other people, to communicate, to entertain. And so the broader questions are: if computers could understand stories and make stories, can they interface with us in more natural sorts of ways—the ways that human-human interaction happens? So the primary research that I’ve been interested in for the last fifteen years or so has been in story generation, which is the creation of novel fictional stories that one might read and perceive as having story-like qualities.
What I don’t work on is journalism. So I don’t try to generate news stories, but actually try to make up things that have never existed in the real world. So there’s a very strong creative element.
And then the other kind of major area I’m working in is procedural game generation, so trying to actually generate computer games from scratch.
Winiger: So do you have a theory of how to judge the story output from one of these generative systems, and what constitutes a good output versus a “bad” output?
Riedl: Yeah, no, that’s a really great question, because stories are very subjective. And part of this is because there are many different roles that stories can take. So in many ways the answer is very domain-dependent. A lot of my work more recently has been involved in telling plausible real-world stories. So for example, can a computer make up a story about a bank robbery that never happened, where no bank has actually been robbed? But when people read it, they actually think, “Yeah, this could’ve happened in the real world.”
Now, the other work that I’ve done in the past in terms of fairy tale generation, that’s much more difficult to evaluate, because there are no nice, objective measures of what a good fairy tale is other than whether you enjoyed it or not. And there what I’ve tried to do is dip into psychology and say, well, can we actually measure aspects of mental models when people read stories?
So for example are there things that are confusing because the motivations of a character were not well justified? That can actually have an effect on how you build the mental model of the story, how you understand the story. And we’ve developed techniques for basically pulling the mental model out of your head.
Pieters: So Mark, does that mean that you try to kind of tie the story to logic, in the sense that the actual stories have to be credible? Or how do you decide on these root questions?
Riedl: Yeah. So, logic might be too strong of a word but we do know—and psychologists have studied human reading comprehension. We know that there are certain things that humans try to model about stories. They try to model the causal progression. They try to model the motivations of characters. They try to model the physical cause and effect sorts of things. And so when we do these psychological studies of our readers who’ve read a story generated by computer, we’re then looking for these elements in their mental models. There is a logic to storytelling. It’s not…purely mathematically logical. But there is a set of expectations that humans have when told stories.
Winiger: Right. From here we can look at story generation as part of [?] generation. So what does your intuition tell you about how far we are from deploying such models in industry? If you look at any number of these creative industries, they’re still very much in this mode of hand-creation.
Riedl: Yeah. So, there are many industries that have a particular way of doing things and have been very successful. The computer games industry is one such example of an industry that has found a lot of really good techniques for making some really, really great games. And as you said, they do rely more on hand-crafted rules and hand-crafted content and that sort of stuff.
The adoption of artificial intelligence really is a function of need and application at this point. You know, there’s an argument to be made about automation and scalability. So, areas in which we need to produce a lot of content really quickly, or customize a lot of content to individuals.
Winiger: You remember this game Façade from a couple of years ago?
Riedl: Yeah.
Winiger: Are we talking Façade-like conversational models mixed with content generation? Or could you give us some insight into what you’re hitting at the borders of gaming and what kind of success you’re finding there with that approach?
Riedl: Yeah, so Façade is a great example of what’s called an interactive drama, where the story progression changes based on what the protagonist does. You know, sometimes computer games have branches. Choose Your Own Adventure novels are actually a really great example. You get to make decisions, and what happens next is a consequence of what you do, and there are sometimes long-term consequences.
So one of the things that artificial intelligence and story generation is really good at is automatically generating branches. If you think about the manual effort it would take to create a branching story, really you’re looking at an exponential increase. So every time the user has a choice, you might double the amount of content that has to be produced. So if we have good models of story generation, we can automatically figure out what the branches should be, lay out those branches, and we can have much more customized content in terms of responding to what the user does.
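To make the arithmetic behind that concrete, here is a minimal back-of-the-envelope sketch in Python (purely illustrative; the function and figures are ours, not Riedl’s): with two options at every choice point, the amount of hand-authored content for a fully branching story grows exponentially with the story’s depth.

```python
# Illustrative only: rough cost of hand-authoring a fully branching story.
# Assumes every choice point offers `choices_per_node` options; the figures
# are hypothetical and just show the exponential blow-up Riedl describes.

def branches_to_author(depth: int, choices_per_node: int = 2) -> int:
    """Number of distinct storylines a human author would have to write."""
    return choices_per_node ** depth

for depth in (1, 5, 10, 20):
    print(f"{depth} choice points -> {branches_to_author(depth):,} storylines")

# 20 choice points with two options each already means 1,048,576 storylines,
# which is why automatically generating and laying out branches is attractive.
```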
Now you know, the tradeoff is that story generators are not as good as human content creators. So if you want to create the most engaging experience, it may still be useful to hand-craft those things. Façade, for example, had a lot of manual input into their artificial intelligence.
Winiger: So would it be fair to say that you actually see, possibly as a stepping stone or as a [?] path of research, this notion of assisted content creation, or assisted experience in a sense, where it’s more of a collaborative effort between the traditional creator model and this new generative model?
Riedl: Well, we can certainly start to envision a spectrum between fully manual and fully automated. And the middle grounds are kind of interesting, where you might imagine more of a dialogue between a human and a computer, where the human is giving high-level guidance, saying, “I want things like this, but I don’t have the time or the effort necessary to lay it all out. Can you produce things for me? Maybe I’ll check it, maybe I won’t.” And then as your content needs become greater and greater, you can push toward the autonomous side, where the system is coming up with its own rules.
Pieters: I mean, it’s also a question of scale when you talk about something like more assisted storytelling, right. I mean, for instance you have the example of the Putin administration having natural language processing botnets churning out stories in favor of the establishment. Or China, where it’s not only botnets but it’s whole departments of people sitting in their offices using assistive storytelling techniques to be able to write stories on a much larger scale.
Riedl: Right. Well I mean it’s already happening in a very limited sense if you think about targeted advertising on the Internet. You know, we’ve seen this actually used in politics, where people can figure out populations on the Internet that are more receptive to certain types of messages and statements, and then target those messages to different subpopulations. So that’s an example of the technology being used to assist in storytelling, at least in the limited, advertising sense.
Winiger: Maybe I’ll just jump into the deep end and say that all of this brings us to this question: do you have a working theory of computational creativity that guides these initiatives?
Riedl: Well, in the last few years one of the things that I’ve come to believe is that there’s really nothing special about creativity. Which is good from a computational standpoint, because we should be able to create algorithms that can do creation. And of course we do see that there are very simple forms of creation and more complicated forms of creation. Now we have story generators and poetry generators, and so on and so forth. But I do think that the underlying mechanisms that allow both humans and computers to be creative really are tied to notions of expertise and learning.
So if you study creators, the degree to which they’re able to produce quality is the degree to which they have studied the medium and the culture and the society into which their work is going to be deployed. And this makes sense, right? Our algorithms need knowledge. That knowledge has to be acquired from somewhere. It should be social and cultural knowledge, in addition to knowledge about other people and what other things have been created prior to the algorithm. And we can start to treat these as data sets that we can then use to train algorithms to be experts. And while I think that our notion of creating creative systems is still very simple, I do see that things are starting to move in that direction, which is very positive.
Pieters: There are a lot of these question-and-answer systems out there currently, which are strictly kind of more from that AI perspective, trained on large data sets of text and meaning and logic. But they’re not creative. I mean, they just become more and more logical. They can understand syntactic and semantic structure, so negation and positional argumentation. But creativity, you don’t see it, at least in this kind of [?] industry or in academia.
Riedl: Right. I’m going to speak specifically about story generation now at this point. A question answering system and a story generation system are going to share a lot of the same underlying needs. And some of those needs are what we refer to as common sense reasoning. So if I want to have a computer tell a story about going to a restaurant, it’s got to know a lot about restaurants and what people do at restaurants and the expectations. If you don’t have that information, if you don’t have that knowledge, you screw it up and people think the story doesn’t make any sense. So sensemaking is another aspect of common sense reasoning.
But the application of the common sense reasoning is very different for a question answering system, which just needs to regurgitate facts, versus a creative system, which has to take the same knowledge set but then do something more with it. It’s not enough to just spit facts back out. You actually have to make decisions about what should come next and what the communicative goal of the agent is. So I do believe that a lot of these underlying systems are going to share the same sorts of needs.
Winiger: How do you actually perceive…let’s call it an artificial experience designer in a job description from 2020 or something—
Riedl: Sure.
Winiger: Somebody who actually consciously designs experiences with these systems. Can you envision such a job, and how do you see the importance of these emerging jobs?
Riedl: Well, that’s an interesting question. So, there’s been a lot of talk in the computational creativity community, and in particular the computer game/AI community, about whether future researchers or future users have to be capable of living in the creative domains (being designers, being creators) while also being knowledge engineers and computer scientists as well.
Right now it takes a very rare sort of individual who can exist in both of these very different worlds at the same time. And there’s a big question about how can you train people to be both first-class producers, creators, designers, and also scientists, engineers, AI experts. And do we need better curriculum in universities, so on and so forth.
So you know, you might imagine a class of kind of creative engineers in the future; that would be the ideal. An alternative approach to this would be to look at technological ways of making the consumers of creative technologies more capable of using these highly technical sorts of things. And we’re starting to see areas now where we’re trying to figure out how to make machine learning accessible to people who don’t have advanced computer science degrees. And so you know, can we understand the usability aspects of artificial intelligence and machine learning as a service?
Winiger: [inaudible] we extrapolate a little bit and we’ll get these [inaudible] content creation tools at that point into the hands of many more people. And one can imagine a world where advertising as an industry will very aggressively engage with these systems. Do you have views on the ethical implications of mass distribution of such technology? Could you share some thoughts on this?
Riedl: Going back to my specialty again in story generation, there are two kind of particular ethical concerns that come up there. One is deception. So, in the sense that if we have virtual characters who are online, who are on Twitter, Facebook or things like that, who are creating stories and telling stories that appear plausible in the real world, are there issues if humans cannot tell the difference as to whether they’re communicating with real human agents?
The second area is the persuasive nature of stories. So we know from advertising, and as you mentioned from politics especially, that stories can have a very profound effect on people’s belief structures, on what people believe and what they’re willing to believe. There’s this great study, I think probably fifteen or twenty years ago now, in which psychologists went to malls and told stories about people being abducted in malls. And they were able to change people’s perceptions about how safe they were in malls. And the most fascinating thing about this is that they then replicated the study and told everyone, “I’m going to tell you a fictional story about people being abducted in malls.” And people still changed their beliefs about how safe they were.
So there’s this power of storytelling that is very very hard to override. We’re really kind of hardwired to believe stories as true even when they’re not. And now if we get computers that are now capable of generating stories for the purposes of persuasion and you can generate massive amounts of stories and customize those stories to have the maximum effect on each individual, in some ways stories become dangerous.
Pieters: What would you say is now the state of the art in storytelling, if you compare what is being developed in industry creating games and in your research? And also maybe a bit more on the technical aspects, like what kind of technical models are being used.
Riedl: So I’ll address the research aspects first. In terms of research, we’re able to generate fairy tales or more plausible real-world stories basically at the level of maybe one to two paragraphs long. So these are very simple stories. They’re often at a high level, more like plot outlines than something that you’d actually want to sit down and read in a book. Although the natural language is getting better, I would say that we’re still exploring a lot of the basic research questions behind how stories are created by algorithms.
In industry we don’t see a lot of adoption of creative artificial intelligences right now, or storytelling systems in particular. The one area where we are seeing adoption is in news journalism. And this is really more of natural language generation than story generation. So, the facts are given to the system. The things that should be told are given to the system as opposed to created in a fictional sense. And these systems have gotten very good at choosing the words and the structuring of the words, to the point where they’re almost indistinguishable from human-written short journalistic news reports.
Now, you asked about the technologies that go behind it. We haven’t seen the adoption of neural networks in story generation, I think because there’s still this missing, kind of deliberative communicative layer—the thing that can actually decide what should be in the story. Although, I’m following very closely how these deep nets are progressing, because they may get to that point. We just may need more layers on the network? Or there may actually be something fundamentally different about creation that requires…something else.
Pieters: Yeah, you wrote on Twitter, “Skip-thought vectors,” (and it’s about a paper called ‘Skip-Thoughts’), “are an interesting approach to semantics. My only point: stories require semantics plus something else.” So as you say now, there’s something missing. Do you have any kind of ideas what is missing, and what are the challenges you have yourself?
Riedl: Well, it’s missing planning. So when humans generate stories, they’re not Markov processes, right, where they say, oh, this sentence is logically followed by that sentence. There are lots of sentences that can logically follow that miss the kind of semantic structure of plot, or again the communicative goal, the fact that I might want to effect a belief change in you.
So when you talk about it in those terms you start thinking about planning: a sequence of mental state changes in the reader that you want to achieve, which then has to be grounded. So these semantic neural nets I think would be great at the grounding, but you first have to have this deliberative “plan out your plot” process. You know, what I don’t know is whether neural nets can progress to the point where they’re able to do this deliberative, communicative goal structuring as well. I think theoretically they might be able to do it, but we don’t know how to do it yet.
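To make that distinction concrete, here is a small, purely illustrative sketch (our toy example, not Riedl’s system; all event names and belief labels are made up): a Markov-style generator samples whatever event can locally follow the previous one, while a planner searches for an event sequence that achieves a communicative goal expressed as target reader beliefs. Grounding the chosen events into fluent sentences would then be a separate step, e.g. for a semantic neural model.

```python
import random

# Illustrative event model: each event can locally follow certain others
# and establishes some beliefs in the reader. All names are hypothetical.
EVENTS = {
    "hero_enters_bank":  {"after": [None],                "beliefs": {"hero_present"}},
    "guard_waves_hello": {"after": ["hero_enters_bank"],  "beliefs": set()},
    "hero_draws_weapon": {"after": ["hero_enters_bank"],  "beliefs": {"robbery_underway"}},
    "teller_hits_alarm": {"after": ["hero_draws_weapon"], "beliefs": {"police_coming"}},
}

def markov_story(length=3):
    """Sample locally consistent successors; no overall communicative goal."""
    story, last = [], None
    for _ in range(length):
        options = [e for e, spec in EVENTS.items() if last in spec["after"]]
        if not options:
            break
        last = random.choice(options)
        story.append(last)
    return story

def planned_story(goal_beliefs):
    """Depth-first search for an event sequence whose beliefs cover the goal."""
    def search(last, story, beliefs):
        if goal_beliefs <= beliefs:
            return story
        for event, spec in EVENTS.items():
            if last in spec["after"] and event not in story:
                result = search(event, story + [event], beliefs | spec["beliefs"])
                if result:
                    return result
        return None
    return search(None, [], set())

print("Markov:", markov_story())
print("Planned:", planned_story({"robbery_underway", "police_coming"}))
```

The Markov sampler may wander into a dead end (the guard waves hello and nothing follows), while the planner reliably returns a sequence that establishes the target beliefs, which is the deliberative layer the discussion points to.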
Winiger: You’ve been working in academia for quite some time now, with some links I suppose to industry. What is your perception of this apparently growing trend of corporations buying whole academic teams from universities to work specifically on deep learning and other areas of machine intelligence?
Riedl: I mean, I have several reactions. One is that it’s very exciting to see that artificial intelligence—weak AI in particular—and machine learning have gotten to the point where we can see commercial adoption in actual products. You know, we often refer to this as a new golden age of artificial intelligence.
At the same time I’m a little bit concerned about brain drain and sustainability of this model, in particular if we don’t have really great people coming into faculty positions to teach artificial intelligence in our universities. You know, are we creating a successful pipeline of future AI researchers and developers and practitioners? I think it’s not a problem yet, but you can definitely see how the trend becomes accelerated. We might actually have a problem where AI kind of…eats itself, right. It becomes a victim of its own success.
Pieters: The opposite is happening as well, right? I mean, in Holland for instance they announced news about a whole new research lab being created specifically for deep learning and computer vision between a big company, Qualcomm, and the University of Amsterdam, with I think something like twelve PhD positions and three postdocs. So do you see that happening more where you work as well?
Riedl: Um…yeah, I don’t know everything that’s happening at every university. I mean, the big story in the United States is the so-called partnership between Uber and Carnegie Mellon that ended up, I think, ultimately decreasing the number of researchers affiliated with the university. So there’s always kind of a risk, in that industry and universities do have fundamentally competing goals, where industry is interested in more short-term, incremental sorts of solutions, and researchers ostensibly tend to be more focused on long-term problems. So a lot of researchers get a lot of funding from industry, and it’s usually kind of a healthy thing. But it does change what people want to work on. So there is an effect.
Winiger: So I would like to put to you a hypothetical scenario and see what you make of it. It’s the year 2025 and you’re in a car—a self-driving car—driving from LA to San Francisco. Now, suddenly the car alarm goes off and you’re informed that in about thirty milliseconds you’re going to be involved in a massive car accident.
Now, since it’s a self-driving car and everything around you is a self-driving car, the computer in there will immediately hook up to the network, calculate the likely outcome of this crash for you and the ten people around you, and make an evaluation of what is more important: to kill you and save ten other lives, or to kill ten other lives and save you.
And into this consideration, one can imagine, would play not only the physicality of the crash but also your income, your social insurance, the whole social assessment that can be done in thirty milliseconds. To land this in a question: have you thought about designing objective functions for autonomous or semi-autonomous systems? And I guess that can be tied into story generation, in a sense, as well.
Riedl: Yeah, well, this brings up one of the classical ethical conundrums of the individual versus society, and the fact that individuals and societies can have different, call them objective functions, or, thinking about it in terms of reinforcement learning, reward functions. And then what’s the right thing to do? Do you kill the driver because the ten people have greater social value or something like that, or should you do what the human would have done, which is probably something more self-preserving?
You know, I think about this in a slightly different context in my own work. A lot of my work has been involved in trying to understand how humans operate in society, because I need to tell stories about people operating in society, right? So again, the easy example is: how do you go to a restaurant? Well, the thing we don’t do is walk into the kitchen and steal all the food because we’re hungry, right? So we actually perform according to a protocol. And the protocol has been developed over a long period of time, for social harmony and so on and so forth. One of the solutions is, well, let’s try to have human-like values in our agents, and that allows us to kind of avoid…or it at least gives us an answer to the societal value question, right? Do what the human would do. What is the human value set? At least we won’t be any worse off than what the human would have decided in the first place.
But you know, obviously the counter side of that is, well, should society as a whole have a stronger value? You know, it’s an ethical conundrum that’s meant to exist to challenge our preconceived notions of what is ethical and right. I’m going to go with “as long as we do no worse than what a human would do,” and then I think we can probably feel comfortable about the AIs that we’re developing.
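As a toy illustration of how the choice of objective changes the decision in the hypothetical above (all numbers and weights are hypothetical; this is not a proposal for how such a system should actually be built), here are two reward-style functions scored over the same two crash outcomes:

```python
# Toy illustration of the individual-versus-society conundrum discussed above.
# All numbers and weights are hypothetical; this is not a design proposal.

def societal_objective(outcome):
    """Utilitarian: minimize total expected fatalities."""
    return -(outcome["occupant_deaths"] + outcome["bystander_deaths"])

def self_preserving_objective(outcome):
    """Closer to a human driver's reflex: weight the occupant's life heavily."""
    return -100 * outcome["occupant_deaths"] - outcome["bystander_deaths"]

outcomes = {
    "swerve (sacrifice occupant)":  {"occupant_deaths": 1, "bystander_deaths": 0},
    "stay course (hit bystanders)": {"occupant_deaths": 0, "bystander_deaths": 10},
}

for name, objective in [("societal", societal_objective),
                        ("self-preserving", self_preserving_objective)]:
    best = max(outcomes, key=lambda action: objective(outcomes[action]))
    print(f"{name} objective chooses: {best}")

# The two objectives pick different actions, which is exactly the tension
# between a societal reward function and a self-preserving, human-like one.
```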
Winiger: It’s interesting, though, that what the human would do is progressively defined by what the culture would do, and culture varies from place to place. I guess cultural studies should play a role in AI, who knows? What do you think?
Riedl: Yeah, computers right now and computers in the future should not exist independently of our culture. So when we talk about story generation, we want computers to understand us better, because we have particular ways of thinking about and communicating and expressing ourselves that are wrapped up in culture and society. So if computers are unaware of our culture, then they’re going to make decisions that are fundamentally alien to us, and that will present challenges and increase fears and uncertainty. But if we feel like they understand us, even if they’re making suboptimal decisions, then we’re going to be more comfortable communicating with and using these technologies.
Pieters: So if you made it this far, thanks for listening and we hope to see you next time.
Winiger: Bye bye.
Pieters: Adios.