Ed Finn: Our next panel is “Who and What Will Get To Think the Future?” and I’m delighted to be talking about this with Ted Chiang, a science fiction writer, a technical writer, the author of Stories of Your Life and Others and The Lifecycle of Software Objects. His stories have been winners of the Locus and Nebula awards, and he is distinguished today by being a science fiction writer who’s not actually in the book. So he still has some shred of independence to tell us what he really thinks.
So Ted, I want to start by asking you— I want to lean on this word “think.” This is a subject that I’ve become really interested in over the past year or so. I’m working on a book about algorithms as culture machines, basically, and the ways in which thinking might not be the same anymore. So, what do you think thinking is going to mean, and do you see that changing in the near future?
Ted Chiang: Well, okay. So, there was this really fascinating anecdote, I thought, that the science writer Steven Johnson mentioned once. He was working on a science book, and he had installed on his computer a piece of software that caches all the web pages and other material he’s consulted for research, along with all of his notes; it organizes all the information that he wants to use. This software also scans what he is typing as he types, and then it throws up possibly relevant information from his personal research database. And in the course of writing a chapter, this software threw up a piece of information that made what he thought was a really interesting connection. That gave rise to an entire chapter of his book, making this connection between one thing that he had said and this other thing that the software had brought up. And he wondered who came up with the idea for that chapter. Was it him, or was it the software?
Now, that piece of software is not something that most of us are using right now. But we are relying more and more on computing devices as what a lot of people call “secondary brains”; we are doing a lot of cognitive outsourcing. So, in various fashions our thinking is now partially being done by algorithms, and our creativity is not all happening within our heads. And you know, at some point it will probably become worthwhile to ask how much of our cognition we want to cede to software, and whether the companies that make that software will have an interest in claiming a part of our cognition. Will different companies offer different benefits, different styles of cognition? And is that a choice we will have to make when we choose the software we use?
Finn: I think we’re already there. I mean, I’m borrowing this from one of my colleagues at ASU, but you know, how many of you use Apple devices? How many of you feel that this is in some way a moral or aesthetic choice? Right? That you sort of look down on people who don’t use Apple devices? And the same could probably be said of many of the Android users in the room. There’s already a kind of cognitive investment that we make. At a certain point, you have years of your personal history living in somebody’s cloud. And that goes beyond merely being a memory bank; it’s also a cognitive bank in some way.
I want to come back to another thing you mentioned, this notion of creativity. We’ve always used tools, from the I Ching, to flipping through your copy of the Aeneid in the Middle Ages, to going to a library and looking at what books are on the shelf next to the one you thought you were looking for. We’ve always used serendipity, a sort of structured serendipity, to do research, to do intellectual work. And one of the most interesting things about digital systems like the one you were describing from Steven Johnson is that they also manufacture serendipity, in a way that is supposed to be helpful to you.
But all of these systems have their implicit biases and reasons for doing things, right. So we might be using Twitter as another serendipity engine, to try to find out what’s happening in the world. But Twitter isn’t only interested in showing us what’s happening in the world. They have these other agendas; as we were just talking about in the last panel, people are trying to make money off this, and we’re not really the users of a lot of these systems, we’re the product. We’re the thing that’s being sold to advertisers.
So as you think about where we’re heading, I’d like to hear you reflect a little more on that question of style. Do you want to speculate on what kinds of styles we might actually get to have? I mean, are we already starting to wear grooves through the relationships we have with our software tools?
Chiang: Okay, so in terms of what sort of serendipity we rely on, I think Google autocomplete has become something that a lot of people rely on. You type in a word, and then you see what’s in the dropdown list, and that will often influence what the next word you type is. And while it would be nice to think that that dropdown list is determined on purely objective terms, we have no guarantee that it is. I mean, there’s no real definition of what would constitute an objective way of populating that dropdown list. There’s going to be an algorithm, and different people will offer different algorithms for how to populate that autocomplete list. And that will shape the serendipity you experience when you are doing research.
And this next example is not so much algorithmic, but again, so many people rely on Wikipedia that whatever the authors of a Wikipedia entry wrote, and whatever links they put in, those are probably shaping a lot of people’s ways of thinking about these topics. These are all things that we didn’t voluntarily sign up for; initially we think of them as incredibly welcome conveniences. But they are shaping the serendipity that we experience. They are in some way influencing our creativity.
And at the moment, Google I think really dominates search, at least in the English language. But you could easily imagine a situation where several different search engines are major players. And if their autocomplete lists differ in some way, people might choose their search engine accordingly: you might like the autocomplete suggestions Bing is offering more than the ones Google is offering.
Finn: It just gets me.
Chiang: Yes, yes. And so that is sort of an extension of targeted advertising, and it’s an opportunity for a kind of targeted cognitive bias.
Finn: I’m fascinated by autocomplete. As a short digression, I teach a course at ASU called “Media Literacies and Composition.” And one assignment I have our students do each year is to write a poem or short story using only phrases they get from autocomplete. I’ll usually give them a seed that they can start with, like “how do I” or something like that, and they can add on letters or words if they want, to kind of fan out and get more material. But it’s fascinating.
And one of the reasons it’s so compelling is that, you know, I’m sure Google is manipulating this and trying to get you. But it’s also a cognitive amplification of what thousands of people must have typed into their search bars at some point or another, actually asking about. And so it can be fascinating, horrifying, deeply sad, sometimes joyous, when you see what those things are. If you type in “how do I,” it’s sort of mind-blowing what comes up. And so the poetry or the fiction that comes out the other end is often really interesting because of that, too. So that idea of grooves: these are really well-worn grooves, people sitting there typing this stuff in.
But what is really intriguing to me now, beyond simply autocomplete, is the whole suite, the whole apparatus of interaction. And I think Google is the elephant, or maybe the octopus, in the room in this context. Because it is so easy to look something up on Google. And now Google has of course sort of ingested Wikipedia. You’ve probably noticed that if you’re looking for something and Wikipedia happens to have an entry on it, Google puts it right up near the top for you. Often you don’t even need to click through to Wikipedia, which I’m sure makes Wikipedia sad. They’ve sort of absorbed this entire knowledge infrastructure from Wikipedia.
And they have this project called Knowledge Graph, where they’re basically going out and trying to ingest vast portions of the web. They started with things like Wikipedia that had structured data, and now they’re proceeding out into unstructured data and the deeper wilds of the Internet. I feel like eventually they’re going to travel back in time and start crawling GeoCities with little spiders and getting all the old GIFs.
But what they’re really doing is building this map of ideas, of cognitive elements. And because it’s so easy, it’s almost impossible not to begin any intellectual question you’re going to use a computer for with Google now, in some way, shape, or form, right? At least, again, in English. Certainly in the US. It’s easy to forget all the stuff that Google doesn’t know. So that’s one thing to think about. And then there’s the seduction, right, the seduction of perfect knowledge. And the seduction of Wikipedia, too, which has its own romantic notion of building the universal encyclopedia.
So that’s one thing. And then the other thing, getting back to your point: sure, there are thousands of people typing this in, but ultimately it comes back to you, and this is something that you end up typing in. Why does Google keep trying to complete my sentences and my thoughts for me, right? And with their system Google Now, Google will tell you when to go to your next appointment. I find it deeply useful; I’m not trying to knock this. I think it’s exciting, and at the same time something we need to think hard about.
But they’re not just mapping outer space, the universe of knowledge; they’re also mapping inner space, right. They’re mapping each of us. And there’s this sort of interesting question of at what point computers and algorithms actually know us better than we know ourselves, because they can see things about us that we can’t easily see. They know way better than we do exactly how long it takes us to get out of the house each morning, or how long it takes us to eat lunch, or how many typos we make every hour, how efficient we are at 11:00 AM versus 3:00 PM. There are algorithms that gather all this information. So do you think we’re going to be more surprised by algorithms that map the outer space of knowledge, or the inner space?
Chiang: Well, I guess I think the risk is that we will not be aware of it mapping the inner space. We will not be conscious of the way it is shaping our cognition, modifying our habits. The utility of Google for searching, for getting information, that is something we are aware of; we’re thinking, “This is great.” But it is having an effect on us internally, and that is much, much less obvious. I mean, this is in a way a continuation of a long trend of cognitive technologies. Socrates famously criticized writing because he thought it only creates the illusion of wisdom, instead of someone actually knowing something themselves. They just read it somewhere; they don’t really know it.
Finn: To be fair, Plato really put those words in his mouth when he wrote the book.
Chiang: Yes, he did. He did. And so, with our reliance on Google and the Internet in general, in a sense all of us are trivia champs now.
Finn: We’re sort of meta-trivia champs, right? We know how to find it.
Chiang: Yes. I mean, we all share a certain cognitive resource now. And in a lot of ways we all feel like this is an incredibly powerful tool. But you know, Socrates (or Plato) had a point about the fact that it is taking something away from us. When people are deprived of the Internet, when you don’t have your smartphone, a lot of people feel less like themselves. So that is one of the unanticipated side-effects of this technology that we all love.
Finn: Yeah. I think that notion of the phone, and in some ways also these invisible things, you know, whether it’s your Twitter feed or whatever: they are these cognitive prostheses that do somehow amplify you, that are your self. Which leads to the interesting question of cognitive proprioception, cultural proprioception, in the sense that things that only exist virtually, that connect you to other people virtually, may have become internalized as part of your identity. You know, I think it does fundamentally change who we are as humans.
And speaking as a card-carrying English professor: the humanities are changing, right? How we read and write is fundamentally changing because of these tools. And that means that how we construct ourselves as human beings, and what we think that means, is also changing. And I think we’re just at the beginning of that. I’m going to give you the last word.
Chiang: I guess— Um… I don’t have a good line.
Finn: Should we Google it?
Chiang: Yeah, yeah.
Finn: Somebody out here in the audience will have figured it out for us on Twitter. So, yeah. As for who and what will get to think the future, I think it’s clearly going to be a collaboration, right. I think that’s the stopping point.
Chiang: Yes.
Finn: As this was. So, thank you.
Chiang: Thank you.
Further Reference
Can We Imagine Our Way to a Better Future? event page at the New America site