Ed Finn: Our next panel is “Who and What Will Get To Think the Future?” and I’m delighted to be talking about this with Ted Chiang, a science fiction writer, a technical writer, the author of Stories of Your Life and Others and The Lifecycle of Software Objects. His stories have been winners of the Locus and Nebula awards, and he is distinguished today by being a science fiction writer who’s not actually in the book. So he still has some shred of independence to tell us what he really thinks. 

So Ted, I want to start by asking you— I want to lean on this word “think.” This is a subject that I’ve become really interested in over the past year or so. I’m working on a book about algorithms as culture machines, basically, and the ways in which thinking might not be the same anymore. So, what do you think thinking is going to mean, and do you see that changing in the near future?

Ted Chiang: Well, okay. So, there was this really fascinating anecdote, I thought, that the science writer Steven Johnson mentioned once. He was working on a science book, and he has installed on his computer a piece of software which caches all the web pages and notes he has consulted for research; it organizes all the information that he wants to use. And this software also scans what he is typing as he types, and then it throws up possibly relevant information from his personal research database. And in the course of writing a chapter, this software threw up a piece of information that made a really interesting connection. And that gave rise to an entire chapter of his book, making this connection between one thing that he had said and this other thing that the software had brought up. And he wondered who came up with the idea for that chapter. Was it him, or was it the software?

Now, that piece of software is not something that most of us are using right now. But the fact that we are relying more and more on computer devices, what a lot of people call “secondary brains,” means we are doing a lot of cognitive outsourcing. So, in various fashions our thinking is partially being done by algorithms now. And our creativity is not all happening within our heads now. And you know, at some point it will probably become worthwhile asking how much of our cognition we want to cede to software, and whether the companies who make that software will have an interest in getting a part of our cognition. Will different companies offer maybe different benefits or different styles of cognition? And is that a choice that we will have to make when we choose the software we use? 

Finn: I think we’re already there. I mean, I’m borrowing this from one of my colleagues at ASU, but you know, how many of you use Apple devices? How many of you feel that this is in some way kind of a moral or aesthetic choice? Right? That you sort of look down on people who don’t use Apple devices, right? And the same could probably be said of many of the Android users in the room. There’s already a kind of cognitive investment that we make, you know. At a certain point, you have years of your personal history living in somebody’s cloud. And that goes beyond merely being a memory bank; it’s also a cognitive bank in some way.

I want to come back to another thing you mentioned, which is this notion of creativity. We’ve always used tools, from the I Ching, to flipping through your copy of the Aeneid in the Middle Ages, to going to a library and looking at what books are on the shelf next to the thing you thought you were looking for. We’ve always used serendipity, a sort of structured serendipity, to do research, to do intellectual work. And one of the most interesting things about digital systems like the one you were describing from Steven Johnson is that they also manufacture serendipity in a way that is supposed to be helpful to you. 

But all of these systems have their implicit biases and reasons for doing things, right. And so we might be using Twitter as another serendipity engine to try and find out what’s happening in the world. But Twitter isn’t only interested in showing us stuff that’s happening in the world, right. They have these other agendas that, as we were just talking about in the last panel, people are trying to make money off of, and we’re not really the users of a lot of these systems, we’re the product. We’re the thing that’s being sold to advertisers.

So as you think about where we’re heading, I’d like to hear you reflect a little more on that question of style. Do you want to speculate on what kinds of styles we might actually get to have? I mean, are we already starting to wear the grooves through the relationships we have now with our software tools?

Chiang: Okay, so in terms of what sort of serendipity we rely on, I think that Google autocomplete has become something a lot of people rely on. You type in a word, and then you see what’s in the dropdown list, and that will often influence what the next word you type is. And while it would be nice to think that that dropdown list is determined on purely objective terms, we have no guarantee of that. I mean, there’s no real definition of what constitutes an objective population of that dropdown list. There’s going to be an algorithm, and different people will offer different algorithms for how to populate that autocomplete list. And that will shape the serendipity that you experience when you are doing research.

This next example is not so much algorithmic, but again, so many people rely on Wikipedia, and whatever the authors of a Wikipedia entry have written, whatever links they put in, those are probably shaping a lot of people’s ways of thinking about topics. And these are all things that we didn’t voluntarily sign up for; initially we think these are incredibly welcome conveniences. But they are shaping the serendipity that we experience. They are in some way influencing our creativity.

And at the moment, Google really dominates search, at least in the English language. But you could easily imagine a situation where different search engines are major players. And if their autocomplete lists are different in some way, people might choose their search engine because they like the autocomplete suggestions that Bing is offering more than the ones that Google is offering.

Finn: It just gets me.

Chiang: Yes, yes. And so that is sort of an extension of targeted advertising, and it’s an opportunity for a kind of targeted cognitive bias.

Finn: I’m fascinated by autocomplete. As a short digression, I teach a course at ASU called “Media Literacies and Composition.” And one assignment I have our students do each year is to write a poem or short story using only phrases they get from autocomplete. I’ll usually give them a seed that they can start with, like “how do I” or something like that, and they can add on letters or words if they want to kind of fan out, to get more stuff. But it’s fascinating.

And one of the reasons it’s so compelling is that, I mean, I’m sure Google is manipulating this and trying to get you. But it’s also a cognitive amplification of what thousands of people must have typed into their search bars at some point or another. And so it can be fascinating, horrifying, deeply sad, sometimes joyous, when you see what those things are. If you type in “how do I,” it’s sort of mind-blowing what comes up. And so the poetry or the fiction that comes out the other end is often really interesting because of that, too. So that idea of grooves, you know, these are really well-worn grooves, people sitting there typing this stuff in.
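[Editor’s note: the found-poem exercise Finn describes can be sketched in a few lines of Python. The suggest URL below is the unofficial endpoint that browsers query for Google’s autocomplete dropdown; it is an assumption rather than a documented API, and may change or stop working at any time.]

```python
# Sketch of the autocomplete "found poem" exercise described above.
# NOTE: suggestqueries.google.com is the unofficial endpoint behind the
# autocomplete dropdown; this is an assumption, not a documented API.
import json
import urllib.parse
import urllib.request

SUGGEST_URL = "https://suggestqueries.google.com/complete/search?client=firefox&q="

def fetch_suggestions(seed):
    """Return the list of autocomplete suggestions for a seed phrase."""
    with urllib.request.urlopen(SUGGEST_URL + urllib.parse.quote(seed)) as resp:
        # The response is a JSON array: [query, [suggestion, suggestion, ...]]
        return json.loads(resp.read().decode("utf-8"))[1]

def found_poem(suggestions, lines=5):
    """Assemble a found poem: one suggestion per line, shortest lines first."""
    return "\n".join(sorted(suggestions, key=len)[:lines])
```

Seeding with “how do i” and printing `found_poem(fetch_suggestions("how do i"))` produces the kind of crowd-sourced verse the students assemble by hand.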

But what is really intriguing to me now, beyond simply autocomplete, is the whole suite, the whole apparatus of interaction. And I think Google is the elephant, or maybe the octopus, in the room in this context. Because it is so easy to look something up on Google. And now Google has of course sort of ingested Wikipedia. You’ve probably noticed that if you’re looking for something and Wikipedia happens to have an entry on it, Google puts it right up near the top for you. Often you don’t even need to click through to Wikipedia, which I’m sure makes Wikipedia sad. And they’ve sort of absorbed this entire knowledge infrastructure from Wikipedia. 

And they have this project called Knowledge Graph, where they’re basically going out and trying to ingest vast portions of the web. They started with things like Wikipedia that had structured data, and now they’re proceeding out into unstructured data and the deeper wilds of the Internet. I feel like eventually they’re going to travel back in time and start crawling GeoCities with little spiders and getting all the old GIFs.

But what they’re really doing is building this map of ideas, of cognitive elements. And because it’s so easy, it’s almost impossible not to begin any intellectual question you’re going to use a computer for with Google now, in some way, shape, or form, right? At least, again, in English. Certainly in the US. It’s easy to forget all the stuff that Google doesn’t know. So that’s one thing to think about. And the seduction, right, the seduction of perfect knowledge. And the seduction of Wikipedia, too, which has its own romantic notion of building the universal encyclopedia.

So that’s one thing. And then the other thing gets back to this: sure, there are thousands of people typing this in, but ultimately it comes back to you, and this is something that you end up typing in. Why does Google keep trying to complete my sentences and my thoughts for me, right? And with their system Google Now, Google will tell you when to go to your next appointment. I find it deeply useful. I’m not trying to knock this. I think it’s exciting and something that we need to think hard about at the same time.

But they’re not just mapping outer space, the universe of knowledge; they’re also mapping inner space, right. They’re mapping each of us. And there’s this sort of interesting question of at what point do computers and algorithms actually know us better than we know ourselves? Because they can see things about us that we can’t easily see. They know way better than us exactly how long it takes us to get out of the house each morning, or how long it takes us to eat lunch, or how many typos we make every hour, how efficient we are at 11:00 AM versus 3:00 PM. There are algorithms that gather all this information. So do you think we’re going to be more surprised by algorithms that map the outer space of knowledge or the inner space of knowledge?

Chiang: Well, I guess I think that the risk is that we will not be aware of it mapping the inner space of knowledge. We will not be conscious of the way that it is shaping our cognition, modifying our habits. The utility of Google for searching, getting information, that is something that we are aware of. We’re thinking, “This is great.” But it is having an effect on us internally, and that is much, much less obvious. I mean, I think this is in a way a continuation of a long trend of cognitive technologies. Socrates famously criticized writing because he thought that it only creates the illusion of wisdom instead of someone actually knowing something themselves. They just read it somewhere, and they don’t really know it. 

Finn: To be fair, Plato really put those words in his mouth when he wrote the book.

Chiang: Yes, he did. He did. And so, with our reliance on Google and the Internet in general, in a sense all of us are trivia champs now. 

Finn: We’re sort of meta-trivia champs, right? We know how to find it.

Chiang: Yes. I mean, we all share a certain cognitive resource now. And in a lot of ways we all feel like this is an incredibly powerful tool, but you know, Socrates (or Plato) had a point about the fact that it is taking something away from us. When people are deprived of the Internet, when you don’t have your smartphone, a lot of people feel less like themselves. So that is one of these unanticipated side-effects of this technology that we all love.

Finn: Yeah. I think that notion of the phone, and in some ways also these invisible things, you know. Whether it’s your Twitter feed or whatever. But they are these cognitive prostheses that do somehow amplify you, that are your self. Which leads to the interesting question of cognitive proprioception, cultural proprioception, in the sense that these things might have become internalized as part of your identity, things that only exist virtually and connect you to other people virtually. You know, I think it does fundamentally change who we are as humans.

And as a card-carrying English professor, I can say the humanities are changing, right? How we read and write is fundamentally changing because of these tools. And that means that how we construct ourselves as human beings and what we think that means is also changing. And I think we’re just at the beginning of that. I’m going to give you the last word.

Chiang: I guess— Um… I don’t have a good line.

Finn: Should we Google it?

Chiang: Yeah, yeah. 

Finn: Somebody out here in the audience will have figured it out for us on Twitter. So, yeah. I think who and what will get to think the future, it’s clearly going to be a collaboration, right. I think that’s the stopping point.

Chiang: Yes.

Finn: As this was. So, thank you.

Chiang: Thank you.

Further Reference

“Can We Imagine Our Way to a Better Future?” event page at the New America site
