Luke Robert Mason: You’re listening to the Futures Podcast with me, Luke Robert Mason.

On this episode I speak to Edward A. Lee, distinguished professor in electrical engineering and computer sciences at the University of California, Berkeley.

What we need is a new philosophy of technology that is much more integrated with our understanding of culture and human processes.
Edward A. Lee, excerpt from interview

Edward shared his thoughts on why a symbiotic coevolution of humans and machines is more likely than the eventual obsolescence of humanity due to artificial intelligence, why the dataist belief in human cognition resembling computation is likely wrong, and how recent technological developments resemble the emergence of a new life form.

Mason: Your new book suggests that humans have less control over the development of technology than we might think. In fact, it argues that the development of technology might actually be a coevolutionary process. So I guess my first question is: What does it actually mean for technology to evolve?

Edward A. Lee: In the book, I coin the term digital creationism for the hypothesis that all of these pieces of software and digital technology that we’ve created are the result of top-down intelligent design. The reality is, I’ve been writing software for about 40 years. I started writing embedded software in the 1970s, and for my entire career I thought that the product of my work was the result of my deliberate decisions in my brain about how things should come out. I am a little embarrassed that it took me so long to realise that that’s not really the case. I could draw an analogy: it’s a little bit like coming home from the grocery store with a bag of groceries. You feel like you’ve accomplished something. It’s a personal accomplishment to have stocked your refrigerator—but is it really? There are so many factors that played into that accomplishment: the road system, the car that took you there, the economic system that enables paying for the grocery bagger. All of these things are really much bigger than your personal accomplishment, and the personal part of it is actually relatively small. I’ve realised that the same is true of most of the software that I’ve developed over the course of my career. Most of that software is really mutations of previous software, and my thought processes as I’ve developed the software are very strongly shaped by the programming languages and the development tools and so on that really guide my thinking. This view of these technology products as purely the result of top-down intelligent design is really very misleading.

Mason: You take issue with digital creationism because, in a funny sort of way, in culture we assume that the designer of the technology has full agency over how it’s created. This idea of digital creationism encompasses all of these assumptions that we have, and you believe that that’s no longer the case. I just wonder, how have you come to that understanding? You’ve used that example there, but at what point did you realise that, in actual fact, even though I am the developer of the software or of the digital technology, I don’t have full agency over it? That there is a co-creation process occurring here.

Lee: I’m actually not the first person to postulate this thesis. In fact, the first time that it entered my head was when I was reading this wonderful book by George Dyson called ‘Turing’s Cathedral’. Towards the end of this book, Dyson talks about a visit that he made to Google, where he got a tour of the data centre with its thousands of servers. The thousands of servers in this data centre made him think of it as kind of a living thing that was actually nurturing the humans that were taking care of it and developing it. He talks in this book about this sort of feedback relationship between the humans taking care of the machines and the machines taking care of the humans. That really got me thinking, and I started to run with that.

I think one of the things that bothered me in some sense about Dyson’s argument was that I actually had a misconception about how evolution works. I think it’s a fairly common misconception, even today. My understanding of Darwinian evolution was that largely random mutations would occur in DNA due to, for example, alpha particles or chemicals in the environment, or something like that. Sometimes those mutations would lead to a beneficial variation in the organism. Most of the time they don’t, and the organism doesn’t actually survive the mutation. But actually, biologists have discovered that that sort of process of random mutation can’t account for the evolution that we see in the wild. For example, the evolution of antibiotic-resistant bacteria occurs much too rapidly to be explainable by that kind of mechanism. Instead, there’s a whole suite of other mechanisms involved in biological evolutionary processes that have interventions that almost start to look like agency, where viruses intervene, splicing DNA of one microbe into another, for example. Or, I learned in the course of researching this book about a thesis called endosymbiosis, which is the thesis for how eukaryotic cells evolved. Those are the cells in all plants and animals, and they’re cells that have organelles in them. They have a nucleus and mitochondria and so on.

It turns out my previous naive view was that, well, at some point a random mutation occurred in DNA that caused a cell division to result in an organelle getting created inside one of those cells. That’s probably not the way that it came about.

Lynn Margulis is one of the people who really promoted and made respectable this thesis of endosymbiosis, which you can think of in a cartoon-like way: a bacterium swallows another bacterium and, instead of digesting it, makes it part of its metabolism, turning this relationship into a symbiosis rather than just a food source. That kind of evolutionary mechanism is really very different from the kind of random mutation that I had thought was happening before.

So these kinds of biological mechanisms that have recently come to light actually have pretty good analogies in software development. Think of what a software developer does: a software developer doesn’t start with a blank page in a text editor and start writing code on that blank page. Nobody does that. Maybe students taking their very first introductory programming class might be asked to do that, but even then, they’re always given some piece of code to start from. What really happens in software development is that programmers take pieces of code from here, pieces of code from there. They go to GitHub, they go to Stack Overflow to look for ways of doing things in the software. They use libraries that come with their programming languages, like the Standard Template Library in C++, and so on. They’re really just stitching together pieces of code, and their own little handwritten pieces of code end up being a rather small part of the result.

This is really quite analogous to what a virus does, when a virus takes a piece of DNA from one bacterium and splices it into the DNA of another bacterium. I use the term LDBs [Living Digital Beings] in my book for these chunks of code that get realigned and recombined by a software developer to create a new piece of software.

Then, of course, that new piece of software—most software that gets written dies. Most of the software that I’ve written is archived in some backup storage somewhere; it never sees the light of day again and never gets executed again. The reality is that that’s true of most software, but some software—when you release it into the wild—actually takes off and starts to thrive in this ecosystem. Then this ecosystem itself provides a feedback loop where some of the software that gets developed ends up being in libraries, and those libraries get used further by other software developers. Some of the software that gets developed ends up becoming part of software development tools.

One of the interesting things that I realised when I started to understand this process a little bit better is that a lot of people who are very worried about AI these days argue that there is an inevitable point where the machines will learn to programme themselves. That AI is going to lead to machines writing their own software, and developing their own software. One of the things that I realised is that actually, machines teach humans to programme. The software development tools that we use today are actually teaching the programmers how to write code.

Today, I write software a lot, and my software productivity is orders of magnitude better than it was just 10 years ago, because the tools are guiding my thought processes, helping me construct that code accurately, showing me how to construct that code. Then of course, there’s this feedback where some of the code gets developed and then influences further software development and the thought processes of other software developers.

Anyway, this process is really not like a top-down intelligent design.

Mason: So this is really what you mean by the idea of coevolution. Humans have an impact on the development of technology and vice versa. Software then impacts the way in which humans intervene in what becomes successful software and what becomes obsolete software. I just want to talk a little bit more, though, about that relationship that we have with our machines. In the book, you posit various different types of symbiosis that we can potentially have with our machines, whether it’s a mutualistic symbiosis or an obligate symbiosis. I just wonder where we are in our current relationship with machines as we understand them today.

Lee: Yeah, that’s a good question. I mean, one observation, I think, is that the relationship between humans and machines is highly asymmetric today. We tend to think that the technology has led to enormously complex systems, but if you compare all of the computer technology on the planet to a single human being, the complexity relationship is highly asymmetric. Humans are far more complex as machines than the whole suite of computers on the planet.

I think that our relationship with the machines might be more analogous to our relationship with our gut biome. It’s kind of that much asymmetry, or more. We depend very heavily on our gut biome, just as we depend very heavily on our machines. Imagine what would happen on the planet if we turned off all our computers today. It would be disastrous; the impact would be much bigger than the coronavirus in terms of loss of life, and mass starvation and so forth. If you shut down all the computers today, it’d be really disastrous; we’re really very dependent on these machines. We’re also very dependent on our gut biome. If you kill the gut biome—and that does occasionally happen, right—if you overdo antibiotics, for example, you can end up killing way too much of your gut biome and you get very sick, extremely sick. It can kill you, in fact. I think that we’re currently in a very asymmetric but symbiotic relationship.

Now, people fear that it’s going to turn into a kind of parasitic relationship where the machines will no longer need the humans, and the humans will become the parasites on this associated lifeform. That’s certainly a possibility. I mean, evolutionary processes are complex processes, so it’s very hard to predict where they’re going to go—but personally I think that’s pretty unlikely. I think we’re far more likely to see a refinement of the symbiotic relationship. That doesn’t mean that things won’t go wrong, right? Things will go wrong, but the things that go wrong should be thought of as pathologies. They’re illnesses in a symbiotic relationship. I think a lot of the doomsday books that are out there think of it as more like a war with an alien species, but it’s more like an illness in a symbiotic relationship, and we have illnesses like that in biology.

If something goes wrong with your relationship with your gut biome, you can get quite sick. If a computer virus like the WannaCry ransomware that took off in 2017 starts to run wild, it creates a huge disruption for human beings, and you can really see how that kind of disruption actually is an illness. You can also see the effect that social media has on our culture, and we all worry about the digital personas of our kids and things like that. When things aren’t going the way we’d like to see them going, we can think of this as an illness, as opposed to a war of the worlds. It’s not that machines are an alien species coming to take over; it’s more like continual evolution that can result, of course, in illnesses that must be treated as illnesses.

Mason: You spend a lot of time in the book challenging this idea that what will eventually emerge is human-level AI. Emergence is one of these things that seems to be tricky, because it’s the way in which we explain away life. There are all these processes that happen, and matter comes together and suddenly consciousness emerges, and here consciousness is. But everything can’t be explained away by this tricky thing called emergence, can it? It’s really a lazy way to assume that we will get human-level AI. The argument for the emergence of machine consciousness is that, well, our brains are matter that came together and suddenly consciousness popped into existence. Surely once enough silicon comes together and enough networks come together, something like consciousness might just pop into existence, might just emerge into existence. You challenge that notion.

Lee: Yeah, I actually challenge it from several different angles, and I think maybe it’s worth focusing on two of them. One is that there’s a very strong background assumption that a lot of people make, particularly my colleagues in computer science, that human cognition is a computational process at its root. That’s a background assumption, and many of my colleagues that I talk to say, “Of course, we all learned about the universal Turing machine,” and they misinterpret the universal Turing machine as a universal machine. It’s not a universal machine. A universal Turing machine is a machine that can implement any Turing machine. But Turing machines are a very particular kind of machine, and have very distinct properties. They’re algorithmic—everything in them proceeds as a sequence of steps, a sequence of discrete steps. All of their information is digital. It’s discrete and finite, and they’re terminating processes. The sequence of discrete steps has to stop.

I think the only property of those three that humans have is the terminating one. We all terminate, but we’re not algorithmic, actually. In fact, algorithms are pretty difficult for human cognition. There is evidence that people have deliberately—or perhaps just been misled to—misinterpreted this to say, “Well, ultimately the mechanisms underlying the processes in the human brain must be digital.” So they point to things like the discrete firing of neurons. This is a thesis that started with McCulloch and Pitts in the 1940s, and it developed into a philosophy in the 1960s, led by Hilary Putnam, who talked about…multiple realisability was the term that he used. The idea was that neurons are ultimately just realising logic functions, and if you just replicate those logic functions in some other machine—and we know how to make silicon that replicates logic functions—then you will replicate all of the logic processes in the brain. But the problem is that this actually ignores a lot of what is actually going on in the brain. One of the key things it ignores is the timing of the neuron firing, and biologists know, neuroscientists know, that timing is an extremely important part of how neurons work, and that’s completely ignored by this basic logic function thesis.

So, there are several other assumptions that underlie this basic premise that our cognition must be a computational process. I’ve borrowed a term from Yuval Noah Harari’s wonderful book, ‘Homo Deus’, where he coined the term dataism for a faith. I actually argue in my book that this is really, ultimately, a faith: that these processes are, at their root, computational. It’s a faith with rather weak evidence for it.

Mason: We fall so easily into the trap of believing that humans are similar to machines because of the way in which we use metaphor to describe the human brain as a computer, and whatever. Consciousness is software. We’re used to having metaphors comparing the brain to a machine. We talk about our thinking as cogs turning, and now when I’m thinking, I’m processing what you’re saying. We’ve taken that metaphor and we’ve almost run with it, and that seems to be what’s at the core of dataism: a misunderstanding that the metaphor is what’s actually there, what’s actually occurring, what’s actually happening in the human brain. It just fits the best technological description we have of the day. Is that what dataism is, or is there more nuance there?

Lee: There is a little more nuance. One of the important points is—I actually take the stand in my book that: let’s assume that the brain actually is a machine. That’s not the core of the dataist thesis. The core of the dataist thesis is that it’s a computational machine. That’s a special kind of machine. It’s a machine that operates in discrete steps, on digital data. That’s what the universal Turing machine is all about. No one has invented a universal machine; they’ve only invented a universal Turing machine, which is a special kind of algorithmic machine.

Even if we accept the hypothesis that the human brain actually is a machine, that doesn’t lead you to the conclusion that it can be replicated by a computer, because the computers are all Turing machines—that’s what they are.
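The properties Lee lists can be made concrete in a few lines. Below is a minimal sketch (my illustration, not an example from Lee’s book) of what “a Turing machine” means in code: a machine that proceeds in discrete steps over digital symbols from a finite alphabet and halts. The `flip` rule table is a hypothetical example machine that inverts every bit on the tape.

```python
def run_turing_machine(rules, tape, state="start", blank="_", max_steps=1000):
    """Execute a Turing machine until it halts (or exceeds max_steps).

    rules maps (state, symbol) -> (new_state, new_symbol, move),
    where move is -1 (left) or +1 (right).
    """
    cells = dict(enumerate(tape))  # sparse tape of discrete, digital symbols
    head = 0
    for _ in range(max_steps):
        if state == "halt":        # terminating: the sequence of steps stops
            break
        symbol = cells.get(head, blank)
        state, cells[head], move = rules[(state, symbol)]
        head += move
    return "".join(cells[i] for i in sorted(cells))

# Hypothetical example machine: flip every bit, then halt at the first blank.
flip = {
    ("start", "0"): ("start", "1", +1),
    ("start", "1"): ("start", "0", +1),
    ("start", "_"): ("halt", "_", +1),
}

print(run_turing_machine(flip, "1011"))  # -> 0100_
```

All three of Lee’s properties are visible here: the loop advances in discrete steps, the tape holds only symbols from a finite alphabet, and the run ends when the machine reaches its halt state. His point is that there is no evident reason the brain’s dynamics must fit this mould.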

There’s a second angle from which I attack this hypothesis—this dataist hypothesis—which borrows from a thesis that has become quite popular in psychology in the last 10 to 20 years, which is embodied cognition. This was, I think, very nicely advocated by Esther Thelen, the leading psychologist who developed this thesis. She argued that the cognitive mind isn’t something that resides inside the brain, getting sensory data from the environment and then doing actions in the environment. She argued that the cognitive mind actually is the interaction between the brain and its environment. It’s that interaction that makes the cognition—not what’s going on in your brain.

That is actually very interesting. If you look at that from a technological perspective, one of the things that we’re seeing is that robotics is much more difficult than software. If you’re building software that just operates on data, that field is progressing very rapidly. Robotics has been progressing much more slowly. We make robots that can fold towels very slowly, and they cost hundreds of thousands of dollars. It’s extremely difficult to get these digital algorithmic machines to meaningfully interact with their environment. There’s been this huge optimism about self-driving cars, but they seem to be actually stalled right now. We’re not seeing them getting deployed as rapidly as many people were predicting, and the technology has proved to be much more difficult, because these machines have quite a bit of difficulty interacting with their environment. That interaction with their environment—with the messy, analogue, physical world—makes those machines much less computational. They’re much less about algorithms and much more about an interaction with physical dynamics.

This idea of embodied cognition also suggests that in order to get human-level AI, we’re going to have to have these machines interacting with their physical environment much more than they currently are. That interaction is going to be much more difficult to design, and it’s going to make the machines less digital and less algorithmic.

Mason: So it could turn out that we’re not a computer—or our brain isn’t a computer as we understand it now—but it could be a quantum device that we don’t fully understand. It could be a number of things. I want to look a little more closely at that idea of how we understand human beings as machines, because this misunderstanding is also why you’ve made such a compelling argument against the idea and the possibility of mind uploading: the ability to take the brain and import it to another, perhaps silicon, substrate. In your previous book, you argue that mind uploading is probably unlikely, simply because if the mind were information, then surely that information could be inherited—in the same way that you inherit your mother’s and your father’s genetics, which are the basis on which your body is formed. But when it comes to your mind, you don’t inherit your mother’s and your father’s memories. Therefore, memory might not be something reducible to information.

I just wonder if you could tell us a little bit more about why mind uploading is such a problematic idea within this context.

Lee: When people talk about information—because we’re surrounded by a relatively new technology, information technology, that’s rooted in computers—many people assume that what we mean by information is digital information: that every piece of information can be represented by a sequence of binary digits. But if you look at the root of what the word ‘information’ really means, and you look at the information theories that have been developed about how to understand what information really is, it’s not restricted to being digital. In fact, digital information is a tiny, tiny subset of the information that is potentially out there in the world.

The question of whether you can upload your brain: it turns out there’s a really wonderful mathematical result developed by Claude Shannon when he was at Bell Labs in the 1940s. He showed that if you have a communication channel that can convey information from one place to another, and the communication channel is imperfect in any way—which every communication channel is—then the channel cannot carry more than a finite number of bits of information. In order to upload our brain, or our mind, we have to assume that our mind is representable by a finite number of bits of information. There’s actually no valid reason to assume that. In fact, I argued in my previous book that that assumption can never actually be a scientific thesis, because it’s untestable by experiment. You cannot construct an experiment that would ever falsify that hypothesis. To prove that statement would require some math.
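The result Lee is invoking can be written down for one illustrative special case: a band-limited channel with additive Gaussian noise, where the Shannon–Hartley theorem gives the capacity as

\[
C \;=\; B \log_2\!\left(1 + \frac{S}{N}\right) \quad \text{bits per second},
\]

where \(B\) is the channel bandwidth and \(S/N\) the signal-to-noise ratio. Since noise makes \(S/N\) finite, \(C\) is finite, and over any finite observation time \(T\) at most \(C \cdot T\) bits can cross the channel. Shannon’s general theory assigns an analogous finite capacity to every noisy channel, which is the premise of Lee’s argument: any physical read-out of a brain is such a channel.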

In this new book, I avoid all that, and I just say: take the information contained in my mind—that represents my mind, that is my mind—and let’s assume it is information. I’m willing to assume that. In fact, I believe it is; it is information. Let’s cast some doubt on the hypothesis that it’s digital information. The hypothesis that it is, is actually untestable, which means that if someone came and offered you a product to upload your mind to a computer and you decided to try it, it would be impossible for anyone—outside of you, perhaps—to know whether it worked. It cannot be done. No one will ever know whether it worked.

Mason: What you’re saying in many ways feels like, again, this misunderstanding of metaphor. DNA can be reduced to data, but that doesn’t mean that GATTACA, the representation of DNA as information, will one day just bounce up and emerge as biology. It’s that issue of where this representation becomes matter—is that right?

Lee: The human DNA molecule has about two gigabytes of data in it, which is about 1,000 times less than the laptop that I’m using to talk to you now. It’s actually not a lot of information. I refer to it as the DNA fallacy, where people naively assume that DNA encodes humans, and therefore I, as an entity, am reducible to two gigabytes of data. But there’s a problem with that—I mean, a lot of problems with it—but one of them is that I, as a biological entity, am part of a process that started roughly four billion years ago and has been completely uninterrupted for those four billion years. There’s a whole sequence of chemical, biological processes that are four billion years old that I’m part of—with no gaps in that process. If there were any gaps in that process, I wouldn’t be here. How much information was conveyed along that process, compared to the information in the DNA? My argument is that that process is actually capable of carrying vastly more information than two gigabytes of data.
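The “about two gigabytes” figure is easy to sanity-check. A back-of-the-envelope sketch, assuming a diploid genome of roughly 6.4 billion base pairs (both chromosome copies) at 2 bits per base; the exact number depends on what you count, but it lands on the order Lee cites:

```python
# Back-of-the-envelope check of the "about two gigabytes" figure.
# Assumption: a diploid human genome of roughly 6.4e9 base pairs,
# each base one of four letters, i.e. 2 bits.

base_pairs = 6.4e9          # diploid genome, approximate
bits = base_pairs * 2       # 4 symbols -> 2 bits per base
gigabytes = bits / 8 / 1e9  # 8 bits per byte, decimal gigabytes

print(round(gigabytes, 1))  # -> 1.6
```

So the raw sequence is on the order of one to two gigabytes; Lee’s point is that this number bounds the DNA, not the organism.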

Biologists are now starting to think that using CRISPR technology, for example, they’ll be able to recreate the woolly mammoth. But how are they going to do it? They’re not going to take the DNA of a woolly mammoth that they find and feed it into a machine that creates a woolly mammoth. No, what they’re going to do is splice that DNA into a germline cell of an elephant, and then they’re going to implant that in the womb of a mother elephant. The mother elephant and the womb, and the cell into which they put the DNA—all of those carry information. That information is potentially vastly more than the two gigabytes of information in the DNA itself. So there’s this misconception that since DNA is digital, humans must be ultimately digital. It’s just an incorrect conclusion.

Mason: We’ve focused a lot on how humans might be like machines, but really at the core of the book is this idea that software artefacts could be considered living, and in many ways, machines might actually resemble living creatures. Could you tell me more about these things called living digital beings, or LDBs?

Lee: When I was working on drafts of this book, my working title was ‘Living Digital Beings’, and throughout the book I referred to them as LDBs, and the publisher didn’t like that word at all. They thought it was a silly word, and that it would undermine a serious message to use a silly word for this. But it’s a metaphor that’s trying to get us to think about our relationship with the machines in a different way. Instead of thinking of them as our tools over which we ultimately have complete control, think of them more as evolving beings in our ecosystem. They’re things that we have relationships with, that affect us just as much as we affect them. We’re not just using them; they’re using us. They’re not using us in the sense of having agency or deliberate decision making or anything like that—not yet. I do look in my book at what it might take for them to get there. But they’re using us in ways that can be thought of as quite analogous to our gut biome.

It turns out—I learned in the course of researching this book—that your gut biome will actually synthesise proteins that release hormones that make you crave certain foods that the gut biome likes. They control your brain to make you crave certain things so that they can be healthier. Of course, they’re not doing this in a deliberate way; they’re doing this as a result of Darwinian evolution, because it’s a beneficial thing for them and not too terribly harmful for you.

Digital machines that we work with also mess with our brains and create cravings. Look at Twitter addiction. Think about how Twitter is controlling the brain of Donald Trump right now. It’s very clear that there have been tremendous floods of hormones in his brain, making him extremely angry, and yet he’s yelling at people in the White House to issue executive orders to constrain these companies. Twitter is resulting in the release of these hormones in his brain, and affecting his behaviour as a consequence. His behaviour—because he’s powerful—is going to affect Twitter, and the whole system around social media.

So, there’s this feedback loop, right? If you think about this kind of relationship between the humans and the machines in this more analogous way—as if we’re in an ecosystem; we’re participants in an ecosystem, rather than them just being passive tools under our control—that’s the metaphor that I’m after here.

Mason: So in many ways it’s not the fault of the technology platforms or the machines that we might be addicted to. It actually might be the fault of the humans, and it really is a feedback loop: changing our priorities then changes the symbiosis with the software. We’re so quick to blame the addiction caused by social media, but in actual fact, it’s only addictive because we’re giving it the feedback that it wants to see, and it then optimises to be addictive. Is that the right understanding of what you’re saying there?

Lee: That’s exactly right, Luke. The key thing is, I think, that this misunderstanding that we have of the relationship with technology leads to ineffective regulation. We want to regulate on the basis of the assumption that if addiction is the result of technology, then that addiction was the deliberate decision of some software engineers or of some Silicon Valley executives.

The argument that I make in my book is that it actually came about in a rather different way. The technology that we use is the result of a selection process. We think: okay, well, Facebook was the result of a brilliant mind who created this thing. But actually, there were thousands of other pieces of software that were really doing very similar things, most of which died and went extinct in this competitive ecosystem. One of them survived through a selection process. The ones that survive are the ones that propagate most effectively, and getting humans addicted to them is what results in that propagation. So it’s a completely Darwinian process; it’s natural selection. The creatures that thrive in an ecosystem are the ones that have the procreative prowess, that are able to spread themselves. Getting humans to be addicted to them is a fantastically powerful way to spread yourself.

If we want to find ways to effectively steer the process towards favourable outcomes for humans, we need to understand that that’s not just about getting the engineers to design things ethically. That’s not going to result in the outcomes that we want, unless this digital creationism hypothesis is actually true. I believe it is not true, and therefore what we need to do is understand this as a dynamic ecosystem with a lot of feedback loops. Humans get addicted to technology, which then causes that technology to propagate, which then makes it more addictive as it gets developed, because there’s this feedback loop. Whenever you have a feedback loop, you can intervene at any point in that feedback loop. You don’t have to intervene only at the technology development. There are other places to intervene. For example, you could educate the users, right? Have them understand more how technology is playing a role in our society. If in our schools we had serious courses that looked at the cultural context of technology, we might have our kids growing up with a more sophisticated understanding of—and a more sophisticated way of relating with—the technology. That’s an intervention point that I don’t think we’ve even tried.

Mason: So in many ways, the idea that we’re being controlled by these platforms is really just a byproduct of a belief in digital creationism. The idea that the thing that will fix this is top-down regulation, because the technology must have been designed top-down by a human being or an engineer. In actual fact, you’re suggesting it is much more nuanced.

Lee: It is much more nuanced. It’s a much more Darwinian process, where each piece of soft­ware that shows up in this ecosys­tem is a muta­tion of some pre­vi­ous piece of soft­ware, and that muta­tion was cer­tain­ly affect­ed by soft­ware engi­neers and by exec­u­tives in the com­pa­nies that pay for the soft­ware engi­neers. It was affect­ed by them, but it was­n’t real­ly cre­at­ed from scratch by them. It’s a muta­tion of a pre­vi­ous thing. Most of those muta­tions die out. The ones that don’t die out are the ones that thrive in the ecosys­tem. That process is real­ly what’s dri­ving the devel­op­ment of the tech­nol­o­gy much more than the delib­er­ate deci­sion mak­ing of individuals.

Mason: Viewing digital technology as a new life form—it feels like a very controversial idea. It brings into question what you mean by life, and how something can be alive, or A-live—artificially alive, in the case of artificial life—or exhibit a form of liveness. We recognise traits within a piece of software that trigger the part of our brain that makes us think it has, or exhibits, some form of vitality. Could you help explain a little bit more what you mean by life, when you talk about this idea of living digital beings?

Lee: Yeah, that’s a wonderful question. I should point out that this idea of thinking of technology as living, again, is not an idea that I originated. I actually first heard it from Kevin Kelly, who was the founding executive editor of Wired magazine. He wrote a book—a wonderful book—called ‘What Technology Wants’. He has a wonderful TED talk on this topic of thinking of technology as a living thing. He coined the term technium for what he called the Seventh Kingdom of Life, and described it as a new life form on our planet.

I started looking at his argument in some depth, and there are some problems with it, because he includes in technology even things that, to me, are very inanimate. He talks about a cornet, for example—which is a musical instrument—as if it were a living thing. To me, a living thing is a process, not a thing. It can’t be a static object; it’s got to be a process. It’s the process that’s the living thing. Digital technology and software are a much better match to that metaphor, because an executing piece of software is a process. It’s purely a process.

So, I started looking at: What aspects of living does that process have? If you pick a particular example, my favourite example is Wikipedia, which I use throughout my book. By the way, I should mention that I’ve made a public commitment to contribute all the royalties from this book to the Wikimedia Foundation. They will get the profits—not me—because Wikipedia is my favourite living digital being, today. Wikipedia was born, I think, 19 or 20 years ago—somewhere in that range. It was born on a single server, and it started reacting to its environment, which is reacting to stimulus coming in over the internet, and it’s been running as a continual process ever since—for the last 19 or 20 years. The servers on which it originally ran no longer exist—it’s running on a completely different set of servers today—just like you and I are running on a completely different set of cells from the ones we had when we were born. The process, not the individual servers, is what is the living thing.

In my book, I look in some depth at: What other aspects of living does it have? Does it have the ability to reproduce? Wikipedia has arguably reproduced very prolifically. I mean, there are many, many wiki pages all around the world—thousands, millions probably—serving lots of different functions that are arguably progeny of Wikipedia. They’ve inherited traits in the form of these pieces of [inaudible], so they have inheritance as well. They even have processes that we think of as very biological, like homeostasis. Homeostasis is the ability to maintain stable internal conditions. Our bodies maintain a stable temperature. Well, the computer-controlled air conditioning systems in the Wikipedia server centres are maintaining a stable internal temperature, so they even have properties like that.

You don’t want to push this analogy too far, but the fact is that it’s a useful way, I think, to think about how we relate to technology, and that’s really the emphasis of my book. That’s what I’m trying to get us to do: To look at our relationship with technology through new eyes that are better able to give a more sophisticated understanding of what the processes actually are and how we can nudge them. We’re not going to be able to control them—that’s one of my points. This isn’t about controlling technology development. No one knows how to control an evolutionary process, but you can influence it if you know how it works, or have a better understanding of how it works. Then you’re more likely to be able to effectively influence it.

Mason: The great thing about what you’ve just said there is it reorients how we think about technology. In other words, technology and the idea of AI doesn’t become scary anymore, because if you’re arguing that it coevolves with us, as human beings, through Darwinian forces, what that’s doing is driving digital technology to be complementary, rather than competitive. It’ll find its best option is not to kill us and make us obsolete, but in actual fact to keep us around and to work with us, so that it’s reliant on us. In the same way that we have found out that we are so reliant on the internet and all of the processes that the internet enables—whether it’s communication or banking, or a multitude of products and services that our lives now run on. That’s challenging. That’s a challenging way to think about technology. That really puts a spanner in the works for all of the individuals who are the AI doomsayers, who go, “No, no, no, no. This thing is going to realise it doesn’t need us around.” How do you think they’re approaching this idea, that it’s not going to evolve past us—but continuously and forever evolve with us?

Lee: The current pandemic that we’re in, I think, can offer some lessons here. One of the things that kept some of the previous epidemics from spreading so widely is that those viruses were much more lethal. They kill the host, and they kill the host very quickly, with high confidence. The mortality rate of the coronavirus is not quite so high—not nearly as high as some of these others—and that has actually helped it spread.

That’s a nat­ur­al part of a Darwinian evo­lu­tion­ary process. If you have a rela­tion­ship between two liv­ing process­es, and one of them is extreme­ly destruc­tive to the oth­er, if it’s also depen­dent on the oth­er, it’s like­ly that they’re both going to die out—or at least one of them is going to die out. I think that right now, the machines are very depen­dent on humans. They’re not going to progress very rapid­ly if the humans sim­ply stop work­ing on them. The humans are absolute­ly a big part of their pro­cre­ative processes.

The machines are currently very dependent on us, and in that kind of relationship, as it evolves, there is a tendency for mutations that would lead to pathologies to get suppressed. We see this, for example, with computer viruses like the WannaCry ransomware. The way that humans reacted to that was to inoculate the machines with, essentially, antibodies that would suppress this mutation of this piece of software. That’s a natural thing that’s going to happen when a pathological phenomenon emerges from this evolutionary process. The pathological phenomena are going to appear, but we’re going to fight them, and that feedback loop makes it likely that it’s the symbiosis that gets strengthened, rather than the competition.

That’s largely what makes me relatively much more optimistic than many of these doomsayer books that say, “Well, we’re just going to be completely sidelined because the technology is going to realise it no longer needs humans.” In the partnership between humans and machines, it’s actually the humans that are the scarier part—not the machines. Through our deliberate decisions to develop certain kinds of technology that are by intent destructive to humans—that’s where the really scary outcomes from the technology will come. Not from the AIs just learning to programme themselves and then realising they don’t need humans anymore. I don’t think that’s the kind of mechanism that we’re going to see leading to the really destructive effects.

Mason: We’ve also designed tech­nolo­gies that enhance what it means to be human. You look at some of these in the book in the form of the intel­lec­tu­al pros­the­sis and the cog­ni­tive pros­the­sis that we’ve cre­at­ed. In what way has tech­nol­o­gy become an exten­sion of our minds and changed the way that we remem­ber, and that we com­mu­ni­cate? Are these neur­al pros­the­ses, these intel­lec­tu­al prostheses—are they mak­ing us smarter or are they mak­ing us dumber?

Lee: I actu­al­ly think that we’re at least col­lec­tive­ly get­ting smarter, if not indi­vid­u­al­ly. I personally…I could not have writ­ten a book like this with­out Google and Wikipedia, and a num­ber of oth­er tech­no­log­i­cal tools that I used to build this argu­ment and under­stand the nuances. The real­i­ty is that a search engine is able to make links between pieces of infor­ma­tion in a far more pow­er­ful way than any human brain can. It affects our think­ing and it affects the mean­ing of the information.

When two pieces of information come up early in a Google search, it can change what those pieces of information mean to the humans; meaning can develop in that way. I quote in my book a conversation that a historian of science had with Richard Feynman, the physicist. The historian had found handwritten notes that Feynman had used when he was developing his quantum electrodynamics theory. The historian described these notes as a record of Feynman’s thinking, and Feynman said, “No, those aren’t a record of my thinking. Those are my thinking.” The historian said, “No, the thinking was in your brain, and this is just a recording on paper.” And Feynman said, “No, that’s not actually the way it works. The thinking was happening on the paper and in my brain, together. The paper and pencil are an intellectual prosthesis that enables a way of thinking that cannot be done without them.” That’s what I mean by an intellectual prosthesis. The way that we use technology today is far more powerful than just pencil and paper. It is affecting our way of thinking and affecting what we can accomplish with our thinking—very strongly.

Mason: The way we’ve cod­ed the world has an effect on the devel­op­ment of the brain and the evo­lu­tion of the brain itself. What you’re refer­ring to there—the idea that the brain can live out­side the body—is what Merlin Donald used to call exter­nal sym­bol­ic stor­age. The idea that we can port mem­o­ries into exter­nal sym­bol­ic sources that we can then revis­it. That must be hav­ing a mas­sive effect on the way in which our brain devel­ops. Surely there’s an impact on this tech­nol­o­gy, on our own bio­log­i­cal evolution?

Lee: Yeah, there’s a lot of won­der­ful work going on these days in under­stand­ing, for exam­ple, how our abil­i­ty today to record and organ­ise and sort vast num­bers of dig­i­tal pho­tographs is affect­ing our mem­o­ry. What it actu­al­ly means to remem­ber events has changed over time, because of the tech­nol­o­gy; it does affect our brains.

Mason: And it’s affect­ing them from a bio­log­i­cal stand­point, as well. You talk about, in the book, how our brains are get­ting smaller.

Lee: Yeah. The human brain is about 10% small­er than it was 10,000 years ago. How could that pos­si­bly be a favourable evo­lu­tion­ary out­come? One of the argu­ments is that: Well, it can be, because over that 10,000 years, we’ve become increas­ing­ly reliant on exter­nal pros­the­ses to aug­ment our brain capa­bil­i­ties; to deal with aspects of our lives that our brains are not very good at. So for exam­ple, work­ing with num­bers, or hav­ing reli­able records in order to be able to make transactions.

I talk in my book about the dis­cov­ery of the Sumerian tablets, which are from about…more than 4000 years ago. When these were first dis­cov­ered, they had to be deciphered—because no one knew the writ­ing sys­tem. It was pro­found­ly dis­ap­point­ing when they found that most of what was writ­ten on these tablets was real­ly quite bor­ing. It was most­ly bureau­crat­ic record keep­ing. So the tablets were real­ly func­tion­ing as cog­ni­tive pros­the­ses that enabled a soci­ety to devel­op in a cer­tain way, that would not have been pos­si­ble with­out this kind of writ­ing sys­tem. It’s com­pen­sat­ing for deficiencies—for our inabil­i­ty, in our heads, to do cer­tain things. To work with num­bers reli­ably, to work with records reliably—we’re just not very good at that.

Mason: Now all of these ideas—they raise some chal­lenges on how we under­stand and oper­ate with machines. The first of those, I guess, is our abil­i­ty to become cyborgs. Would you argue, Edward, that we’re already cyborgs, because of the way we’re already evolv­ing with tech­nol­o­gy? Or, is there still yet a point at which we might find tech­nol­o­gy inte­grates in a more embod­ied way?

Lee: Well, I think that the really remarkable thing happening right now is that ever since at least the invention of writing, technology has been a part of our cognitive processes—but this has really accelerated with digital technology and the mechanisms that we have today. I think it’s accelerated very dramatically. The acceleration itself is evidence of the effect that this is having on our minds. The fact that we can actually put together unbelievably complex technologies that were completely unimaginable 20 years ago is, in large part, because our brains are getting better able to do these kinds of things—to deal with this complexity by using these cognitive prostheses. It’s having a very big effect on us.

Whenever you get these rapid bursts of evolution…Right now, with the coronavirus pandemic, we’re seeing a burst of evolution in our relationship with technology. I used to have a pretty embarrassingly naive understanding of evolution; I thought it was a slow, gradual process. Biologists actually know that, no, it’s more like punctuated equilibrium. You get huge disruptions in an ecosystem and a lot of stuff changes, and the mutations that survive that huge disruption tend to look quite different from what was there before.

We’re seeing exactly that right now, with this coronavirus pandemic. We’re becoming digital humans. The fact that you and I are not sitting in that wonderful space in London in front of a live audience is a result of the pandemic, and the fact that I’ve been learning to turn sloppy Zoom recordings into something more polished by doing some editing—none of that was I doing two months ago. Everyone around us—we’re interacting with all of our friends through digitally mediated technology. It’s having a huge impact on our relationship with technology. This is a punctuation point in a punctuated equilibrium. We’re going to see that our relationship with technology, when we emerge from this, is going to be quite different from what it was before.

The tech­nol­o­gy is going to be dif­fer­ent, as well. We’re going to see a very rapid set of changes in what tech­nol­o­gy we use and how we use it.

Mason: What’s so refreshing about reading your book is that it’s very different from the sorts of writing about AI and robotics that we’ve seen. They always seem to end up at the conclusion that eventually, we will have human-like machines. Machines in the image and likeness of humans. Really, what you’re arguing is: No, it is always going to be this coevolutionary process. When you start sharing ideas like, “Machines might be life, or they have similarity to life,” it does provoke this question: What could happen if machines eventually really did become living? How would we deal with the idea that machines were alive? How would we recognise those machines as being alive? What would they need to develop for us to be able to understand them as ‘conscious’ or ‘living’?

In many ways, it feels like account­abil­i­ty and agency will be the two things that we will need to iden­ti­fy. How will we go about iden­ti­fy­ing the pos­si­bil­i­ty of inde­pen­dent, autonomous life?

Lee: Let me first say that, emphat­i­cal­ly, being con­scious and being alive are not the same thing. Most of the liv­ing things around us, we would not ascribe any agency to, or we don’t hold them respon­si­ble for their actions—and yet they’re alive. The plants in our gar­den, or the microbes in our gut—we don’t think of them as hav­ing any consciousness.

It turns out that consciousness is not a binary thing. It’s not something you either have or don’t have. Douglas Hofstadter writes very nicely about this in his book, ‘I Am a Strange Loop’. It’s more of a gradation. People have done studies on worms that have relatively simple nervous systems, right? Just a couple of hundred neurons, for example. It turns out that these worms, with just a couple of hundred neurons, have an ability to distinguish self from non-self in a certain sort of way. If their senses detect motion under their body, they can tell the difference between motion that was caused by themselves moving, versus motion that was caused by some external event. Being able to tell that difference is important in the development of a sense of self. The sense of self that humans have is much more sophisticated than that, but it has that essential element. When my peripheral vision detects a hand waving in my face, my brain doesn’t react in alarm, because my brain knows that it made my hand do that. That’s distinguishing self from non-self, and it’s an intrinsic part of our biology.

It’s something that, actually, a lot of the software out there already has. It has that ability; it’s got at least those kinds of low-level mechanisms. So, I look in my book at what it would take for these low-level mechanisms to develop into things that ultimately do involve agency—what we would call agency, and what we would call consciousness. The conclusion I come to in the book is rather nuanced. It’s not a simple story, and that’s probably the part of the book that’s the most difficult to read. The essential argument there is that if machines do ever develop a first person self—a sense of self that we can ascribe agency to—we will actually never be sure that we’ve accomplished that. We’ll never be able to tell whether that’s true of those machines.

That’s, in many ways, a dis­ap­point­ing con­clu­sion for many peo­ple. Fundamentally, not being able to know some­thing is nev­er a very sat­is­fy­ing con­clu­sion, but the argu­ment I make for it is, I think, extreme­ly com­pelling, and hard to refute.

Mason: Surely that’s the case with life we recognise in nature. The idea that a plant has a sense of self. It could be argued that if you watch a plant over a certain period of time and speed that up, you see it moving in relation to the light and closing itself up in response to the environment. Fundamentally, it has some sort of relationship with the environment. It has a feedback loop that it goes through, and all of these things together seem really important in understanding whether something has agency, or not. Why is it so important for us to assign agency to non-human objects?

Lee: The reason why we would want to be able to assign agency to non-human objects is that we’re starting to see technologies getting deployed that have quite a bit of autonomy; that operate largely independently of human operators and, in fact, can develop in such a way that they become quite disconnected from the humans who developed them. Consequently, they could have effects for which we’re going to have an extremely hard time finding anyone to blame.

People talk about self-driving cars, and the dam­age that they can do when an acci­dent occurs. I think that’s one of many things that could hap­pen with tech­nolo­gies that have a cer­tain amount of auton­o­my. I think that we’re very quick­ly going to reach a point where you’re sim­ply not going to be able to find a human being on whom you can pin the blame for some­thing that went wrong. In that case, who do we hold respon­si­ble? That ques­tion, I think, becomes very, very nuanced.

One of the things that I dive into in my book is that in order to hold an agent responsible for something, you have to assume that that agent is able to reason about causation. That agent is able to have said, “Well, if I do this, I’m going to cause this. If I do something different, I’m going to cause something different.” Well, it turns out that reasoning about causation is something that actually can’t occur in an objective way. It can only be subjective. You have to have a first person self in order to be able to reason about causation. The fact that you can’t ever know whether the machines that we build will have a first person self means we can’t ever know whether they will be able to reason about causation, which means we can’t ever know for sure whether we should be assigning responsibility for actions. In some ways, it’s a very unsatisfying conclusion—but it means that as a culture, we’re going to have to find a way to manage these more autonomous technologies, and figure out how they’re going to operate within our cultural, societal, structural, legal systems, for example.

Mason: If those things are so hard to iden­ti­fy, how then do we deal with the issue of things like machine rights?

Lee: I think that those are things that are, ulti­mate­ly, going to become cul­tur­al deci­sions; that will be part of the sys­tems of jus­tice that we cre­ate, and so forth. I think we’re a long way off from ever want­i­ng to give rights to machines, and we may nev­er get there because we may always take a speciesist approach, which is that the only crea­tures that deserve rights are humans, and that’s because it’s the humans who are in con­trol of those rights. But even humans are not like that, right? We do give cer­tain rights to ani­mals, for exam­ple. I think those are things that are real­ly part of a cul­tur­al evo­lu­tion over the very long run.

Mason: I mean, you try and deal with some of these questions, oddly enough, through the example of AI generated art. We’ve had Arthur Miller on the podcast, who has spoken about machine creativity. The question of agency always comes up. When it comes to AI generated art, who is the artist? Is it the non-human agent who created the artwork, or was it the human who set up the parameters of the software to allow it to generate this final form, or picture, or painting in some cases? You go one step further and say that in actual fact, it’s not just a choice between the human artist and the non-human artist: it might be a multitude of non-human and human entities that could be the originator of that art. I just wonder if you could explain that example a little bit further.

Lee: Yeah, so I talk about this famous portrait that was created by these three French guys who call themselves ‘Obvious’. They’re these three French artists. They have an AI generated portrait that they sold at Christie’s for some 430,000 dollars or so. They put it forth as the first AI generated painting. There are a couple of questions. One is assigning who the artist is. It’s also an important question: What is the artwork? To me, the artwork there was actually much more a piece of conceptual art. The idea of a first AI created painting was the artwork.

I think it was a brilliant artwork, because it created this enormous fury and discussion and controversy about, well—these guys just downloaded some software written by a teenager, largely used it unchanged, and created the painting. Shouldn’t the teenager have been the real creator? Well, the teenager was using a technique created by Ian Goodfellow, called a GAN, a generative adversarial network—shouldn’t Ian Goodfellow get some of the credit for this?

We have a tendency as humans to really want to oversimplify any creative work and say, “Well, it had one creator.” I think this is what we do, for example, with software—with this digital creationism hypothesis. We want to single out the one creator of this artefact. “It was Zuckerberg who created Facebook.” We have a very strong tendency to want to do that as humans, and it’s a mistake. No piece of creative work was created by an individual. Every piece of creative work evolves in a context, where the context hugely influences the outcome.

If you think of the portrait that sold for 430,000 dollars as a piece of conceptual art, the concept is three guys who were the first to put such a portrait up at Christie’s. If that was their creative work, it was a really small delta on everything that was around, but it was a very clever delta. Perhaps they did deserve to get some 400,000 dollars—well, they got less than that, because there were big commissions and stuff.

This feeds into the over­all theme in my book about a co-evolutionary process. We’ve got to stop try­ing to pin every devel­op­ment on a sin­gle cre­ator, because the sto­ry is much more com­pli­cat­ed than that.

Mason: And because that story is much more complicated, that means we have to reapproach how we look at technology. One of the ways we can do that is through something called digital humanism—by taking a more human-centric, or perhaps a life-centric, approach to technology. Given our new understanding of how we relate with technology, how should that change the way in which we study and approach technology through something like digital humanism?

Lee: Yeah, I real­ly like this term: dig­i­tal human­ism, which I cred­it to Hannes Werthner, who was—at the time when he coined this term—the dean of com­put­er sci­ence at The Technical University of Vienna. Hannes organ­ised a series of work­shops on this top­ic. He’s a com­put­er sci­en­tist like me, but his goal was to get a much more sophis­ti­cat­ed dia­logue hap­pen­ing among com­put­er sci­en­tists, and between com­put­er sci­en­tists and soci­ol­o­gists and psy­chol­o­gists and sci­en­tists in oth­er fields. I pro­posed to him that this was a lit­tle bit anal­o­gous to the Vienna Circle, and the effect it had on the devel­op­ment of the phi­los­o­phy of sci­ence in the ear­ly 20th century.

What we need is a new philosophy of technology that is much more integrated with our understanding of culture and human processes, and human systems like economics and politics. Those are things that are well beyond the skillset of people in any one of the disciplines that they touch on. You do need to have people who have a sophisticated understanding of the technology involved, because otherwise you get very oversimplified versions of the technology. But you also need people with very sophisticated understandings of culture and how human culture develops, and of economics, and of psychology, and of biology. All of these things need to be part of the story. This is much like the way in which the Vienna Circle in the early 20th century brought together philosophers and scientists and social scientists to get a more sophisticated approach to science. At that time, the crisis it was dealing with was the enormous power that science was acquiring, with its ability to create atomic bombs, for example. The bombs hadn’t been built yet—at the time of the Vienna Circle—but they were coming, and people were understanding this enormous power that was requiring scientists to grow up, in a sense, and start engaging with the broader world around them.

Digital human­ism is say­ing that tech­nol­o­gists today need to grow up, and start engag­ing in a much more sophis­ti­cat­ed way with the world around us. We need to be ele­vat­ing the lev­el of our dia­logue and our dis­course about how tech­nol­o­gy devel­ops and how we can affect it, and how it’s affect­ing us.

Mason: And let’s make sure that AI also has a place at that table, when it comes to discussing digital humanism. Edward A. Lee, thank you for your time.

Lee: My plea­sure. Thank you, Luke. I always enjoy talk­ing with you.

Mason: Thank you to Edward, for shar­ing his insights into the coevo­lu­tion of humans and machines.

You can find out more by pur­chas­ing his new book, The Coevolution: The Intertwined Futures of Humans and Machines, avail­able from MIT Press, now.

If you like what you’ve heard, then you can sub­scribe for our lat­est episode. Or fol­low us on Twitter, Facebook, or Instagram: @FUTURESPodcast.

More episodes, tran­scripts and show notes can be found at Futures Podcast dot net.

Thank you for listening to the Futures Podcast.

Further Reference

Episode page, with intro­duc­to­ry text and pro­duc­tion notes. Transcript orig­i­nal­ly by Beth Colquhoun, repub­lished with per­mis­sion (mod­i­fied).