Luke Robert Mason: You're listening to the Futures Podcast with me, Luke Robert Mason.

On this episode, I speak to quantitative futurist, Amy Webb.

I would like to see a future in which we all still have agency, and my concern is that we are getting further and further away from a future in which each one of us has the ability to make decisions.
Amy Webb, excerpt from interview.

Amy shared her insights into the importance of trend forecasting, the global challenges faced by modern business, and the tools you need for thinking like a futurist. This episode was recorded on location in London, England, where Amy was due to give a keynote presentation.

Luke Robert Mason: So Amy Webb, you are a futurist. What does that term—futurist—mean to you?

Amy Webb: So in my case, as a futur­ist I con­sid­er myself to be a quan­ti­ta­tive futur­ist which is to say that I use data and quan­ti­ta­tive evi­dence and qual­i­ta­tive evi­dence, and use that to mod­el out plau­si­ble, prob­a­ble and pos­si­ble sce­nar­ios in the long term, and then devel­op strate­gies around that. So it’s a data dri­ven process.

Mason: So how did you get interested in this thing—the future?

Webb: The future. So the short end of the story is that this is my second career. My first career was as a foreign correspondent. I was living in Tokyo and China in the mid-90s when a lot of the consumer technology that we take for granted today was first being prototyped. So I got to see very early versions of phones that were connected to the Internet, phones that had cameras, and I remember thinking how dramatically that technology was going to change everyday life. I continually had challenges convincing the journalists that I was working with that someday in the very near future, we were all going to have the Internet in our pocket, and have access to news 24/7, and we were probably going to have new distribution channels to enable anybody to share news. And by the way, I could take a photo, which would probably mean that I could someday be able to take a video and post it from wherever I happened to be. I got constant feedback from editors saying, "Who would ever publish a grainy photo taken from somebody's phone? Nobody would ever do that—a grainy photo will never run in a newspaper." And I remember saying, "I'm not talking about the physical newspaper, I'm talking about the Internet." So I got tired of having those arguments, and quite frankly the newsroom was tired of me bringing up those arguments, and we parted ways. I started an R&D lab that was prototyping news features, mostly in the distribution realm, but basically we were working all the time on interesting and different ways to collect and share news. That was all about the future. At the same time I had discovered Alvin Toffler, which then led to all of the futurists from the late 1800s through the 70s or so. I read everything and decided, "Wow, there are people who do this all the time.
They think about and model out future scenarios, and they do that for all different types of purposes, and that's what I should be doing next."

Mason: So your new book The Signals Are Talking is really a methodology for how we analyse and look at the future. Could you share some of the methodologies that you describe in the book?

Webb: Sure. So my methodology is six parts, and it was definitely influenced by other futurists who are in the sort of academic space. My model alternates between what Stanford's D School would have called 'flared and focused thinking'. It's been my observation that when people are thinking about the future—especially when it comes to technology—they tend to focus on just one thing. If they're trying to figure out the future of cars—and I just had a long conversation with an auto company about this—what they're really trying to do is figure out the future of people moving around. They're not actually trying to figure out the future of cars, because that would assume that we will only ever have cars. That narrow thinking is the result of not going really broad in a methodical way, and then going narrow when it makes sense.

So my method is six steps. It starts with hunting down weak signals at the fringe, so these are changes in technology, changes in society, and what I would call The 10 Modern Sources of Change, which involve everything from ecology to economics and wealth changes. That allows me to create a map, and I call that map a 'fringe sketch', but for people who have done any kind of statistics, it's just a bunch of notes and connections. That essential step—especially when you do it with a team of people—helps you find all of the different pieces that you otherwise would have missed. It forces you to change the question from, "What's the future of cars?" to, "What's the future of people, pets and objects moving around?"

From there, the second step is to focus and do pattern recognition, and look for patterns from those signals. At that point you should have different trend candidates. Trends are important because they are waypoints to the future. You know, a lot of times people think identifying trends—that's the whole goal, that's the end. Really that's the beginning, because once you've identified trends you have to do three things, and that's the next couple of steps of the process. One is you have to make sure you didn't screw up. A whole bunch of people get distracted by shiny objects. The example that I like to use is Foursquare and checking in for badges. If you can remember way back, many many years ago to 2013, everybody was checking in and earning their badges. Lots of companies invested, lots of companies made custom badges, and everybody thought that the badges and the check-ins were the future. That wasn't the future. Location-based services, which is really boring—that was the future. That was what was worth paying attention to. That was the trend.

The third step of the process is to focus, to ask yourself a bunch of questions and to go through the data, to go through the models, to make sure you didn't mess anything up.

Then the fourth step is to narrow once again and think through timing and trajectory, and then you have to take some kind of action. So the fifth and sixth steps have to do with developing a strategy.

So it's a long explanation, but I should say the reason I just explained it all is because as of last month, I have open-sourced all of my IP, so all of my research, all of the work I've ever done, is now freely available.

Mason: What’s the reason?

Webb: That seems nuts! Why on earth would you do that?

Mason: Well, it seems so many futurists protect their methodologies. Futurists do this magical thinking back in their offices, and yet the first thing they say when they get on the stage is, "No-one understands what I do." There's always this fake mythos that's created around what I like to call the 'mediatised futurist': the futurist who has a keynote speaking career but doesn't really do the hard graft of dealing with the difficult questions associated with this thing called the future.

Webb: Right—excellent point, and good question. The reason is because—well. I've always thought it strange that people who run governments and businesses are expected to learn how to use a spreadsheet. They're expected to understand basic accounting, and they're not expected to understand how to think like a futurist. I've always thought that was really strange because ostensibly their jobs are the future.

I think we're on a new kind of time horizon with regards to technology. It's our generation that is living through a transition—it just doesn't feel like it. My daughter—who is pretty young—is probably going to be in the last group of human beings who have to learn how to drive. My father—who is in his 70s—is probably going to be in the last generation of people who still have to type.

We're looking at a whole bunch of fundamentally groundbreaking technologies that range from the various facets of artificial intelligence, to genomic editing, to all kinds of automation. All of these things together will fundamentally change what it's like to be a human. At the same time, we are also all living through a geo-politically unstable moment in time. I hope it's a moment. Part of that is the fault of the person running our country—my country—right now. If it was anybody else you could use game theory to sort of model out what might happen next. We're in a situation where we truly don't know what might get Tweeted next or what might happen next, and I'm concerned. My feeling is that now, more than ever, people are fetishising the future and they feel very anxious about the future, and I want everybody to make smarter decisions and to get informed, and to use the tools that I use to make better decisions. I see no harm in open-sourcing everything. I see it only as a big benefit, because if we are all using futurist tools and models and we're doing it in a serious way, that will help everybody.

Mason: Do you think some of the interest—or at least the public interest—in notions and possibilities of the future comes from an attraction to shiny objects? How do you extract or remove people from purely that fascination and help them realise that there are things that are a lot more difficult to navigate? We're told, "The future is going to be awesome." You get these people who stand on stage and sell these incredible futures. But it always feels like the reason people are so attracted to these futures is because there's something so fundamentally wrong with the present, and this feels like a potential to escape into something that will be better.

Webb: I think you're onto something. I would partially blame it on the pattern recognition parts of our brain that start firing off when we're looking to make sense of something. I think partially what attracts people to tech-utopianism—I think you're absolutely right. I think it's the same reason that we go watch movies in the theatre. It's because we want an escape. Maybe it's also why people go to church. They want the promise of a better tomorrow. But there's also the other side of that coin, which is the dystopian visions of the future. There's plenty of people who also stand on stage and talk about the end of the world coming.

A couple of things are going on. As humans, we've always been surrounded by a lot of data. We're especially surrounded by and assaulted by enormous amounts of data today, and the way that our brains are wired is that we're constantly looking for patterns to help us make sense of what's around us, and the easiest way for us to do something with that information once we've recognised patterns is to fit it into a narrative. That's why storytelling is so fundamental to humanity. It's because that's how we pass information. The people who tell these crazy stories about the future—whether they're positive or negative or strange or whatever—you know, it's easy to connect to them and to what they're saying.

But the thing to keep in mind is that I am a professional futurist and I have absolutely no idea what the future is. My job isn't to tell you or to predict what the future is. My job is to figure out, given what we know to be true today, what are the likely paths and what does the probabilistic model show? Then we use that information to make better decisions. But that's not as easily understood as somebody standing on stage with a pretty picture in the background and spaceships flying overhead and Uber-taxiing it—or whatever they're calling it these five minutes—and saying, "Everything is going to be great. Just wait 15 or 20 years for AGI to kick in."

Mason: Why do you think the current state of discussing or framing the future sits within this binary of either, "You're an eternal optimist—AI is going to save us, it's going to make us more intelligent about ourselves," or, "AI is going to be the thing that kills us, if it isn't nano-tech or some sort of thing that falls from space or synthetic biology"? Why do you think it has to sit across these two dichotomies? I've always felt that when Elon and Co. say that, "Oh, AI is going to be our last invention," it sometimes feels like it's just really good marketing. I don't think that the technology is quite there yet, but if they instill that fear in people, they believe that the future is closer than it actually is. I think the future itself is being used as a form of leverage, in a weird sort of way.

Webb: Yeah, that's a really, really good perspective. It's the third step of my methodology—once you've heard something, or you've decided something—like a technology—is a thing, even if it's binary at that point, step number three says, "Eviscerate everything that you know. Tear it all apart, and if at the end of poking holes into every single thing you still believe it, and you've got evidence to back it up, then fine." What I would say is that everybody usually has different reasons for offering polarising views, and usually those reasons have to do with some kind of gain, so that's part of it. The other part of it is we live in a world where information is everywhere, and in the digital realm, attention is currency. It's harder and harder to get people's attention without saying something salacious.

So Marvin Minsky, one of the founders of modern AI and one of the people who coined the term back in the 1950s, called AI a 'suitcase word', and the reason is because you can pack a lot of stuff into a suitcase. Once you open the suitcase up, you can have a thousand different things in it. AI—artificial intelligence—is a suitcase word, because inside that suitcase is anything from machine reading comprehension to deep nets, and machine learning and computational linguistics—I mean, there's a lot that's in there. If you try to have that conversation with someone who is not a technologist or who doesn't follow what's happening, their eyes are going to glaze over. Therefore it's either, "AI is going to kill us all," or, "AI is going to save us all." That's what grabs the attention. In reality, AI—artificial narrow intelligence—is already here. We all already use it and interact with it every single day. As with anything in life, the subtleties are what always get missed. But those are usually the most important components to be paying attention to.

Mason: So what is the role, then, of the futurist? To better educate the general public around the language associated with potential new forms of technology? It always feels like there's a language issue, as with the example of AI and the suitcase. People are very confused as to what this technology is capable of doing and what it does right now. There's a miscommunication between what constitutes artificial intelligence versus what is essentially intelligence augmentation. How can the futurist better help the average Joe or Jane navigate these complex times?

Webb: I see my role as partly educational for that reason: to help the public make sense of technology in their lives and the decisions that we're making with regards to that technology. Part of it is educational, part of it is advisory. So I do advise the United States government, and the military, and different companies. I think that there are sort of dual purposes. There are certainly futurists who work in a consultative capacity and don't do the public education. I view myself as a public intellectual as much as anything, and I feel like I have an obligation to not just tell people, "This is what I see," but to show them my work. Especially now, when everything is potentially considered fake news, the last thing we need is fake futures news. That would be a real problem.

Mason: Do you think we are in a situation where we're being shown a certain degree of fake futures? It goes back to that issue around leverage and the future being used as leverage—either because there's personal gain, or there's profitability, or there's some sort of political gain there. That's been the case since the 60s, when Kennedy was going, "We're not doing it because it's easy, we're doing it because it's hard," and went off to space to prove that America owned both the future and the present, versus the Russians. I just wonder whether, when personal agendas, profitability agendas or political agendas collide with the future, the inevitable outcome is fake futures, in the same way that we have fake news.

Webb: That's right, and that is what we called the AI winter in the 60s. So the answer to your question is, "Yeah"—and that's not good when that happens. So for people who aren't familiar with this already, leading up to the 1960s there was a lot of activity happening with new kinds of computers, and computing was moving from its first era to its second. If the first era really was just tabulation, the second was more about computation, and complex computation.

There was a lot of activity in the 40s, 50s and 60s around conceptualising a framework where humans could teach machines to think, and so that was the genesis of all this. All the theories were fascinating. Especially now, it's really interesting to go back and read some of those early academic papers about whether or not humans might someday teach machines to think, and what the machines might do. Minsky actually had a paper…he obviously had several papers, but one of the papers he wrote talked about whether or not machines could maybe gain consciousness. So there was a lot of really interesting debate and discussion. At the same time, computers were getting faster components, the price of components was dropping, we had additional computing power, we had more people who knew what to do with computers, we had the birth of modern computer science as an academic discipline—and then everybody started making a lot of promises. So one of the promises that got made in the United States—and this is at the height of the Cold War—was that artificially intelligent machines could be used to simultaneously translate Russian into English, which would have been a game changer: to monitor conversations that were happening and to simultaneously translate those messages. The ultimate spying tool. But there was no way that that was going to work, so there was a lot of overpromising about the future, a lot of fake news about the future of AI in the 60s, and when a lot of that failed to materialise, all of that exuberance and excitement and, most importantly, funding dried up. The fake news about the future actually wound up dramatically impacting the future, and we set ourselves back.

There's a lot of excitement again and everybody's talking about AI now, and there's a lot of the same exuberance, a lot of the same insane funding cycles, you know.

Mason: So do you think we're due another winter? Another AI winter?

Webb: I mean, I would hope not. There's always going to be a pocket of people that push the technology forward. I think at this point it's too big to fail. There's so much funding tied up. China has promised a chunk of its sovereign wealth fund. I don't see AI, the field, going through the same thing it did during that first AI winter. However, I see a lot of people getting distracted by the shiny. The shiny object in this case, I think, is a lot of what you see celebrities talking about, and celebrity technologists talking about. But also, we're heavily influenced by entertainment media, and a lot of these images are indelible. So Her—the movie Her -

Mason: - And Black Mirror, and Humans in the UK, Westworld in the US?

Webb: Absolutely. Now some of those, I don't think I've… Westworld, by the way, is my favourite show ever. Most of the Black Mirror episodes are my second favourite. I haven't come across anybody who believes that Westworld is likely in our future. I have, however, heard a bunch of people reference the movie Her, and Samantha—the character in it—on a pretty regular basis, which means that when people think about the prospect of talking to machines, that movie is so stuck in their heads that that's what they've envisioned for the future. That's probably not what the future is going to look like in the near term, but it's a good reminder that we influence the outcomes of the future through effective storytelling.

Mason: Do you think there is a certain memetic power in science fiction that actually underlines certain trajectories towards the future? We have fewer people reading science fiction—more people know Charlie Brooker's Black Mirror than they do William Gibson's Neuromancer.

Webb: You think so?

Mason: Oh yeah, yeah.

Webb: As you were saying that, I wasn't thinking of Gibson. I was thinking more of Asimov, or of Philip K. Dick.

Mason: It feels like that guides most individuals' thinking, and Black Mirror is an interesting example because it feels so close.

Webb: Yeah. I think control and nostalgia are incredibly powerful feelings, and so there's a sense of not having any control when it comes to the future, because you don't know exactly what's going to happen next—unless you think that we are living in Elon Musk's robot world, right. So that sense of not having control is incredibly powerful and disorientating. It engages our limbic systems, which start firing off, and the squishy computers inside of our heads—our brains—enter fight or flight mode, and we feel anxious, and then we start making…you know. The stories we tell ourselves in our heads are always worse than real life. They always are.

So I think that's one component, and if you think about just everyday technology, I would posit that a very small sliver of the general population feels 100% comfortable any time they get a new television or a new telephone or something. They realise they're not going to break it, and they're okay making mistakes and tinkering and fiddling around, and it's not a big deal. I would argue that probably 90% of the population feels some sense of anxiety every time they have to replace their mobile device, their mobile phone, or they have to get a new computer, or they have to do something different with email. It fires off that limbic system, and there is this sense that you don't have control. And to be fair—we don't. We don't really control any of the devices in our lives; somebody else does. Amazon does, Google does, Twitter does, Facebook does…pick a company. So I think that's a big piece of it.

But as you were talking, I was wondering if we're always yearning for simpler times, when we were kids. That's a theme, right? Simpler times, when we were kids. I don't know that my life when I was a kid was necessarily any simpler than it is now, but I think we all think that it was. I wonder if part of that storytelling that goes on inside of our heads sort of feeds into that: "Life is going to be much more complicated." Technology is part of every single thing that we do. There is no way to extract it. My hunch is that there is this underlying sense of anxiety that everybody feels because of technology. All the time.

Mason: Anxiety, and also depression. Do you think we're in this weird liminal space at the moment, where we haven't quite gone through what we were promised and we're not entirely sure as to what may emerge? Do you think it takes us a lot longer to deal with the impact of these devices or these tools in our lives?

Webb: Yes—to what you just said. The answer is yes, but let me explain why. You made me think of a couple of interesting things. One thing that you made me think of was that I was at an event a couple of months ago with Ev Williams, one of the founders of Twitter, and there were about 2,000 journalists in the room. One of the things…he didn't address what has become of Twitter. He didn't talk about it, and he didn't address Twitter's impact on geo-politics; he didn't talk about any of it. Okay, I understand that. He's on the Board; he's got a fiduciary responsibility to make sure that Twitter doesn't tank because of something he said. But a journalist finally, during the Q and A, did ask him, "Did you ever stop and think that Twitter might be hijacked by bots, or by people who would want to spread misinformation?" and his answer to that, I thought, was really telling. His answer was, "It never occurred to us because we weren't thinking about it, we were just trying to build a cool product," right? If I had a nickel for every time I heard someone say, "We're just working on the product right now"—that's bullshit. The problem is, I either think that's untrue, or wildly irresponsible. I cannot fathom that. Especially because before Twitter, Ev had another project. Do you know what else he founded before that? Have you ever heard of Blogger? It's not as though he had never seen somebody use a free platform to spread ill will right around the world.

My point is, we're past a time when you can just work on the product and not think about anything else, because any technology that comes into the media space is subject to misuse and use for good and all of these other things, and you have to start thinking through the second, third, fourth, fifth order implications of whatever it is that you're building. If you've done that and you acknowledge it, but then you choose not to worry about it—fine. But just own up to it, you know.

I kind of wish Alvin Toffler was alive today, you know. Unfortunately he recently passed away. I wish that he was alive today and that he hadn't yet written Future Shock. I wonder what the 2017 version of his book Future Shock would sound like. My gut tells me it would sound a lot like it did in the 60s, right—but probably with more urgency. Humans go through cycles, so it may feel right now like life is moving very fast and we don't have a lot of control, and we feel very anxious and people are making bad decisions. But if you look at a lot of the literature and movies, and shows and stuff that was being written in the 60s, people felt the exact same way. If you go back to the 40s—the same, and the 20s—the same. There's a history of this.

Mason: I wonder if we've always been in this feeling of increasing acceleration. Whether we've always felt like technology—the movie camera and all of these other things—has always been in this constant state of flux. I wonder if this is a normal situation. All that changes is the medium of transmission.

Webb: Well, but that's an important piece of it. I actually think—and plenty of people would argue with me—I agree that we have always been in this state of flux. However, we have never in human history created this much data. Nor have we in human history had the ability to ingest as much data as we do every single day. So if you think back to the 60s, there was television, there was radio, there were newspapers and there were magazines, and that was it…and books, right…and movies. That was still relatively slow. So you could have breaking newscasts, but for the most part, if you wanted to find out, you could go to the Washington Post and the New York Times in the United States. Both have archives that are open and easily searchable. If you look at the volume of news being reported about AI when it was new and terrifying and interesting, the reactions to it were all over the place in science fiction, but if you go back, there wasn't a tonne of insanity. There wasn't a tonne of writing. Today, it's inescapable. You cannot get through the day unless you completely unplug from everything, right? Which most people don't do. You cannot get through the day without hearing some kind of news about change, right? Whether that's technological change, or economic change, or disenfranchisement, or something nutty happening with politics somewhere in the world. I think that's the key difference, but it's an important difference, because if our sense of change and anxiety is that much more heightened, then the stories we tell ourselves about the future get that much crazier, and I think that has a cascading effect where we wind up having these polarising, binary responses to anything happening to do with technology.
Then—at least in my country—all of it gets politicised, and so you wind up with people saying, "Climate change," or, "There is no climate change," or, "AI is coming," or, "Our cabinet officials are AI deniers." You know, you wind up with all kinds of crazy information and thought.

Mason: Is that because we're trying to work at the same speed as capital? So to go back to Ev Williams: they're turning Twitter—which a lot of individuals are saying should be handed over as a public service—into a business that now has to return a hundred-X return. But the numbers don't make sense.

Webb: The business model doesn't make sense.

Mason: The business model doesn't make sense. Part of the speed within news and media is because they have to create click-through to actually sell and service the ads. Are we losing something very human to capital? If suddenly Twitter started to slow down in how much return on ad investment it was making, it would slowly but surely die as an organisation. I always feel like the best thing Jack could do is hand it over to the general public and go, "Look -"

Webb: Oh no, no, no. Don't give it to the general public. That would be worse. I think it should become a um -

Mason: Platform cooperative. I'm fully -

Webb: Well, okay, so a couple of things -

Mason: It's a public service, it's a public good. It should be used in that way.

Webb: I think, so…there was a worldwide consortium of journalists who have been doing phenomenal investigative work that resulted in something called the Panama Papers, and now the Paradise Papers. One of the things that I recently said was that Twitter is the wire service of the 21st century. I did not…and the context around that was that news goes over the transom as quickly as it did in the early days of wire services.

However, unlike the AP, or Reuters, or the AFP, which only allowed quality journalism that had been vetted and reported and sourced and edited, anybody can put their stuff out through Twitter. That's actually not a good thing. It can be used as a 21st century wire service if there's a global consortium of news organisations that get it, somehow. I don't think it's purchasable by anybody. And they allow the public to continue using it.

However, there are plenty of ways to make sure that networks aren't taken over by botnets and that misinformation doesn't spread. I could literally talk to you for about an hour, in very, very deeply technical terms, and explain to you exactly how that would work.

To your question about capitalism versus the future, which I think is actually an interesting debate and sort of right on: Twitter is not a good use case for that, because they're not making money outside of a handful of licensing deals, and I'm not sure how sustainable their model is in the longer term. However, Google and Amazon, and Tencent in China, and Baidu and Alibaba—there are plenty of companies that are very, very large, that are in the personal and public information business. Now in that case, those are all publicly…well, not the Chinese companies, but in the United States…those are all publicly traded companies, and the economic interests don't always align with what's best for society in the longer term. But you could also argue that in a capitalistic society, a business which has a responsibility, a fiduciary responsibility to shareholders, has to put its business interests first. So you could argue that these companies are doing exactly what they are set up to do, and they are doing it well. The challenge is that we now see some of the effects of Silicon Valley essentially operating independently of what the rest of our country, or the rest of the world, is doing.

Mason: You as an individual—beyond the work that you do in terms of predicting other people's or other businesses' futures—what's the sort of future that you would like to see?

Webb: That's an easy question to answer. I would like to see a future in which we all still have agency, and my concern is that we are getting further and further away from a future in which each one of us has the ability to make decisions, and that's because we control less and less of the data—our own personal data.

We are further and further removed from the algorithms that mine, refine, and process that data, and we have very little insight into how decisions are being made on our behalf—when that's even happening. There's no transparency around how decisions are being made, and that may not sound like a technical issue; however, with all of the technology that you use in your life, whether that is your telephone, your smartphone, or your email, or the game that you're playing…you have almost no say or control in how you use that device and how that device uses you.

The challenge is that the more sophisticated our technology gets in its approach, the closer we move to a zero-UI reality where things happen more seamlessly, and the more we are allowing people to programme machines to make decisions for us. That sci-fi future terrifies me more than anything I have seen on Black Mirror, because that's everyday life. So the best that I can hope for is that everybody starts thinking through the implications of all the technology that we have access to, and comes to a unified decision that we are about to enable an enormous tragedy of the commons. We are the commons, right—and that we collectively decide that we want something better for ourselves.

Mason: Well on that note, Amy Webb, thank you for your time.

Webb: Thank you, this was a lot of fun.

Mason: Thank you to Amy for sharing her thoughts on how we can think more critically and deeply about the future. You can find out more by purchasing Amy's books, or downloading her open-source forecasting tools at Amy Webb dot io.

If you like what you've heard, then you can subscribe for our latest episode. Or follow us on Twitter, Facebook, or Instagram: @FUTURESPodcast.

More episodes, transcripts and show notes can be found at Futures Podcast dot net.

Thank you for listening to the Futures Podcast.

Further Reference

Episode page, with introductory and production notes. Transcript originally by Beth Colquhoun, republished with permission (modified).