Luke Robert Mason: You're listening to the Futures Podcast with me, Luke Robert Mason.

On this episode, I speak with author and media commentator Rachel Botsman.

We rarely think about the link between trust and progress and innovation, and how societies move forward. But when you start to think of it like that, you realize that trust is actually the key component not just for companies but any organization that wants human beings to try new things.
Rachel Botsman, excerpt from interview

Rachel shared her insights into new potentials for trust, the current state of collaborative consumption, and innovative new uses for blockchain technology. This episode was recorded on location in London, England at the offices of publisher Penguin Random House.


Luke Robert Mason: So Rachel, your new book is on trust. So, how do we define trust, and how should we define trust?

Rachel Botsman: Go straight in there with the tough question. No, the reason why I say it's a tough question is it's actually fascinating that trust is the most debated sociological concept in terms of agreeing on a definition. And there are actually hundreds of papers disagreeing on the definition of trust.

So, I was looking at all of these things, and a lot of them are around predictability of outcomes and people's expectations. And I thought this was really interesting, because what was missing for me is when you're asking human beings to trust, there is a degree of vulnerability involved. And so I started really thinking about the relationship between trust and risk.

And what I realized is that in any situation where you're asking someone to trust—that can be in a human relationship, that could be in a new product, it could be in a new concept, a new place—you've always got these two variables going on. So you've got a known state that you are comfortable in, and then you've got an unknown state. And the gap between them is what we call risk.

But risk isn't what enables human beings to try new things and to sort of move forward, so trust is literally the bridge between the known and unknown. And that's why I define trust as a confident relationship to the unknown.

Mason: This is such a wonderful phrase, a confident relationship with the unknown. In what way is trust this bridge? You talk about it being some sort of bridge between those two things. Why that concept for you? Why do you think that is, for you, the most useful definition of trust?

Botsman: Because in many definitions of trust, it feels like trust is an attribute rather than a process. And one of the things I felt was really important is that we rarely think about the link between trust and progress and innovation, and how societies move forward. But when you start to think of it like that, you realize that trust is actually the key component not just for companies but any organization that wants human beings to try new things.

The other thing that I think is really important is that often people talk about trust and transparency as sort of brother and sister. But I actually don't think they're two sides of the same coin. Because if you think about a relationship: I have a dear friend who shall remain nameless, and she's all, “Oh you know, my husband and I, we have a great relationship. It's so trusting.” But she checks his emails. And she checks his messages. And that is actually… That's, in her words, transparency, right. So I think when you need organizations or you need things to be completely transparent, we've actually given up on trust.

And so what I love about trust is that there is this degree of uncertainty, that we don't know the outcomes of how things are going to turn out, and that's… Really, when you think about it, that's how progress happens.

Mason: You set up so wonderfully at the beginning of the book this environment in which it feels like today trust is at an all-time low, where you talk about things like fake news, and you set that framing up as something of a misnomer. You think in actual fact all these things that are happening in the world mean that trust is even more important. Could you explain that?

Botsman: Yeah, my ideas in books—like most books—start as a hunch. And this was like you know, I kept opening every magazine, every paper, and the headline symptom was that trust was in crisis. So whether that was to do with healthcare or politics or the media or fake news, old world, new world, tech companies, I was like you know, this doesn't add up. This feels like a fearful meme that is being spread like a virus.

And I don't think we are a less trusting society. I think suspicion and fear is very high. But one of the things I started to wonder was why do we say we don't trust bankers, and yet two million people will stay on Airbnb. And so what was really helpful was when I started to think of trust like energy. And like energy it cannot be destroyed, but it's changing form. And so that was a real sort of lightbulb moment if you like, because then what I realized was that trust that used to flow upwards to referees and experts and regulators and institutions was now starting to flow in a different direction, and technology was accelerating and enabling this process.

Mason: Now, we're going to talk about how collaborative consumption and trust work together. But could you quickly explain what collaborative consumption is?

Botsman: Yeah. So collaborative consumption was the subject of my first book What's Mine Is Yours, which I wrote in 2009. And what was funny about it was I intentionally picked that moniker and that term because I wanted it to be the opposite of hyperconsumption. And it was based on a very simple idea of recognizing that all kinds of assets in our lives—not just physical assets like spaces and stuff but also human assets: people's skills, people's passions, monetary assets—that the world was full of this idling capacity, and that what technology was enabling us to do is unlock it and make it liquid.

So the book opened with Airbnb, and it's really embarrassing to read because it's like, “There's ten thousand rooms around the world and it's going to be massive.” And I remember my editor saying to me like, “You shouldn't open with this story because this company's going to be dead by the time the book comes out, because it's just not going to work. Strangers are not going to trust one another.”

Now what was interesting is like, when the book came out, the real lesson to me, because it was the middle of the financial crisis, was that everyone thought this was a trend, that it was about people being cheap, trying to make money. And I sort of underestimated how much ideas, if you like, react to the environment in which they're born.

But then it started to get traction. And then people realized that they didn't like the term anymore, so they said like, “No, we've got to rebrand it ‘the sharing economy.’” And I always said that was a problem, because in some instances there were really really beautiful forms of sharing. And they still exist, we just don't hear about them anymore in the media, where people are doing amazing things.

And then other platforms—like Uber, which was just launching at the time—were about the efficiency of assets. They were about these asset-light networks that were basically enabling people to get things cheaper, more conveniently, and more efficiently, and I knew the term sharing was going to become an Achilles heel.

Mason: It seemed that Airbnb is a good example of this thing called the sharing economy. And to a degree, this term collaborative consumption is a reaction to that. Because Airbnb isn't really sharing, is it? It's about utilizing a platform to allow you to exchange excess surplus. If it was truly sharing, the currency would be: I would give out my sofa for two weeks and that would gain me two weeks—

Botsman: Like couchsurfing.

Mason: Like couchsurfing.org, which used to be a wonderful model, and which is now being destroyed by Airbnb.

Botsman: Ooh, capitalism.

Mason: Well, I was a big user of couchsurfing.org when I was traveling through San Francisco in 2012. Those sofas that were previously given out in exchange for the ability to either do up someone's house or hang out with some interesting people now have a dollar figure on them. That sofa is now worth something, thanks to Airbnb.

Botsman: I don't think… I don't think sharing has to be for non-monetary reasons. I think money can be involved. And I think the trouble with Airbnb is it's a very mixed model. So I do think there is a segment where it genuinely is sharing, even when you're charging for a room… Like, I've met many of these hosts, and yes the money is an important driver, and we shouldn't underestimate the percentage of people that now depend on that for their mortgage or their rent. But they're doing it because they do want to share their home and they do want to share their local experience and have a human connection. So you know, I think the empty nesters are really interesting. It is for social reasons as well as— You know, you do meet hosts and they don't need the money. They just like the human exchange.

The problem is that when any platform hits scale, commercial interests will take over. And you've got landlords that are just buying up buildings and there is no element of sharing involved, and it's a mixed model, right. And so when you go on that marketplace it's really hard to make the distinction. Which is why I think they're moving in the food, in the experiences direction, because they're trying to bring back that humanness and that local flavor that was really kinda key to their success in the early days.

Mason: I always saw that as Airbnb realizing that unlike Marriott or Hilton, Airbnb hands you over to your hosts and then there's no longer a touchpoint with them. So when you go and check into a hotel you have the Hilton or the Marriott experience. The chocolate on the pillows, Hilton or Marriott. The towels are Hilton or Marriott. With Airbnb, their touchpoint ends as soon as you meet your host.

And I almost have the inverse view of perhaps why they're so obsessed with trying to build equity around the entire experience— I think they're worried that you know, they want to keep you in the platform. They want to be the thing that you book your experience in San Francisco or London with.

Botsman: Yeah, look, don't get me wrong. I don't think they are shy of their ambitions to really own that hospitality ecosystem, and beyond that. So it will become a portal into your life. But I genuinely… I mean, I know those founders. And I genuinely believe this isn't about them wanting to be in the middle of the relationship. I think they want to help facilitate deeper relationships and for Airbnb to have a broader meaning. But I do think they've realized that it's been diluted, and that the experiences component and the trip component is a way to inject that back in.

I also think they realize that there are so many rival products coming onto the market. Like you look at the hotel brands now, and you look at things like ROAM—it's like Airbnb in a hotel. So yeah. I'm more skeptical than you.

Mason: I've always wondered why there was never a Hiltonbnb. I always thought that Hilton demanded regulation around Airbnb, or was at least one of the groups that demanded regulation about Airbnb. I always wondered why they didn't go, “You know what? We're going to start a Hiltonbnb, but the rules are you're able to give out your free room if it has a power shower and a fifteen-[?] pillow and goose feather pillows and accessories.” And I always wondered why that wasn't the reaction: they up the quality game and actually own that space in the same way that Airbnb did.

Botsman: Well I think it's… One of my favorite writers and thinkers is Clay Shirky. And he put it so brilliantly, I think it was in his second book, where he said institutions will always try to preserve the problem to which they are the solution. And when you understand that quote, it explains so many ways that we just naturally respond to disruption.

But, we are seeing that. Hyatt just bought Onefinestay. Wyndham just bought Love Home Swap. So I find it amazing that it's nine years later. And many of the hotel brands are now building these local community hubs where you have a room but you share your meal, and it doesn't feel like a hotel, and all the rooms are different.

Mason: Is that less of a reaction to Airbnb and more the rise of boutique hotels? There seems to be something quite interesting about having an “other” experience, as opposed to the same homogenized experience that you can get in any city through any hotel-branded experience.

Botsman: Yeah. I mean, I think it was an emerging trend in travel where people were saying like— And this is where it's really interesting where it relates to trust. Because you know, Hilton has spent more than a hundred years building a brand that is about consistency—or Marriott—so that you stay in a Marriott in Budapest and then you stay in one in Tokyo and they—

Mason: Look exactly the same.

Botsman: —exactly the same, down to the pillows. And that was…the concierge, and it's all about reliability… So you can imagine them as brands being like, “What just happened? Our whole brand promise was essentially on the trust of consistency and reliability, and now the traveler's saying, ‘Actually, we want a little bit of misalignment. We want a little bit of surprise. We want things to be different…’” And what they're actually trusting is something completely different.

Mason: Well let's talk again about trust and how it operates in platforms such as Airbnb. So, to a degree you make a decision when you book an Airbnb as to the price and the location, but also the trustworthiness of the user. In part because we have seen some examples of fraud through the platform, but also because we want to make sure that human being is a good human being, and every individual not just has a profile for how comfortable their room is but a profile for how personable they are. They're building this thing that you call a reputation. How do trust and reputation interplay?

Botsman: Yeah. I mean, it's a really interesting question, because one of the things that I'm fascinated by—and let's just stay on Airbnb—is where does trust really lie? So, definitely when Airbnb started and they were an unknown brand, trust was very much between the host and the guest. And then I think as the platform became more sophisticated—so like even from the algorithm around the recommendations, payment systems, instant booking—you could argue that the trust sort of migrated more to the technology. And now, as Airbnb's become more of a brand, the brand still plays an important role.

But I still think— And people often say, “Well, there's no trust between the host and the guest,” because often they don't meet one another, but I don't think that's true. Because you've always got to trust the host in terms of you know, is the picture really like the place? Are they consistent in their offering? So one of the things that always fascinated me is how do we make judgments about strangers? How do we place our faith in strangers? And what kind of signals do we use to make an assessment?

And we saw this in eBay. We saw how powerful— You look at that system and it's so rudimentary—it's five stars, basically. And so now what's happening is that these profiles and these rating and review systems are becoming more and more sophisticated, if you like, and more and more contextual. So people are realizing it's okay if you're a nice person, but what really matters is whether you're clean, whether you're polite, whether you're going to be a good host as well. And so people are starting to realize that their reputation is a currency. So if you are a host and you have a lot of really good reviews and you have a high rating, you're more likely to get a booking.

Similarly as a guest you know, it happened to me where I had an unfortunate experience and I left early for the airport— I am taking responsibility for this, but I was not the one who checked out, and my kids had left a little bit of a mess. And I got a really bad review. And I found it hard to get a booking after that. And that's what I think is so powerful about these systems: yes, there are a lot of flaws in these systems. But now, with the blind review process, it does keep the marketplace strangely accountable to one another. People do behave differently in an Airbnb than they behave in a hotel.

Mason: You've used examples of specific behavior change with regards to how towels—

Botsman: [laughs] Well I was thinking about myself as a guest. And I was thinking like, “What do I do in a hotel that I don't do in an Airbnb?” And I'm like, guilty of leaving towels on the floor.

Mason: Uh huh. Me too.

Botsman: Guilty, right? Now, I try not to use a different towel every day, because I object to that. But when I leave, it's fine if they're on the floor. I would never ever do that in an Airbnb, because I just know that that could lead to a judgment about me, and it's not worth it because it could damage my reputation.
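To make the point about ratings becoming “more and more contextual” concrete: a reputation score is, at bottom, a weighted blend of per-dimension ratings, and the weights depend on what you are being trusted to do. A minimal sketch in Python, with invented dimensions and weights (nothing here reflects how Airbnb, eBay, or any real platform actually scores people):

```python
# Toy illustration of a contextual reputation score. The dimensions and
# weights are invented; they do not reflect how Airbnb, eBay, or any
# real platform computes ratings.

CONTEXT_WEIGHTS = {
    # When hosting, cleanliness and reliability dominate niceness.
    "hosting": {"cleanliness": 0.4, "reliability": 0.4, "friendliness": 0.2},
    # When driving, safety dominates everything else.
    "driving": {"safety": 0.7, "punctuality": 0.2, "friendliness": 0.1},
}

def reputation(ratings: dict, context: str) -> float:
    """Blend per-dimension star ratings (0-5) into one contextual score."""
    weights = CONTEXT_WEIGHTS[context]
    return sum(weight * ratings.get(dim, 0.0) for dim, weight in weights.items())

# The same person scores differently depending on what you ask of them.
alice = {"cleanliness": 4.8, "reliability": 4.5, "friendliness": 3.9,
         "safety": 3.2, "punctuality": 4.0}
print(f"as a host:   {reputation(alice, 'hosting'):.2f}")  # 4.50
print(f"as a driver: {reputation(alice, 'driving'):.2f}")  # 3.43
```

The same person comes out with a different number in each context, which is one reason a single portable reputation score is harder than it sounds.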

Mason: So these ways in which we're capturing trust and turning that into reputation—you talk about these things called reputation dashboards: the ability for every single human to have a dashboard of how trustworthy they are.

Botsman: Yes. Do you know, I feel I was very naïve when I first started speaking about those things?

Mason: Well, could you explain— I mean, how might this reputation dashboard idea operate? Or is that idea now a defunct idea?

Botsman: It's not a defunct idea. I just… I didn't really… I had this idea, and it was basically from talking to lots and lots of users, where they would say, “I've got this super high rating on Airbnb, but I want to become a TaskRabbit.” Or, “I just want to start selling on Etsy and I'm like a ghost in the system.”

And at the same time, I was thinking about my own life, where I've lived in every continent but Antarctica. And the biggest pain point is not settling into a city, it's that I cannot get a phone. I cannot get insurance. I cannot get a bank account, because I'm a ghost in the system. And it's really hard to port your credit history across countries.

So my idea was, if we could own all this data— This data is us. Like, this data belongs to us; all this data being generated on us, it has value. And could I have a Rachel Botsman dashboard that says, when I go to my insurance company or I go and try and get a flat, I could pay a lower tenancy bond because I have a really good reputation? And the idea was that the more you gather that information, it could become highly contextual. Because if I'm a really good driver on BlaBlaCar, maybe I could use that in terms of my driving insurance.

I think where I was naïve is, A, the companies don't want to give us this data, right—

Mason: I wonder, have you seen any potential solves for that? Because the reputation data that we build, say in eBay or Amazon or Airbnb or Uber—our Uber ranking—they all exist in separate stacks across separate systems, and maybe that's a good thing. Maybe you don't want your Uber ranking to define your ability to operate a car, for example.

Botsman: Yeah. I mean that's— The tricky thing about this is people tend to forget that trust is highly contextual. And so what I really didn't think hard enough about is forgiveness and transgressions, and that you know, as much as a reputation could empower us and unlock value, all the bad mistakes that we've made could also follow us the rest of our lives.

I think it's… You know, there are many startups in this space. And a few are starting to crack it. So there's a really interesting company called Traity. They've been in the game now for like seven years, and the founder said they've just been warming up, running around the track so that they're ready to go. And they're focusing actually on some emerging markets where people cannot move into the city, because they cannot get an apartment, a place to live, because they have no credit history.

Or, as we were talking about, transient people then have to pay a really high tenancy bond. And now they're working with insurance companies, and have actually got a product on the market using people's reputation data. But I think what's really interesting about that is that they're going into situations where people understand the risk, and people understand that reputation could be a risk premium—that it has a value—versus convincing you that your reputation has value.

Mason: I mean, have we found any solutions for relinquishing our reputation data from the stack? Is anybody working specifically on finding solutions to allow us to have these things such as reputation dashboards? There's a part of me that kind of likes the idea, for the same reason that you were mentioning: what happens when you enter a market starting from zero? So if you're a brand new Airbnb host? A lot of times I've looked at a brand new Airbnb host, realized they've got no reviews, the house looks really nice. But then I'm like, is that too good to be true?

Botsman: I know. It's annoying.

Mason: Because they've got zero ranking, there's like a zero rating. It's only been there for a week. And there is a part of you that goes, you know what, I'm gonna go for the other one, which is slightly more expensive, slightly less… Looks slightly less nice, but at least they've got eight, nine reviews. I mean, if you start at zero, how do you enter the reputation market?

Botsman: And it becomes harder as the market becomes more mature. Because those ratings are proof. They're really really important psychologically in a booking.

I think the other really big problem is like… So there's a new platform that launched called Kid & Coe, and it's focusing on the family segment. And we have a home that is perfectly set up for families, and we need to go into family homes. I can't port my reputation onto Kid & Coe. And that's where it's frustrating, because it's the same behavior, it's the same offer, but I'm locked in.

Now, some platforms… So, there was Legit. There's one called Good Karma. There's one called TrustCloud. There's one called Trust Portal. They're all figuring out how to scrape this data, but there's still this question as to whether it really belongs to you.

Mason: So there's no open API for reputation, as such.

Botsman: No. And you can argue it will happen with the blockchain, potentially. The problem is that we don't have a digital data locker to pull it into. So you think of the number of leaps you're trying to get the average consumer through, right. Oh, your reputation has value. You should try and port it. Oh, it can live over here. And we haven't quite figured this out yet, but it's going to have uses in all different parts of your life.

And then I think the ick factor is that we don't like to be judged. And so this was what really played out in Who Can You Trust?, especially when I started studying and researching what was going on in China, where every citizen will have a trust score, so to speak, or a social citizen score—by 2020 it will be absolutely mandatory. And that is the real extreme example of this, and that's what I think people fear. So they place more fear on the idea than value.

Mason: Let's talk about those Chinese citizen scores, as you give this wonderful definition and diagram in the book. I mean, it's everything from sort of how you act as an individual to even your shopping habits becoming a marker of your character. Could you just explain that specific Chinese example?

Botsman: Yeah. And so I should give a little bit of context. The intention behind this, or so the Chinese government say, is partly economic, right. So fraud is a really big problem in China. And what we underestimate is that many people do not have a credit history. So the way it was set up initially was: we can bypass credit scores, and we can look at all these different inputs that show whether a person is trustworthy and the likelihood of how they will behave. Which sounds quite logical.

But then you look at the inputs, and what they initially did—which they've now banned—is they gave licenses to the big data companies. So they went out to Tencent, they went out to Alibaba. And as you say, they could track… The example is like, say I bought nappies: because I have kids, my score might go up, because I'm a responsible parent. But if you were playing video games, you're lazy and your score goes down.

And I don't think it's going to stop there, because you can see like, they must be able to see the behavior within the video game. So the question that I then… I think the next wave of this is like, what kind of player are you? Do you get score—

Mason: Are you a PewDiePie or are you a…

Botsman: And then, there's no line to it. And I think the thing that really frightens me about it is the inputs, when they start to get into social networks—based on, like, if you said something about Tiananmen Square, your score could go down. And because social connections are built in, and it's Chinese culture that you are accountable for how your friends and families and colleagues behave, you could get punished for what someone else does. So you can see like, “Well, I'm going to unfriend that person because they're dragging my score down.”

But then the frightening thing…and I'm sure many people have watched Black Mirror. And one of my favorite episodes is “Nosedive,” which is just genius, where the main character, Lacie Pound, is living in this world where she wakes up in the morning and she practices her smile because she might earn a couple of points. And she gets a coffee and she rates how the milk was swirled. And she's trying to earn these points because she wants to stay in this apartment and her score has to reach a certain level.

And I could go into the whole episode, but the frightening thing was, with the system in China they did exactly the same thing, in that initially it was all built around reward. So you could get a fast-track visa. You get better interest rates on your loans. Your children could even go to different schools.

But then they announced the penalties. And what was so scary in “Nosedive,” in this weird way that art mirrors reality, is that it ends and Lacie cannot take an aeroplane— She can't take this plane trip because her score's dropped too low. And in China they banned more than six million people from taking flights, because they had low trust scores.

And so what I find very very dystopian and disturbing is that the punishment doesn't fit the crime—there's no correlation. So I get that it's this, like, national system to try and make people more accountable, but really it's gamified obedience.
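Mechanically, the score Botsman describes (nappies up, video games down, penalties below a threshold) is just a running total over weighted behavioral signals. A toy sketch, with invented signals and weights that bear no relation to the real system's inputs:

```python
# Toy model of the scoring mechanic described above. The signals and
# weights are invented for illustration only; they are not the real
# system's inputs.

SIGNAL_WEIGHTS = {
    "bought_nappies": +10,       # read by the system as "responsible parent"
    "played_video_games": -15,   # read by the system as "lazy"
    "friend_score_dropped": -5,  # punished for someone else's behavior
}

def update_score(score: int, events: list) -> int:
    """Apply each observed event's weight to the running score."""
    return score + sum(SIGNAL_WEIGHTS.get(event, 0) for event in events)

score = update_score(600, ["bought_nappies", "played_video_games"])
print(score)  # 595, and below some threshold the penalties kick in
```

The arithmetic is trivial; everything that matters lives in who chooses the signals, the weights, and the threshold at which the penalties kick in, which is exactly the “gamified obedience” point.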

Mason: I mean, there's a wonderful opportunity insofar as identity then becomes divorced from sort of biology. Whether it's gender or race, you're not judged on those characteristics; you're judged on the quality of your character. And there are great possibilities with that. But then the flipside is essentially it creates another class system all of its own, right: the trustworthies versus the untrustworthies.

Botsman: But the thing that worries me is… And I rewrote that chapter and rewrote it, because I kept saying, “Is this my Western view? Is this my Western lens on this?” And you know, it's a popularity contest that by design only a few people can win. But the reason why I kept rewriting it is we're not that far off. Like you know, you think…you go, “Oh, that's never going to happen here.” But you look at the way people thumbs up, thumbs down… You hear people saying, “Oh, I'm going to be friends with you cause you have thousands of followers on Instagram,” and that people's [crowds?] has influence now and—

Mason: But influence is different from trust, though, isn't it? Because I would never trust, say, one of these big YouTube stars like a Logan Paul. I certainly wouldn't trust him with my kids. If I ever had kids I certainly wouldn't trust Logan Paul with them. Influence is very different from trust, and influence is built on generating character. I mean, these are performative personas that exist online through a certain lens, through a media lens. It's very easy to…not fake influence, but it's very easy to generate influence by actually following the tropes of what goes, let's say, viral.

Botsman: Yeah.

Mason: Whereas trust is an entirely different thing.

Botsman: It is. And it's a really good point. I mean, you could buy influence, right—

Mason: Bought your followers, yeah.

Botsman: —And Klout was a really good example of that. PeerIndex and all those things. But where I was going with it is the behavior, the mechanic, of constantly looking at what someone's doing and having a response to it in real time. The next wave on from that is a judgment not of whether they look nice, but of whether they're in some way competent or honest or… That's when it starts to get into how trustworthy that person is. And that's where I think it gets incredibly frightening.

And inevitable. I really, I mean I do believe that by the time I finish— I mean, I think it's actually already happening. It's just that the way companies are making judgments about us isn't visible to us.

Mason: So humans— And you raise this in the conclusion of the book. Humans are these flawed, messy individuals. We make mistakes. And then how does this sort of system allow for individuals to make mistakes? To be rebels. To rebel. To be radical.

I mean, in the UK Theresa May was arguing for the idea that you could remove your digital identity when you reach eighteen and start afresh—you know, generate a new life yourself and get your parents to remove all the Facebook photos of you as a baby that potentially could be analyzed to work out whether you're already anally retentive, because you're sucking your thumb in a photo of you at five years old. Where, or how, do we build in a degree of allowance for genuine mistakes?

Botsman: And this is the frightening thing, because I think what is beautiful about human beings and what makes us human is that we are complicated, and we are messy, and we have bad days and we make mistakes, and that that's how we learn and that's how we move on. And I was probably the last generation—because I'm about to turn forty—where…it's gone, right. There is no record of my university days. There's no pictures of me at Piers Gaveston. Which is a very good thing.

But that was, like, part of me growing up. That was part of me discovering, like, it's not a good idea to do that thing. So I really am with the Prime Minister. I do think that every… It's like, on your eighteenth birthday you get the right to control and delete. Because if we're judged by those errors when we go for our first job, or whatever it is—even, like, where our children can go to school—that's a really precarious place where we're taking society.

Mason: Should it be an explaining system rather than a compliance system, similar to how tax works in the UK versus the US? There's an explaining system here, whereas in the US it's the IRS that basically owns you. There's no explanation. You can't explain your way out of any issues that you have.

Botsman: Yeah. And I think that's a really good analogy. Because so much of this comes down to who has control over these decisions. Like, where does this data reside? So that's why I think the new regulations coming in, the GDPR, like…it could be a real change, and I think if we had ownership and control over it, we could actually unlock the value around it in very exciting ways. It's when a government or a very very large network monopoly owns this that we edge into surveillance and control. And I think the recurring theme in the book was how easily we now give away our trust.

Mason: Can an anonymous identity be trusted? You look in the book at the example of…Silk Road was essentially a drug-buying platform that was heavily built on trust. I mean, these are arguably society's most problematic…

Botsman: Untrustworthy peo— Yeah. They’re not, though.

Mason: Arguably, society says that drug dealers are the most untrustworthy people. You can't trust these individuals who are doing criminal activity, and yet they had these incredible ratings on Silk Road. The entire ecosystem is built on the fact of “are you gonna get your ketamine delivered, and is it going to be a good cut of cocaine?” But those weren't… They were anonymized identities at the same time at which they were trusted entities.

Botsman: Yeah.

Mason: I mean, how does anonymity really play into this?

Botsman: Yeah, I mean, context is king. I do think that. And it was in Jamie Bartlett's book actually, The Dark Net, where he said it's not dark—there are actually a thousand torches shining on how the dealers behave. And the amazing thing about the dark web is it actually gives the consumer a lot of power, right. Because there's so much choice that those dealers—to your point—they have to send the drugs on time. They have to be the correct weight and the right quality. And if you actually look at the testing of the drugs, they're saying that is the case.

And the interesting thing is, you know, they do have pseudonyms. They often don't have photos. They're just…logos. But I would say that is a mechanism that is in some way actually making people more trustworthy. Because with the key ingredients of trust, people often think it's about competence and reliability. Those are pretty easy for human beings to achieve. It's intentions that are key. So, are your intentions aligned with mine?

Now, if you're street dealing, you could argue no. But if you're dealing on the Web and that rating is critical for your future income, suddenly you've aligned the intentions of the buyer and the seller in a really interesting way.

Mason: So in other words, trust can be divorced from authentic identity. Because…I think Randi Zuckerberg was one of the first to say with Facebook you need to have your real name used. It can't be anonymous. It needs to be your “real” identity.

Botsman: Yeah. I think it really depends on the situation. So I think the expectation is on the dark web— Like, if you're using a real identity, you'd kind of be suspicious. So it's like an expected social norm. Whereas if I was King Porn on BlaBlaCar, you'd be suspicious, because you'd want to know that driver's name actually matches their license. But I do think you can have a trustworthy system. I think you can have trust even when people are anonymous.

Mason: We mentioned blockchain very briefly. And blockchain allows you to decentralize some of this—say, reputation data or bio data or currency—in a way that perhaps might be the model through which we can build an exchange of trust and reputation. Could you explain your interest in blockchain around these ideas more specifically?

Botsman: Yeah. I mean I have a… I have a hard time with blockchain. I get what it is, but I have a hard time because people are describing it as a trustless system. And I think the reason why describing it as a tru— Have you heard that term? Or a trust machine. Or like it's this idea that you no longer need intermediaries or… Because you can transfer trust directly. And I think that is rubbish, because a lot of trust is required even in how the blockchain works.

The interesting thing to me about the blockchain is whether it can truly remain decentralized. And are we seeing what's playing out right now with the banks—that they just take a technology, and they're really good at privately owning it and putting fences around it, so it just becomes a more efficient way of transferring assets? So that's the bit that I'm kind of cynical around.

Mason: But we're seeing— You mentioned Ethereum in the book. And originally most people know blockchain through Bitcoin, and the ability to use Bitcoin to buy drugs from the dark web. But Ethereum's slightly different insofar as Ethereum allows for these things called smart contracts, which…kinda help this whole trust—

Botsman: Yes.

Mason: —element. It's that magic piece of paper where I write on my piece of paper and it replicates on your piece of paper and all the other pieces of paper across the world. I mean, in what way is Ethereum different, and how does it actually help with this issue of trust and building reputation?

Botsman: Yeah. So I think what's really impor— I mean, it sounds so basic, but the blockchain is like the backbone under Bitcoin. And then the easiest way to think of it is that the original blockchain that was built by Satoshi Nakamoto was a bit like a calculator—it only had one function, right. So it could transfer money. And when Vitalik and the founders of Ethereum came along… They might not describe it like this, but it was kind of like the smartphone, right. So how could we make this open, decentralized platform where people could build lots and lots of apps on top of it?

The piece that excites me the most, and I think has the most promise for trust, is these smart contracts. And this idea that two parties could agree to some terms before some kind of event where there is a clear outcome. So, that could be an exchange of a house. That could be the results of Wimbledon. But there has to be a clear outcome. And the smart contract could automatically transfer the asset or pay out, based on that outcome. I think that is… We can't even imagine the applications of that, and that we could remove so-called “trusted intermediaries,” whether that's real estate brokers or betting agents or [whispers] lawyers (God forbid), or accountants or… When you start to think of the potential of that, you start to see why people are saying that this is really the next wave of the Internet: we really transformed communications and knowledge transfer, but we didn't change the way human beings could fundamentally trust one another around the transfer of assets.

So, I think you're totally right that it's not the currency piece, it's the smart contract piece that is the most interesting thing about Ethereum.
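The pattern she describes (terms fixed up front, a clearly decidable outcome, automatic payout) can be sketched in a few lines. Below is a toy Python model of the Wimbledon example; real Ethereum smart contracts are written in languages such as Solidity and execute on-chain, and this sketch glosses over the hard part, namely who reports the outcome to the contract:

```python
# Toy sketch of the smart-contract pattern described above: two parties
# lock stakes against a clearly decidable event, and the payout follows
# mechanically from the reported outcome, with no betting agent in the
# middle. Illustrative only; real contracts run on a blockchain.

class WimbledonBet:
    def __init__(self, stakes: dict, picks: dict):
        # Terms are agreed and locked before the event.
        self.pot = sum(stakes.values())
        self.picks = picks
        self.settled = False

    def settle(self, champion: str) -> dict:
        """Split the pot among whoever picked the actual champion."""
        if self.settled:
            raise RuntimeError("contract already settled")
        self.settled = True
        winners = [p for p, pick in self.picks.items() if pick == champion]
        if not winners:  # nobody picked right; refund logic would go here
            return {}
        share = self.pot // len(winners)
        return {p: share for p in winners}

bet = WimbledonBet(stakes={"alice": 100, "bob": 100},
                   picks={"alice": "Federer", "bob": "Nadal"})
print(bet.settle("Federer"))  # {'alice': 200}
```

Neither party has to trust the other once the terms are locked in, but both still have to trust whoever calls settle() with the result, which is one reason Botsman calls the “trustless” label rubbish.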

Mason: And the cryptocurrency market as it stands right now feels like it's heavily built on trust.

Botsman: Yeah.

Mason: I mean, it's one of these weird markets where it's reliant on the memetic power of how people perceive the stability of the Ethereum and the blockchain markets. And we saw an example of this: we thought that the founder of Ethereum had died in a car crash, and suddenly the market crashed. Or something is written very very positively about Ethereum or Litecoin or Bitcoin, and suddenly you see on Coinbase that the currency goes up. And the reliance and the trust mechanism within the cryptocurrencies seems to be a trust that everybody's going to stay in, and we aren't going to have one or two individuals with a large amount of the currency trying to liquidate it very very quickly, which would crash the market. It's a very odd thing to witness.

Botsman: It is. It's really odd. And what I also think is slightly disappointing to me is that it's not divorced from value responding to incidents and public perception, in the way that is normal for fiat currency. And so I think that with the news around Vitalik, it's not really a decentralized system when you can see that kind of fluctuation…so dependent on one human being.

And that's the thing that I think I really struggle to get my head around. People are saying, oh, it's this immutable type of value transfer, and you're like, “No, because the founders can still go in and hack the system and change it.” And so there's still a center. There's still leadership, and when things go wrong, who do they call for? They call for the founder. They call for the programmers to fix the problems. And I struggle… I think, like, maybe it's a very small percentage of the human population that is actually comfortable in these completely decentralized systems where there is no institution and there is no other leader to hold accountable but yourself.

Mason: And there's been issues with people posting their bitcoin wallet codes online, going, “Oh look, I've taken my first bitcoin out!” and you can basically decode it from the two QR codes. What they have done is they've scribbled over the number and not realized that the QR code is the thing that gives you the number anyway, and suddenly we've got access to that entire person's history. Although, I still believe to a degree that blockchain will allow us to have the sort of thing you were referring to with regards to these reputation dashboards. I think that'll allow us to have some form of identity wallet that's decentralized to us, whereby the real hope, the real thing that was actually exciting about these reputation dashboards rather than dystopian, was the fact we would own it.

Botsman: Yeah, we would ow— Yeah.

Mason: If we're going to start ambiently creating not just social data but ambiently creating neuro data and bio data, and we're going to spit in a tube and send it to 23andMe and get a certain degree of data back from that—if we are these entities that not just produce CO2 but also produce data—surely we should be allowed to hold it in our own cloud, our own bodies, or hold it to ourselves, and then make the decisions as to how we trade it or donate it. With bio data—

Botsman: Or keep it private.

Mason: Or keep it private. You know, with bio data maybe I want to sell something to a drug company—some of my bio data to a drug company—but maybe I want to donate it to a medical research group who're doing incredible work around rare diseases, if I happened to have a rare disease. I mean…the issue comes with how do we then get the general public thinking about this before the cryptocurrency markets crash and we don't trust it.

Botsman: Right. And it might not be totally crypto—

Mason: It might not happen.

Botsman: Yeah. But no, I think the thing is that you know, we're going to laugh, I think, in ten, fifteen years' time that we were worried about our stars and reviews going in this locker, because the exhaust will become so much richer and so much more personal, and it will be like we're admitting how our bodies function on a minute-by-minute basis, and what happens to that data. So I think getting the platform and the dashboard and the locker—whatever language—we've got to get it right now, right. And so we have to take the ownership back. And I do think the technology that offers the most promise around that is the blockchain.

And you can see it in countries like Estonia, right, where they're saying… You know, they didn't have formal institutional systems, and they're starting off and saying, “Right, everyone has a digital identity, and it is going to sit on the blockchain. And then once you have that, your health data can go in there. Your social services data can go in there. But you the citizen own that data. And you have to give us, the government, permission to access that.”

Mason: Well, it might not be governments or institutions. It might be other nonhuman entities, such as bots and algorithms, that make those decisions based on that data on our behalf. You mention in the book—and it's one of the most exciting chapters for me personally—this idea of the possibility of us placing trust not just in humans but in algorithms to help guide us. And there are some wonderful possibilities there, but there are also some potentially problematic and scary outcomes.

Botsman: Oh no. There are no scary outcomes. No, I mean… The interesting thing is, like, when you talk about this idea of trusting a bot—and I find this really fascinating—the reaction is immediately negative. Like, you might be a rare exception, but there are very few people that go, “Right, Rachel. Like, could you tell me how that's going to change my life for the better?” The human response is, “Holy crap. Robots are going to take control. We'll lose control. We are ceding power to these algorithms.”

And then you start to say, well you know, do you let Netflix choose your shows for you, or Amazon make rec— We've already done that, right. We're already trusting algorithms to make decisions, we're just not conscious of the process. And I think what's happening with self-driving cars, in a strange way, is it's brought this issue of how do you trust the intention of a bot or a machine or whatever it is, and it's bringing that question to the surface even though it's been there for quite a long time.

Mason: Well, they're trying to code the human back in— Take the example of autonomous cars. They're trying to code the human elements back into the design of autonomous cars because of that word “intentionality.” So when you're driving a car, you look at the cues of the human behind the wheel. So you know they're not paying attention, or they're staring elsewhere, or they're on their phone, or they're not staring directly at you. And now you have researchers at certain universities in the US designing cars that have these light systems that're essentially…blinking at you. There's a human element to these self-driving cars where, as a human, you're crossing the road and you're not sure of the intentionality of that car, as to whether it's actually seen you or not. And now they're cueing in these kinds of ways in which these nonhuman agents talk back to us and show their intentionality. And I think that issue, the word “intentionality,” is going to be the most important thing to allow us to take this fuzzy notion called trust and actually gift it to nonhumans.

Botsman: No, yeah. And I think it's not even how the machine responds. It's even earlier than that. So one of the interesting studies I came across was being done by MIT, where they intentionally—no pun [?]— They intentionally programmed the car with different intentions. And so there was the car that would always make the right choice—the trolley problem, all that—and would be quite rational if it was in a situation where it had to choose who to kill. And then there was a car that would always protect the driver. And you ask people which car they would choose. And if they're talking about other people, they're like, “Oh, you should pick the car that makes the rational judgment.” If they're talking about which car they would prefer to buy, it's the one that would always protect themselves.

And so trusting intentions is… The people making the decision as to how the car's going to respond at the moment are the car companies, and the engineers. And so we have to trust their intentions—that they're not just programming the car that's going to sell the best because it's going to protect the driver.

Mason: Well, there was an argument that I've seen made about how we port a degree of our identity into self-driving cars. So if your self-driving car could ambiently capture your entire social data and find out what sort of biases you may implicitly or explicitly have, it may make decisions on your behalf as to what sort of decisions it would make on the road. So if it was the Donald Trump self-driving car, it would make the decision to career into the family of immigrants, for example, versus career into the one white male who is walking on the other side of the road. That's being posited as a possibility, which then creates a whole bunch of—

Botsman: You can superimpose your biases and discrimination—

Mason: Superimpose your biases on these algorithms, insofar as these algorithms have already been designed with a degree of human bias—

Botsman: Yeah.

Mason: —implicit in them.

Botsman: Yeah. And that's— You know, I think one of the best books on this is called Weapons of Math Destruction, by Cathy O'Neil. And she talks a lot about this. And I think this is the really tricky thing: the optimistic side of me says, where do humans make very bad judgments? Either because they listen to the wrong trust signals, or because the mind, as Jonathan Haidt puts it, is a story processor—it's not logical. And bots can be very logical and non-judgmental in a way that human beings can't.

And so that…it's like we've almost skipped over all the possibilities of algorithms and bots being able to make better decisions than human beings, because we fear—where you're going—that you could take Donald Trump's personality and sort of impress it on an algorithm, and it would make judgments based on that, and that could hijack an entire system. So, I find it really interesting that we go negative so quickly on this idea when there is so much potential for algorithms to actually make better decisions than human beings. And I think the thing that we're worried about is that… You know, I think of my relationship to technology as still being very predictable. And there's an easy way for me to assess that predictability—does my car turn on? But as soon as things start making decisions for us, then you have to trust the intentions behind the decisions, and that's really hard to assess.

Mason: It's one of the issues that IBM Watson has right now with regards to how IBM Watson diagnoses certain diseases. IBM Watson…at least their PR team, are very focused on saying IBM Watson doesn't diagnose. IBM Watson makes suggestions—

Botsman: Suggestions, yeah.

Mason: —that then a human doctor looks at. And the human doctor diagnoses. They always want their intermediary. Even though the press, the popular press, will go, “Oh, IBM Watson has diagnosed someone with this rare disease.” They go, “Whoa, okay. IBM Watson wasn't a human. It isn't a person.”

Botsman: But how do you force that human judgment, right? So if the machine makes a recommendation— You hear this a lot with companies who are doing very very sophisticated background checks, say for recruiting, and they're giving people the information to say, is this person a really good fit for your company. But they're saying, you know, a human being must look at the scorecard, right. And you have to look at the context as to why they might have scored lowly on antisocial. Like, how do you enforce that?

Because whether you're a doctor or whether you're a recruiter, you naturally want things to become more efficient. And this is what I find so hard, is that technology naturally makes things more seamless and speeds things up, and it's this very accelerated mode that is the enemy of trust.

Mason: But if you know what the technology or the algorithm's looking for, isn't it then possible to trick and play that game? So for example, friends of mine are being interviewed at the moment through video. And these videos, these are for corporate jobs, and these videos are being watched in the first instance by essentially a bot looking for certain cues. And the folks who know what that algorithm is looking for inside of that video, with regards to certain cues, are able to fake their way through that first piece of the system.

Or for example, certain people have been told how to signal with the body with certain cues to fake the results of lie detectors. If you know the rules of the game, then it's possible to cheat.

Botsman: And it’s human nature to try and beat or trick the machine, right.

Mason: Hundred percent, yeah.

Botsman: Yeah, I mean, I think there's an argument that the… And again, like, do we want this—that the machine… Could the machine and the artificial intelligence get to a point where it understands every single trick in the book, because it learns over time?

Mason: But I'm just saying, could you fake trustworthiness? So I used to have a friend—

Botsman: Yeah, no no. You can.

Mason: I used to have a friend, and every time he was in a bar in London—this was back when Foursquare was big—he used to go into Foursquare and find the local vegan restaurant that was open, and check into the local vegan restaurant instead of the pub. And I used to ask him, “Why're you doing that?” He goes, “Well, if the insurance companies, or if the medical insurance companies, ever do look at my Foursquare data, I wanna make it look like I'm living a healthy life and not sitting in the pub every so often.” So it's possible to a degree—

Botsman: It is.

Mason: —to fake this stuff.

Botsman: I think, though, the comforting thing about trust is its sister, which is trustworthiness. So that's the trait that human beings have that we assess. It's actually quite easy to trick competence, which is one component, and reliability, which is the other. It's very very hard to trick integrity, which is tied to that intentions piece. So I think you know, how does…how do you… I don't know. Like, how would you trick— I don't know how—

Mason: But when designing the algorithms, you've got to look at certain things which… Again, it's context—as you said so wonderfully at the beginning of this conversation, it's a context issue. And for example, we've come to accept that politicians will philander.

Botsman: Yeah.

Mason: But that doesn't make them a bad politician. And I know that you were involved with the Clinton Foundation. Maybe we'll cut this bit. But you were involved with the Clinton Foundation. He was a good politician who happened to do some things which were, in the eyes of the public, not seen to be necessarily virtuous—

Botsman: Yeah.

Mason: —but that doesn't… And yes, to a degree he lied to the public about his doings, you know, “I did not have sexual relations with that woman.”

Botsman: Are you doing his accent now?

Mason: Trying and failing. But it didn't make him a bad politician. He was a brilliant politician.

Botsman: Yeah. And this I think is really interesting. So if you think of the ingredients of trustworthiness being competence, reliability, integrity, and benevolence—how much people care—the alchemy of that is very different. So like a lawyer, for example…I'm not sure about benevolence if I really want a lawyer that's going to go after someone and is the opposite of me—

So I think that is the point, that people can still be trustworthy to do— It's what you are asking them to do which is the issue with so many of these surveys where they go, “Do you trust the media? Do you trust journalists?” right. Well, to do what? Like, what is it you're asking the question around? So yeah, context is king when it comes to trust.

Mason: So if context is king, then it's almost impossible right now to design those systems. It has to be built on some form of messiness. There has to be some degree of fuzziness in the system. You call these moments where we're suddenly able to trust these new systems “trust leaps.” Explain what a trust leap is.

Botsman: So a trust leap, the way I define it, is when you take a risk to do something new or differently to the way you’d previously done it. So, it doesn’t have to be monumental things. It could be something relatively minor, so: I no longer need a paper bill, I’ll check my bill online. It could be the first time you used an ATM, put your credit card details into a website, used eBay, got into a self-driving car.

And human beings, we are naturally very good at taking trust leaps. We’ve been doing it throughout the history of time. Bartering to currency is a really good example of a trust leap. I think the difference today is that we are being asked to leap faster and higher than ever before, which is why everyone talks about this unprecedented rate of change. But ultimately trust leaps are really, really good in terms of understanding, again, innovation and progress. Because this is what enables people to move forward. So they’re kind of like this conduit or channel that enables new ideas to travel.

But I think where companies often make a mistake is asking people to leap too far, too fast, or there isn’t enough social proof—which is different when I’m thinking of early adopters—but there’s not enough people who’ve made the leap for the rest of us to go, “Safe. It’s worth trying. There’s something in it for me.” And so…it’s kinda weird, but when I started visualizing human beings leaping into all these different areas, with trust sort of enabling that, you know, it does start to explain so many patterns of change and progress.

Mason: But then… What I worry about is if you’re constructing an identity based on reputation, it’s going to make you very averse to taking risks.

Botsman: Well, not necessarily. So look, I just want to be clear that I am now fully aware that this is a very dangerous situation, that people have reputation dashboards. But I could argue the flip side of that: if there was a real bonus, if my score went up because I have a high propensity to take risks, because I am willing to go places and try things that other people haven’t tried, and through that there’s medical progress, whatever it is, you could argue that I get a little boop! in my reputation. So it can…

But the thing that worries me is continuously using penalties and rewards to motivate human behavior. And that’s the piece that I struggle to get my head around, is how do you… And this guy who’s the founder of Traity, he put it really well. He said, “You know, I don’t think we should see reputation as a currency. I think we should see it as a risk premium.” And I think that’s a really good way of framing it.

Mason: Do you think the prevailing wind on where all of this is going has something more nuanced to do with how people are parented in the modern age? There was a running joke that the reason for Donald Trump is that America had daddy issues and it just needed…

Botsman: Daddy, yeah.

Mason: It just needed a daddy. It needed a parent who was kind of tough, and was going to tell it what to do, and then that populace could relinquish a degree of control over their lives and put their trust into something else. In the same way that, you know, when we see a doctor all the rules change. All the rules go out the window. It’s like, “Take blood, take urine, take anything you need. Take all the data you want from me, just make me better, I trust you.” You want to see a doctor to basically relinquish control of the situation that you’re in.

Do you think there’s something underlying all of this— It’s the thing I kept thinking about when I was reading your book; it’s like, God, maybe there are just some issues with how this generation’s been parented, that they’re looking for something to just trust or look up to, or to find some sort of authority, at the same time at which they’re taught to reject authority. But then they’re like, “But we need authorities in our lives.” There’s this kind of…whatever, expands, contracts, expands, contracts.

Botsman: I think that’s a really astute observation and I think… I think Trump is an example of too much trust in the wrong places. But you look at… Not to go back into the election, but. I think this is much deeper than him just appealing to people’s feelings and understanding how you tap into anger and fatigue and being antiestablishment. To your point, I think, you know, probably not my parents but my grandparents. They had a very clear structure around trust, right. Like, they had a very clear hierarchy around it. And whether that was in the schools and the respect for their teachers, there was a very clear line around that. Like, you did trust what experts and economists and scientists said. And we’ve grown up being told to question everything, and being told even, like you know… what is in your Facebook feed at the top next to pictures of your friend’s newborn baby may not be true.

And so I think it’s this vacuum that is very, very dangerous, because to your point, what rises up are these archetypes and these very strong figures that are actually incredibly smart at tapping into the pulse of that trust vacuum. You know, Michael Gove is another really good example of that. And people are then placing so much hope and trust into something that is essentially untrustworthy. And so I do think it is very much tied to this information overload, to hierarchical structures breaking down, where we don’t know where to look. Do we look up? Do we look to the person on the bus? Do we look to our friend on Instagram? And through that confusion you see new forms of dictatorship, new forms of aggression. Because they’re just very loud and clear to people.

Mason: I think maybe to a degree there’s a fracturing of how we generate our identity. We have to generate an identity for the online persona, something else out here. Then there’s an arguable rise in mental illness with regards to how people are operating in the world. And then they look to trust a therapist to tell them the way through. So we have these constant pulls over the way in which we operate. Trust then becomes this kind of very, very fuzzy, very problematic element of that.

Botsman: It does. And it’s funny, because when I… I’m generally a very optimistic person. I like to redesign things so that they’re better. And the hard thing I found writing this book is that I’m lying to the reader if I pretend this all ends well.

Mason: It’s all a lie.

Botsman: And I’m not saying the book is very depressing, but the manuscript came back and it was like, “Hope. Hope.” And I said look, there is hope in this. But the hope in this is actually around individual accountability. It’s around all of us stopping blaming the institutions and blaming the tech companies and starting to say, “I accept those terms without thinking. I accept I place more value on convenience than trust when I take an Uber ride. I buy books on Amazon and they don’t pay their fair share of taxes.” And that for me is actually the hope piece in this, is that through the mess we see greater individual accountability in society. And that by letting maybe the machines and the algorithms take over decision-making, we in a funny way realize what it means to be human.

Mason: Thank you to Rachel for sharing her thoughts on how modern technology has generated a trust shift. You can find out more by purchasing Rachel’s new book Who Can You Trust? How Technology Brought Us Together – and Why It Could Drive Us Apart, available now.

If you like what you’ve heard, then you can subscribe for our latest episodes. Or follow us on Twitter, Facebook or Instagram: @FuturesPodcast.

More episodes, transcripts and show notes can be found at futurespodcast.net.

Thank you for listening to the Futures Podcast.