Luke Robert Mason: You’re listening to the Futures Podcast with me, Luke Robert Mason.
On this episode, I speak with author and media commentator Rachel Botsman.
We rarely think about the link between trust and progress and innovation, and how societies move forward. But when you start to think of it like that, you realize that trust is actually the key component not just for companies but any organization that wants human beings to try new things.
Rachel Botsman, excerpt from interview
Rachel shared her insights into new potentials for trust, the current state of collaborative consumption, and innovative new uses for blockchain technology. This episode was recorded on location in London, England at the offices of publisher Penguin Random House.
Luke Robert Mason: So Rachel, your new book is on trust. So, how do we define trust and how should we define trust?
Rachel Botsman: Go straight in there with the tough question. No, the reason why I say it’s a tough question is it’s actually fascinating that trust is the most debated sociological concept in terms of agreeing on a definition. And there are actually hundreds of papers disagreeing on the definition of trust.
So, I was looking at all of these things, and a lot of them are around predictability of outcomes and people’s expectations. And I thought this was really interesting because what was missing for me is when you’re asking human beings to trust, there is a degree of vulnerability involved. And so I started really thinking about the relationship between trust and risk.
And what I realized is that in any situation where you’re asking someone to trust—that can be in a human relationship, that could be in a new product, it could be in a new concept, a new place—you’ve always got these two variables going on. So you’ve got a known state that you are comfortable in, and then you’ve got an unknown state. And then the gap between is what we call risk.
But risk isn’t what enables human beings to try new things and to sort of move forward, so trust is literally the bridge between the known and unknown. And that’s why I define trust as a confident relationship with the unknown.
Mason: This is such a wonderful phrase, a confident relationship with the unknown. In what way is trust this bridge? You talk about it being some sort of bridge between those two things. Why that concept for you? Why do you think that is, for you, the most useful definition of trust?
Botsman: Because in many definitions of trust, it feels like trust is an attribute rather than a process. And one of the things I felt was really important is that we rarely think about the link between trust and progress and innovation, and how societies move forward. But when you start to think of it like that, you realize that trust is actually the key component not just for companies but any organization that wants human beings to try new things.
The other thing that I think is really important is that often people talk about trust and transparency as sort of brother and sister. But I actually don’t think they’re two sides of the same coin. Because if you think about a relationship, I have a dear friend who shall remain nameless and she’s all, “Oh you know, my husband and I, we have a great relationship. It’s so trusting.” But she checks his emails. And she checks his messages. And that is actually… That’s in her words transparency, right. So I think when you need organizations or you need things to be completely transparent we’ve actually given up on trust.
And so what I love about trust is that there is this degree of uncertainty, that we don’t know the outcomes of how things are going to turn out, and that’s… Really when you think about it that’s how progress happens.
Mason: You set up so wonderfully at the beginning of the book this environment in which it feels like trust today is at an all-time low, where you talk about things like fake news; and you set that up as something of a misnomer. You think in actual fact all these things that are happening in the world mean that trust is even more important. Could you explain that?
Botsman: Yeah, my ideas in books—like most books—start as a hunch. And this was like, you know, I kept opening every magazine, every paper, and whatever the headline, the symptom was that trust was in crisis. So whether that was to do with healthcare or politics or the media or fake news, old world, new world, tech companies, I was like, you know, this doesn’t add up. This feels like a fearful meme that is being spread like a virus.
And I don’t think we are a less trusting society. I think suspicion and fear are very high. But one of the things I started to wonder was why we say we don’t trust bankers and yet two million people will [?] on Airbnb. And so what was really helpful was when I started to think of trust like energy. Like energy, it cannot be destroyed, but it changes form. And so that was a real lightbulb moment if you like, because then what I realized was that trust that used to flow upwards to referees and experts and regulators and institutions was now starting to flow in a different direction, and technology was accelerating and enabling this process.
Mason: Now, we’re going to talk about how collaborative consumption and trust work together. But could you quickly explain what collaborative consumption is?
Botsman: Yeah. So collaborative consumption was the subject of my first book, What’s Mine Is Yours, which I wrote in 2009. And what was funny about it was I intentionally picked that moniker, that term, because I wanted it to be the opposite of hyperconsumption. And it was based on a very simple idea: recognizing that all kinds of assets in our lives, not just physical assets like spaces and stuff but also human assets—people’s skills, people’s passions—and monetary assets, that the world was full of this idling capacity, and what technology was enabling us to do is unlock it and make it liquid.
So the book opened with Airbnb and it’s really embarrassing to read because it’s like, “There’s ten thousand rooms around the world and it’s going to be massive.” And I remember my editor saying to me like, “You shouldn’t open with this story because this company’s going to be dead by the time the book comes out because it’s just not going to work. Strangers are not going to trust one another.”
Now what was interesting is, when the book came out—it was the middle of the financial crisis—the real lesson to me was that everyone thought this was a trend, that it was about people being cheap, trying to make money. And I sort of underestimated how much ideas, if you like, react to the environment in which they’re born.
But then it started to get traction. And then people decided they didn’t like the term anymore, so they said, “No, we’ve got to rebrand it ‘the sharing economy.’ ” And I always said that was a problem, because in some instances there were really, really beautiful forms of sharing. And they still exist, we just don’t hear about them in the media anymore, where people are doing amazing things.
And then other platforms—Uber was just launching at the time—were about the efficiency of assets. They were about these asset-light networks that were basically enabling people to get things cheaper, more conveniently, and more efficiently, and I knew at the time that “sharing” was going to become an Achilles heel.
Mason: It seemed that Airbnb is a good example of this thing called the sharing economy. And to a degree, this term collaborative consumption is a reaction to that. Because Airbnb isn’t really sharing, is it? It’s about utilizing a platform to allow you to exchange excess surplus. If it was truly sharing, the currency would be: I would give out my sofa for two weeks and that would gain me two weeks—
Botsman: Like couchsurfing.
Mason: Like couchsurfing.org, which used to be a wonderful model, and which is now being destroyed by Airbnb.
Botsman: Ooh, capitalism.
Mason: Well, I was a big user of couchsurfing.org when I was traveling through San Francisco in 2012. And those sofas that were previously given out in exchange for the ability to either do up someone’s house or hang out with some interesting people now have a dollar figure on them. That sofa is now worth something, thanks to Airbnb.
Botsman: I don’t think… I don’t think sharing has to be for non-monetary reasons. I think money can be involved. And I think the trouble with Airbnb is it’s a very mixed model. So I do think there is a segment where it genuinely is sharing, even when you’re charging for a room… Like, I’ve met many of these hosts, and yes, the money is an important driver, and we shouldn’t underestimate the percentage of people that now depend on that for their mortgage or their rent. But they’re doing it because they do want to share their home and they do want to share their local experience and have a human connection. So you know, I think the empty nesters are a really interesting example. It is for social reasons as well— You know, you do meet hosts and they don’t need the money. They just like the human exchange.
The problem is that when any platform hits scale, commercial interests will take over. And you’ve got landlords that are just buying up buildings and there is no element of sharing involved, and it’s a mixed model, right. And so when you go on that marketplace it’s really hard to make the distinction. Which is why I think they’re moving in the food, in the experiences direction, because they’re trying to bring back that humanness and that local flavor that was really kinda key to their success in the early days.
Mason: I always saw that as Airbnb realizing that unlike Marriott or Hilton, Airbnb hands you over to your hosts and then there’s no longer a touchpoint with them. So when you go and check into a hotel, you have the Hilton or the Marriott experience. The chocolate on the pillows: Hilton or Marriott. The towels are Hilton or Marriott. With Airbnb, their touchpoint ends as soon as you meet your host.
And I almost have the inverse view of why they’re perhaps so obsessed with trying to build equity around the entire experience. I think they’re worried that, you know… They want to keep you in the platform. They want to be the thing that you book your experience in San Francisco or London with.
Botsman: Yeah, look, don’t get me wrong. I don’t think they are shy of their ambitions to really own that hospitality ecosystem, and beyond that, so it will become a portal into your life. But I genuinely… I mean, I know those founders. And I genuinely believe this isn’t about them wanting to be in the middle of the relationship. I think they want to help facilitate deeper relationships and for Airbnb to have a broader meaning. But I do think they’ve realized that it’s been diluted and that the experiences component and the trip component is a way to inject that back in.
I also think they realize that there’s so many rival products coming onto the market. Like you look at the hotel brands now, and you look at things like ROAM, it’s like Airbnb in a hotel. So yeah. I’m more skeptical than you.
Mason: I’ve always wondered why there was never a Hiltonbnb. I always thought that Hilton demanded regulation around Airbnb, or was at least one of the groups that demanded it. I always wondered why they didn’t go, “You know what? We’re going to start a Hiltonbnb, but the rules are you’re able to give out your free room if it has a power shower and a fifteen-[?] pillow and goose feather pillows and accessories.” Why wasn’t that the reaction: up the quality game and actually own that space in the same way that Airbnb did?
Botsman: Well I think it’s… One of my favorite writers and thinkers is Clay Shirky. And he put it so brilliantly—I think it was in his second book—where he said institutions will always try to preserve the problem to which they are the solution. And when you understand that quote, it explains so much about the ways we naturally respond to disruption.
But, we are seeing that. Hyatt just bought Onefinestay. Wyndham just bought Love Home Swap. So I find it amazing that it’s nine years later. And many of the hotel brands are now building these local community hubs where you have a room but you share your meal and it doesn’t feel like a hotel and all the rooms are different.
Mason: Is that less of a reaction to Airbnb and more the rise of boutique hotels? There seems to be something quite interesting about having an “other” experience, as opposed to the same homogenized experience that you can get in any city through any hotel brand.
Botsman: Yeah. I mean, I think it was an emerging trend in travel where people were saying like— And this is where it’s really interesting where it relates to trust. Because you know, Hilton has spent more than a hundred years building a brand that is about consistency—or Marriott. You stay in a Marriott in Budapest and then you stay in one in Tokyo and they—
Mason: Look exactly the same.
Botsman: —exactly the same, down to the pillows. And that was…the concierge, and it’s all about reliability… So you can imagine them as brands being like, “What just happened? Our whole brand promise was essentially built on the trust of consistency and reliability, and now travelers are saying, ‘Actually, we want a little bit of misalignment. We want a little bit of surprise. We want things to be different…’ ” And what they’re actually trusting is something completely different.
Mason: Well let’s talk again about trust and how it operates in platforms such as Airbnb. So, to a degree you make a decision when you book an Airbnb as to the price and the location, but also the trustworthiness of the host. In part because we have seen some examples of fraud through the platform, but also because we want to make sure that human being is a good human being, and every individual has not just a profile for how comfortable their room is but a profile for how personable they are. They’re building this thing that you call a reputation. And how do trust and reputation interplay?
Botsman: Yeah. I mean, it’s a really interesting question, because one of the things that I’m fascinated by—and let’s just stay on Airbnb—is where does trust really lie? So, definitely when Airbnb started and they were an unknown brand, trust was very much between the host and the guest. And then I think as the platform became more sophisticated—the algorithm around the recommendations, payment systems, instant booking—you could argue that the trust sort of migrated more to the technology. And now that Airbnb’s become more of a brand, the brand still plays an important role.
But I still think— And people often say, “Well, there’s no trust between the host and the guest,” because often they don’t meet one another, but I don’t think that’s true. Because you’ve always got to trust the host in terms of you know, is the picture really like the place? Is it consistent in their offering? So one of the things that always fascinated me is how do we make judgments about strangers? How do we place our faith in strangers? And what kind of signals do we use to make an assessment?
And we saw this in eBay. We saw how powerful— You look at that system and it’s so rudimentary—it’s five stars, basically. And so now what’s happening is that these profiles and these rating and review systems are becoming more and more sophisticated, if you like, and more and more contextual. So people are realizing it’s not just whether you’re a nice person; what really matters is whether you’re clean, whether you’re polite, whether you’re going to be a good host as well. And so people are starting to realize that their reputation is a currency. So if you are a host and you have a lot of really good reviews and you have a high rating, you’re more likely to get a booking.
Similarly as a guest, you know, it happened to me where I had an unfortunate experience and I left early for the airport— I am taking responsibility for this, but I was not the one who checked out, and my kids had left a little bit of a mess. And I got a really bad review. And I found it hard to get a booking after that. And that’s what I think is so powerful about these systems: yes, there are a lot of flaws in them, but now with the blind review process, it does keep the marketplace strangely accountable to one another. People do behave differently in an Airbnb than they behave in a hotel.
Mason: You’ve used examples of specific behavior change with regards to how towels—
Botsman: [laughs] Well I was thinking about myself as a guest. And I was thinking like, “What do I do in a hotel that I don’t do in an Airbnb?” And I’m like, guilty of leaving towels on the floor.
Mason: Uh huh. Me too.
Botsman: Guilty, right? Now, I try not to use a different towel every day because I object to that. But when I leave, it’s fine if they’re on the floor. I would never ever do that on an Airbnb because I just know that that could lead to a judgment about me and it’s not worth it because it could damage my reputation.
Mason: So these ways in which we’re capturing trust and turning that into reputation, you talk about these things called reputation dashboards. The ability for every single human to have a dashboard of how trustworthy they are.
Botsman: Yes. Do you know, I feel I was very naïve when I first started speaking about those things?
Mason: Well, could you explain— I mean, how might this reputation dashboard idea operate? Or is that idea now defunct?
Botsman: It’s not a defunct idea. I just… I didn’t really… I had this idea, and it was basically from talking to lots and lots of users, where they would say, “I’ve got this super high rating on Airbnb, but I want to become a TaskRabbit.” Or, “I just want to start selling on Etsy and I’m like a ghost in the system.”
And at the same time, I was thinking about my own life, where I’ve lived in every continent but Antarctica. And the biggest pain point is not settling into a city, it’s that I cannot get a phone. I cannot get insurance. I cannot get a bank account, because I’m a ghost in the system. And it’s really hard to port your credit history across countries.
So my idea was, if we could own all this data— This data is us. Like, this data belongs to us; all this data being generated on us has value. And could I have a Rachel Botsman dashboard so that, say, when I go to my insurance company or I go and try and get a flat, I could pay a lower tenancy bond because I have a really good reputation? And the idea was that the more you gather that information, it could become highly contextual. Because if I’m a really good driver on BlaBlaCar, maybe I could use that in terms of my driving insurance.
I think where I was naïve is, A, the companies don’t want to give us this data, right—
Mason: I wonder have you seen any potential solves for that? Because the reputation data that we build, say in eBay or Amazon or Airbnb or Uber—our Uber ranking—they all exist in separate stacks across separate systems, and maybe that’s a good thing. Maybe you don’t want your Uber ranking to define your ability to operate a car, for example.
Botsman: Yeah. I mean that’s— The tricky thing about this is people tend to forget that trust is highly contextual. And so what I really didn’t think hard enough about is forgiveness and transgressions, and that you know, as much as a reputation could empower us and unlock value, all the bad mistakes that we’ve made could also follow us for the rest of our lives.
I think it’s… You know, there are many startups in this space. And a few are starting to crack it. So there’s a really interesting company called Traity. They’ve been in the game now for like seven years, and the founder said they’ve just been warming up, running around the track so that they’re ready to go. And they’re focusing actually on some emerging markets where people cannot move into the city because they cannot get an apartment, a place to live. Because they have no credit history.
Or, as we were talking about transient people, they then have to pay a really high tenancy bond. And now they’re working with insurance companies and have actually got a product on the market using people’s reputation data. But I think what’s really interesting about that is that they’re going into situations where people understand the risk and people understand that reputation could be a risk premium, that it has a value, versus convincing you that your reputation has value.
Mason: I mean, have we found any solutions for relinquishing our reputation data from the stack? Is anybody working specifically on finding solutions to allow us to have these things such as reputation dashboards? There’s a part of me that kind of likes the idea, for the same reason you were mentioning: what happens when you enter a market starting from zero? Say you’re a brand new Airbnb host. A lot of times I’ve looked at a brand new Airbnb host, realized they’ve got no reviews, the house looks really nice. But then I’m like, “Is that too good to be true?”
Botsman: I know. It’s annoying.
Mason: Because they’ve got zero ranking, there’s like a zero rating. It’s only been there for a week. And there is a part of you that goes, you know what, I’m gonna go for the other one which is slightly more expensive, looks slightly less nice, but at least they’ve got eight, nine reviews. I mean, if you start at zero, how do you enter the reputation market?
Botsman: And it becomes harder as the market becomes more mature. Because those ratings are social proof. They’re really, really important psychologically in a booking.
I think the other really big problem is like… So there’s a new platform launched called Kid & Coe, and it’s focusing on the family segment. And we have a home that is perfectly set up for families, and we need to go into family homes. I can’t port my reputation onto Kid & Coe. And that’s where it’s frustrating, because it’s the same behavior, it’s the same offer, but I’m locked in.
Now, some platforms… So, there was Legit. There’s one called Good Karma. There’s one called TrustCloud. There’s one called Trust Portal. They’re all figuring out how to scrape this data, but it’s still a question as to whether it really belongs to you.
Mason: So there’s no open API for reputation as such.
Botsman: No. And you can argue it will happen with the blockchain, potentially. The problem is that we don’t have a digital data locker to pull it into. So you think of the number of leaps you’re trying to get the average consumer through, right. Oh, your reputation has value. You should try and port it. Oh, it can live over here. And we haven’t quite figured this out yet, but it’s going to have uses in all different parts of your life.
And then I think the ick factor is that we don’t like to be judged. And so this was what really played out in Who Can You Trust?, especially when I started studying and researching what was going on in China, where every citizen will have a trust score, so to speak, or a social citizen score—by 2020 it will be absolutely mandatory. And that is the real extreme example of this, and that’s what I think people fear. So they place more fear on the idea than value.
Mason: Let’s talk about those Chinese citizen scores, as you give this wonderful definition and diagram in the book. I mean, it’s everything from sort of how you act as individuals to even your shopping habits becoming a marker of your character. Could you just explain that specific Chinese example?
Botsman: Yeah. And so I should give a little bit of context. The intention behind this, or so the Chinese government says, is partly economic, right. Fraud is a really big problem in China. And what we underestimate is that many people do not have a credit history. So the way it was set up initially was that we can bypass credit scores and we can look at all these different inputs that show whether a person is trustworthy and the likelihood of how they will behave. Which sounds quite logical.
But then you look at the inputs, and what they initially did—which they’ve now banned—is they gave licenses to the big data companies. So they went out to Tencent, they went out to Alibaba. And as you say, they could track… The example is like, say I bought nappies because I have kids, my score might go up because I’m a responsible parent. But if you were playing video games, you’re lazy and your score goes down.
And I don’t think it’s going to stop there because you can see like, they must be able to see the behavior within the video game. So the question that I then…I think the next wave of this is like, what kind of player are you? Do you get score—
Mason: Are you a PewDiePie or are you a…
Botsman: And then, there’s no line to it. And I think the thing that really frightens me about it is the inputs, when they start to get into social networks—like if you said something about Tiananmen Square, your score could go down. And because social connections are built in, and it’s Chinese culture that you are accountable for how your friends and family and colleagues behave, you could get punished for what someone else does. So you can see like, “Well, I’m going to unfriend that person because they’re dragging my score down.”
But then the frightening thing… And I’m sure many people have watched Black Mirror. One of my favorite episodes is “Nosedive,” which is just genius. The main character, Lacie Pound, is living in this world where she wakes up in the morning and practices her smile because she might earn a couple of points. And she gets a coffee and she rates how the milk was swirled. And she’s trying to earn these points because she wants to stay in this apartment and her score has to reach a certain level.
And I could go into the whole episode, but the frightening thing was with the system in China they did exactly the same thing, that initially it was all built around reward. So you could get a fast-track visa. You get better interest rates on your loans. Your children could even go to different schools.
But then they announced the penalties. And what was so scary in “Nosedive,” and this weird way that art mirrors reality, is that it ends with Lacie unable to take an aeroplane. She can’t take this plane trip because her score’s dropped too low. And in China they banned more than six million people from taking flights, because they had low trust scores.
And so what I find very very dystopian and disturbing is that the punishment doesn’t fit the crime—there’s no correlation. So I get it’s this like, national system to try and make people more accountable but really it’s gamified obedience.
Mason: I mean, there’s a wonderful opportunity insofar as identity then becomes divorced from biology, whether it’s gender or race. You’re not judged on those characteristics; you’re judged on the quality of your character. And there are great possibilities with that. But then the flipside is essentially it creates another class system all of its own, right: the trustworthies versus the untrustworthies.
Botsman: But the thing that worries me is… And I rewrote that chapter and rewrote it because I kept saying, “Is this my Western view? Is this my Western lens on this?” And you know, it’s a popularity contest that by design only a few people can win. But the reason why I kept rewriting it is we’re not that far off. Like you know, you think…you go, “Oh, that’s never going to happen here.” But you look at the way people thumbs up, the thumbs down… You hear people saying, “Oh, I’m going to be friends with you ’cause you have thousands of followers on Instagram,” and that people’s [crowds?] has influence now and—
Mason: But influence is different from trust, though, isn’t it? Because I would never trust say, one of these big YouTube stars like a Logan Paul. I certainly wouldn’t trust him with my kids. If I ever had kids I certainly wouldn’t trust Logan Paul with them. Influence is very different from trust, and influence is built on generating character. I mean, these are performative personas that exist online through a certain lens, through a media lens. It’s very easy to…not fake influence, but it’s very easy to generate influence by actually following the tropes of what goes let’s say viral.
Botsman: Yeah.
Mason: Whereas trust is an entirely different thing.
Botsman: It is. And it’s a really good point. I mean, you could buy influence, right—
Mason: Bought your followers, yeah.
Botsman: —And Klout was a really good example of that. PeerIndex and all those things. But where I was going with it is the behavior, the mechanic, of constantly looking at what someone’s doing and having a response to it in real time. The next wave on is: is that then a judgment not of whether they look nice, but of whether they’re in some way competent or honest or… That’s when it starts to get into how trustworthy that person is. And that’s where I think it gets incredibly frightening.
And inevitable. I really, I mean, I do believe that by the time I finish… I mean, I think it’s actually already happening. It’s just that the way companies are making judgments about us isn’t visible to us.
Mason: So humans— And you raise this in the conclusion of the book. Humans are these flawed, messy individuals. We make mistakes. And then how does this sort of system allow for individuals to make mistakes? To be rebels. To rebel. To be radical.
I mean, in the UK Theresa May was arguing that you should be able to remove your digital identity when you reach eighteen and start afresh; you know, generate a new life for yourself and get your parents to remove all the Facebook photos of you as a baby that potentially could be analyzed to work out whether you’re already anally retentive because you’re sucking your thumb in a photo of you at five years old. How do we build in a degree of allowance for genuine mistakes?
Botsman: And this is the frightening thing, because I think what is beautiful about human beings and what makes us human is that we are complicated, and we are messy, and we have bad days and we make mistakes, and that that’s how we learn and that’s how we move on. And I was probably the last generation—because I’m about to turn forty—where…it’s gone, right. There is no record of my university days. There’s no pictures of me at Piers Gaveston. Which is a very good thing.
But that was like, part of me growing up. That was part of me discovering, like, it’s not a good idea to do that thing. So I really am with the Prime Minister. I do think that on your eighteenth birthday you should get the right to control and delete. Because if we’re judged by those errors when we go for our first job, or whatever it is, even where our children can go to school, that’s a really precarious place we’re taking society to.
Mason: Should it be an explaining system rather than a compliance system, similar to how tax works in the UK versus the US? There’s an explaining system here, whereas in the US the IRS basically owns you. There’s no explanation. You can’t explain your way out of any issues that you have.
Botsman: Yeah. And I think that’s a really good analogy. Because so much of this comes down to who has control over these decisions. Like, where does this data reside? So that’s why I think the new regulation coming in, the GDPR, could be a real change. And I think if we had ownership and control over it, we could actually unlock the value around it in very exciting ways. It’s when a government or a very, very large network monopoly owns this that we edge into surveillance and control. And I think the recurring theme in the book was how easily we now give away our trust.
Mason: Can an anonymous identity be trusted? You look in the book at the example of Silk Road, which was essentially a drug-buying platform that was heavily built on trust. I mean, these are arguably society’s most problematic…
Botsman: Untrustworthy peo— Yeah. They’re not, though.
Mason: Arguably, society says that drug dealers are the most untrustworthy people. You can't trust these individuals who are doing criminal activity, and yet they had these incredible ratings on Silk Road. The entire ecosystem is built on the fact of "are you gonna get your ketamine delivered and is it going to be a good cut of cocaine?" But those weren't… They were anonymized identities at the same time at which they were trusted entities.
Botsman: Yeah.
Mason: I mean how does anonymity really play into this?
Botsman: Yeah, I mean context is king. I do think that. And it was in Jamie Bartlett's book actually, The Dark Net, where he said it's not dark. There's actually a thousand torches shining on how the dealers behave. And the amazing thing about the dark web is it actually gives the consumer a lot of power, right. Because there's so much choice that those dealers—to your point—they have to send the drugs on time. They have to be the correct weight and the right quality. And if you actually look at the testing of the drugs, they're saying that is the case.
And the interesting thing is you know, they do have pseudonyms. They often don't have photos. They're just…logos. But I would say that is a mechanism that is in some way actually making people more trustworthy. Because the key ingredient of trust, people often think it's about competence and reliability. They're pretty easy for human beings to achieve. It's intentions that are key. So, are your intentions aligned with mine?
Now, if you're street dealing, you could argue no. But if you're dealing on the Web and that rating is critical for your future income, suddenly you've aligned the intentions of the buyer and the seller in a really interesting way.
Mason: So in other words trust can be divorced from authentic identity. Because…I think Randi Zuckerberg was one of the first to say with Facebook you need to have your real name used. It can’t be anonymous. It needs to be your “real” identity.
Botsman: Yeah. I think it really depends on the situation. So I think the expectation is on the dark web— Like if you're using a real identity, you'd kind of be suspicious. So it's like it's an expected social norm. Whereas if I was King Porn on BlaBlaCar, you'd be suspicious because you'd want to know that driver's name actually matches their license. But I do think you can have a trustworthy system. I think you can have trust even when people are anonymous.
Mason: We mentioned very briefly blockchain. And blockchain allows you to decentralize some of this say, reputation data or bio data or currency, in a way that perhaps that might be the model through which we can build an exchange of trust and reputation. Could you explain your interest in blockchain, around these ideas more specifically?
Botsman: Yeah. I mean I have a… I have a hard time with blockchain. I get what it is, but I have a hard time because people are describing it as a trustless system. And I think the reason why describing it as a tru— Have you heard that term? Or a trust machine. Or like it’s this idea that you no longer need intermediaries or… Because you can transfer trust directly. And I think that is rubbish, because a lot of trust is required even in how the blockchain works.
The interesting thing to me about the blockchain is whether it can truly remain decentralized. And are we seeing what's playing out right now with the banks, that they just take a technology and they're really good at privatizing it and putting up fences so it just becomes a more efficient way of transferring assets. So that's the bit that I'm kind of cynical around.
Mason: But we're seeing— You mentioned Ethereum in the book. And originally most people know blockchain through Bitcoin and the ability for Bitcoin to buy drugs from the dark web. But Ethereum's slightly different insofar as Ethereum allows for these things called smart contracts which…kinda helps this whole trust—
Botsman: Yes.
Mason: —element. It’s that magic piece of paper where I write on my piece of paper and it replicates on your piece of paper and all the other pieces of paper across the world. I mean, in what way is Ethereum different and how does it actually help with this issue of trust and building reputation?
Botsman: Yeah. So I think what's really impor— I mean it sounds so basic, but the blockchain is like the backbone under Bitcoin. And then the easiest way to think of it is that the original blockchain that was built by Satoshi Nakamoto was a bit like a calculator—it only had one function, right. So it could transfer money. And when Vitalik and the founders of Ethereum came along… They might describe it like this but it was kind of like the smartphone, right. So how could we make this open decentralized platform where people could build lots and lots of apps on top of it?
The piece that excites me the most and I think has the most promise for trust is these smart contracts. And this idea that two parties could agree to some terms before some kind of event, where there is a clear outcome. So, that could be an exchange of a house. That could be the results of Wimbledon. But there has to be a clear outcome. And that the smart contract could automatically transfer the asset or pay out, based on that outcome. I think that is… We can't even imagine the applications of that, and that we could remove so-called "trusted intermediaries," whether that's real estate brokers or betting agents or [whispers] lawyers (God forbid), or accountants or… When you start to think of the potential of that, you start to see why people are saying that this is really the next wave of the Internet, in terms of we really transformed communications and knowledge transfer but we didn't change the way human beings could fundamentally trust one another around the transfer of assets.
So, I think you’re totally right that it’s not the currency piece, it’s the smart contract piece that is the most interesting about Ethereum.
Mason: And the cryptocurrency market as it stands right now feels like it's heavily built on trust.
Botsman: Yeah.
Mason: I mean it’s one of these weird markets where it’s reliant on the mimetic power of how people perceive the stability of the Ethereum and the blockchain markets. And we saw an example of we thought that the founder of Ethereum had died in a car crash and suddenly the market crashed. Or something is written very very positively about Ethereum or Litecoin or Bitcoin, and suddenly you see on CoinBase that the currency goes up. And the reliance and the trust mechanism within the cryptocurrencies seems to be a trust that everybody’s going to stay in and we aren’t going to have one or two individuals with a large amount of the currency trying to liquidate it very very quickly, which will crash the market. It’s a very odd thing to witness.
Botsman: It is. It's really odd. And what also I think is slightly disappointing to me is that it's not divorced from value responding to incidents and public perception in the way that is normal for fiat currency. And so I think that the news around Vitalik is like, it's not really a decentralized system when you can see that kind of fluctuation…so dependent on one human being.
And that's the thing that I think I really struggle to get my head around. People are saying oh it's this immutable type of value transfer and you're like, "No, because the founders can still go in and hack the system and change it." And so there's still a center. There's still leadership, and when things go wrong, who do they call for? They call for the founder. They call for the programmers to fix the problems. And I struggle I think like, maybe it's a very small percentage of the human population that is actually comfortable in these completely decentralized systems where there is no institution and there is no other leader to hold accountable but yourself.
Mason: And there's been issues with people posting their paper bitcoin wallet codes online, going, "Oh look, I've taken my first bitcoin out!" and you can basically decode it from the two QR codes. What they have done is they've scribbled over the number and not realized that the QR code is the thing that gives you the number anyway, and suddenly you've got access to that person's entire history. Although, I still believe to a degree that blockchain will allow us to have the sort of thing you were referring to with regards to these reputation dashboards. I think that'll allow us to have some form of identity wallet that's decentralized to us, whereby the real hope and the real thing that was actually exciting about these reputation dashboards, not dystopian, was the fact we would own it.
Botsman: Yeah, we would ow— Yeah.
Mason: If we're going to start ambiently creating not just social data but ambiently creating neuro data and bio data, and we're going to spit in a tube and send it to 23andMe and get a certain degree of data back from that, if we are these entities that not just produce CO2 but also produce data, surely we should be allowed to hold it in our own cloud, our own bodies, or hold it to ourselves and then make the decisions as to how we trade it or donate it. With bio data—
Botsman: Or keep it private.
Mason: Or keep it private. You know, with bio data maybe I want to sell some of my bio data to a drug company, but maybe I want to donate it to a medical research group who're doing incredible work around rare diseases, if I happened to have a rare disease. I mean…the issue comes with how do we then get the general public thinking about this before the cryptocurrency markets crash and we don't trust it.
Botsman: Right. And it might not be totally crypto—
Mason: It might not happen.
Botsman: Yeah. But no, I think the thing is that you know, we're going to laugh I think in ten, fifteen years' time that we were worried about our stars and reviews going in this locker, because the exhaust will become so much richer and so much more personal, and it will be like we're emitting how our bodies function on a minute-by-minute basis, and what happens to that data. So I think getting the platform and the dashboard and the locker—whatever language, we've got to get it right now, right. And so we have to take the ownership back. And I do think the technology that offers the most promise is the blockchain around that.
And you can see it in countries like Estonia, right, where they're saying… You know, they didn't have formal institutional systems and they're starting off and saying, "Right, everyone has a digital identity, and it is going to sit on the blockchain. And then once you have that, your health data can go in there. Your social services data can go in there. But you the citizen own that data. And you have to give us permission as the government to access that."
Mason: Well it might not be governments or institutions. It might be other nonhuman entities such as bots and algorithms that make those decisions based on that data on our behalf. You mention in the book, and it's one of the most exciting chapters for me personally, this idea of the possibility for us placing trust not just in humans but in algorithms to help guide us. And there's some wonderful possibilities there but there's also some potentially problematic and scary outcomes.
Botsman: Oh no. There's no scary outcomes. No, I mean… The interesting thing is like when you talk about this idea of trusting a bot, and I find this really fascinating, is the reaction is immediately negative. Like, you might be a rare exception, but there are very few people that go, "Right, Rachel. Like, could you tell me how that's going to change my life for the better?" The human response is, "Holy crap. Robots are going to take control. We'll lose control. We are ceding power to these algorithms."
And then you start to say well you know, do you let Netflix choose your shows for you, or Amazon make rec— We've already done that, right. We're already trusting algorithms to make decisions, we're just not conscious of the process. And I think what's happening with self-driving cars in a strange way is it's brought this issue of how do you trust the intention of a bot or a machine or whatever it is, and it's bringing that question to the surface even though it's been there for quite a long time.
Mason: Well they're trying to code the human back in the— The example of autonomous cars. They're trying to code the human elements back into the design of autonomous cars because of that word "intentionality." So when you're driving a car, you look at the cues of the human behind the wheel. So you know they're not paying attention or they're staring elsewhere or they're on their phone or they're not staring directly at you, and now you have researchers at certain universities in the US designing these cars that have these light systems that're essentially…blinking at you. There's a human element to these self-driving cars where as a human you're crossing the road and you're not sure of the intentionality of that car as to whether it's actually seen you or not. And now they're cueing these kinds of ways in which these nonhuman agents talk back to us and show their intentionality. And I think that issue, the word intentionality, is going to be the most important thing to allow us to take this fuzzy notion called trust and actually gift it to nonhumans.
Botsman: No, yeah. And I think it's not even how the machine responds. It's even earlier than that. So one of the interesting studies I came across was being done by MIT, where they intentionally—no pun intended— They intentionally programmed the car with different intentions. And so there was the car that would always make the right choice, and the trolley problem, all that, and would be quite rational if it was in a situation where it had to choose who to kill. And then there was a car that would always protect the driver. And you ask people which car would you choose. And if they're talking about other people they're like, "Oh, you should pick the car that makes the rational judgment." If they're talking about which car they would prefer to buy, it's the one that would always protect themselves.
And so trusting intentions is… The person making that decision as to how the car’s going to respond at the moment is the car companies, and the engineers. And so we have to trust the intentions of them, that they’re not just programming the car that’s going to sell the best because it’s going to protect the driver.
Mason: Well there was an argument that I've seen made with how we port a degree of our identity into self-driving cars. So if your self-driving car could ambiently capture your entire social data and find out what sort of biases that you may implicitly or explicitly have, it may make decisions on your behalf as to what sort of decisions it would make on the road. So if it was the Donald Trump self-driving car, it would make the decision to career into the family of immigrants, for example, versus career into the one white male who is walking on the other side of the road. That's being posited as a possibility which then creates a whole bunch of—
Botsman: You can superimpose your biases and discrimination—
Mason: Superimpose your biases on these algorithms insofar as these algorithms have already been designed with a degree of human bias—
Botsman: Yeah.
Mason: —implicit in them.
Botsman: Yeah. And that's— You know, I think one of the best books on this is, it's called Weapons of Math Destruction by Cathy O'Neil. And she talks a lot about this. And I think this is the really tricky thing, is that the optimistic side of me says where do humans make very bad judgments? Either because they listen to the wrong trust signals, or because the mind, as Jonathan Haidt puts it, is a story processor—it's not logical. And bots can be very logical and non-judgmental in a way that human beings can't.
And so that…it’s like we’ve almost skipped over all the possibilities of algorithms and bots being able to make better decisions than human beings because we fear—where you’re going—that you could take Donald Trump’s personality and sort of impress it on an algorithm and it would make judgments based on that, and that could hijack an entire system. So, I find it really interesting that we go negative so quickly to this idea where there is so much potential for algorithms to actually make better decisions than human beings. And I think the thing that we’re worried about is that… You know, I think of my relationship to technology is still that it’s very predictable. And there’s an easy way for me to assess that predictability—does my car turn on? But as soon as things start making decisions for us, then you have to trust the intentions behind the decisions, and that’s really hard to assess.
Mason: It’s one of the issues that IBM Watson has right now with regards to how IBM Watson is diagnosing for certain diseases. IBM Watson…at least their PR team, are very focused on saying IBM Watson doesn’t diagnose. IBM Watson makes suggestions—
Botsman: Suggestions, yeah.
Mason: —that then a human doctor looks at. And the human doctor diagnoses. They always want their intermediary. Even though the press and the popular press will say, "Oh, IBM Watson has diagnosed someone with this rare disease." They go, "Whoa, okay. IBM Watson wasn't a human. It isn't a person."
Botsman: But how do you force that human judgment, right? So if the machine makes a recommendation— So you hear this a lot with companies who are doing very very sophisticated background checks, say for recruiting, and they're giving people the information to say is this person a really good fit for your company. But they're saying you know, a human being must look at the scorecard, right. And you have to look at the context as to why they might have scored low on antisocial. Like, how do you enforce that?
Because whether you’re a doctor or whether you’re a recruiter, you naturally want things to become more efficient. And this is what I find so hard, is that technology naturally makes things more seamless and speeds things up, and it’s this very accelerated mode that is the enemy of trust.
Mason: But if you know what the technology or the algorithm’s looking for, isn’t it then possible to trick and play that game? So for example, friends of mine are being interviewed at the moment through video. And these videos, these are for corporate jobs and these videos are being watched in the first instance by essentially a bot for certain cues. And the folks who know what that algorithm is looking for inside of that video with regards to certain cues are able to fake their way through that first piece of the system.
Or for example, certain people have been told how to signal with the body for certain cues to fake the results of lie detectors. If you know the rules of the game, then it's possible to cheat.
Botsman: And it’s human nature to try and beat or trick the machine, right.
Mason: A hundred percent, yeah.
Botsman: Yeah, I mean I think there’s an argument that the… And again like, do we want this, that the machine… Could the machine and the artificial intelligence get to a point where it understands every single trick in the book, because it learns over time?
Mason: But I’m just saying, could you fake trustworthiness? So I used to have a friend—
Botsman: Yeah, no no. You can.
Mason: I used to have a friend, every time he used to be in a bar in London, he used to go—this was back when Foursquare was big. He used to go into Foursquare and find the local vegan restaurant that was open and check into the local vegan restaurant instead of the pub. And I used to ask him, “Why’re you doing that?” He goes, “Well, if the insurance companies, or if the medical insurance companies ever do look at my Foursquare data, I wanna make it look like I’m living a healthy life and not sitting in the pub every so often.” So it’s possible to a degree—
Botsman: It is.
Mason: —to fake this stuff.
Botsman: I think, though, the comforting thing about trust is its sister, which is trustworthiness. So that’s the trait that human beings have that we assess. It’s actually quite easy to trick competence, which is one, reliability, which is the other. It’s very very hard to trick integrity, which is tied to that intentions piece. So I think you know, how does…how do you… I don’t know. Like, how would you trick— I don’t know how—
Mason: But when designing the algorithms, you've got to look at certain things which… Again, it's context, as you said so wonderfully at the beginning of this conversation, it's a context issue. And for example, we've come to accept that politicians will philander.
Botsman: Yeah.
Mason: But that doesn't make them a bad politician. And I know that you were involved with the Clinton Foundation. Maybe we'll cut this bit. But you were involved with the Clinton Foundation. He was a good politician who happened to do some things which were in the eyes of the public not seen to be necessarily virtuous—
Botsman: Yeah.
Mason: —but that doesn’t… And yes, to a degree he lied to the public about his doings, you know, “I did not have sexual relations with that woman.”
Botsman: Are you doing his accent now?
Mason: Trying and failing. But, it didn’t make him a bad politician. He was a brilliant politician.
Botsman: Yeah. And this I think is really interesting. So if you think of the ingredients of trustworthiness being competence, reliability, integrity, and benevolence—how much people care—the alchemy of that is very different. So like a lawyer, for example…I'm not sure about benevolence if I really want a lawyer that's going to go after someone and is the opposite of me—
So I think that is the point, that people can still be trustworthy to do— It's what you are asking them to do which is the issue with so many of these surveys where they go, "Do you trust the media? Do you trust journalists?" right. Well, to do what? Like, what is it you're asking the question around? So yeah, context is king when it comes to trust.
Mason: So if context is king, then almost it’s impossible right now to design those systems. It has to be built on some form of messiness. There has to be some degree of the fuzziness in the system. You call these moments where we’re suddenly able to trust these new systems “trust leaps.” Explain what a trust leap is.
Botsman: So a trust leap the way I define it is, it's when you take a risk to do something new or differently to the way you'd previously done it. So, it doesn't have to be monumental things. It could be something relatively minor, so I no longer need a paper bill, I'll check my bill online. It could be the first time you used an ATM, put your credit card details into a website, used eBay, or got in a self-driving car.
And human beings, we are naturally very good at taking trust leaps. We've been doing it throughout the history of time. Bartering to currency is a really good example of a trust leap. I think the difference today is that we are being asked to leap faster and higher than ever before, which is why everyone talks about this unprecedented rate of change. But ultimately trust leaps are really really good in terms of understanding again, innovation and progress. Because this is what enables people to move forward. So they're kind of like this conduit or channel that enables new ideas to travel.
But I think where companies often make a mistake is that they ask people to leap too far too fast, or there isn't enough social proof—which is different when I'm thinking of early adopters—but there's not enough people who've made the leap for the rest of us to go, "Safe. It's worth trying. There's something in it for me." And so…it's kinda weird but when I started visualizing human beings leaping into all these different areas and really trust was sort of enabling that you know, it does start to explain so many patterns of change and progress.
Mason: But then… What I worry about is if you’re constructing an identity based on reputation, it’s going to make you very averse to taking risks.
Botsman: Well not necessarily. So look, I just want to be clear that I am now fully aware that this is a very dangerous situation, that people have reputation dashboards. But I could argue the flip side of that, because if there was a real bonus if my score went up because I have a high propensity to take risks, because I am willing to go places and try things that other people haven't tried, and through that there's medical progress, whatever it is, you could argue that I get a little boop! in my reputation. So it can…
But the thing that worries me is continuously using penalties and rewards to motivate human behavior. And that’s the piece that I struggle to get my head around, is how do you… And this guy who’s the founder of Traity, he put it really well. He said, “You know, I don’t think we should see reputation as a currency. I think we should see it as a risk premium.” And I think that’s a really good way of framing it.
Mason: Do you think the prevailing wind on where all of this is going has something more nuanced with regards to how people are parented in the modern age? There was a running joke that the reason for Donald Trump is America had daddy issues and it just needed…
Botsman: Daddy, yeah.
Mason: It just needed a daddy. It needed a parent who was kind of tough, and was going to tell it what to do, and then that populace could relinquish a degree of control over their lives and put their trust into something else, in the same way that you know, when we see a doctor all the rules change. All the rules go out the window. It's like take blood, take urine, take anything you need. Take all the data you want from me, just make me better, I trust you. You go to see a doctor to basically relinquish control of the situation that you're in.
Do you think there's something underlying all of this— It's the thing I kept thinking about when I was reading your book is like God, maybe there's just some issues with how this generation's been parented, that they're looking for something to just trust or look up to or to find some sort of authority at the same time at which they're taught to reject authority. But then they're like, "But we need authorities in our lives." There's this kind of…whatever expands, contracts, expands, contracts.
Botsman: I think that's a really astute observation and I think… I think Trump is an example of too much trust in the wrong places. But you look at… Not to go back into the election but. I think this is much deeper than him just appealing to people's feelings and understanding how you tap into anger and fatigue and be antiestablishment. To your point I think you know, probably not my parents but my grandparents. They had a very clear structure around trust, right. Like, they had a very clear hierarchy around it. And whether that was in the schools and the respect for their teachers, there was a very clear line around that. Like, you did trust what experts and economists and scientists said. And we've grown up being told to question everything, and to being told even like you know… What is in your Facebook feed at the top next to pictures of your friend's newborn baby may not be true.
And so I think it’s this vacuum that is very very dangerous because to your point, what rises up are these archetypes and these very strong figures that are actually incredibly smart at tapping into the pulse of that trust vacuum. You know, Michael Gove is another really good example of that. And people are then placing so much hope and trust into something that is essentially untrustworthy. And so I do think it is very much tied to this information overload, to hierarchical structures breaking down, where we don’t know where to look. Do we look up? Do we look to the person on the bus? Do we look to our friend on Instagram? And through that confusion you see new forms of dictatorship, new forms of aggression. Because they’re just very loud and clear to people.
Mason: I think maybe to a degree there's a fracturing of how we generate our identity. We have to generate an identity for the online persona. Something else out here. Then there's an arguable rise in mental illness with regards to how people are operating in the world. And then they look to trust a therapist to tell them the way through. So we have these constant pulls over the way in which we operate. Trust then becomes this kind of very very fuzzy, very problematic element of that.
Botsman: It does. And it's funny, because I'm… generally a very optimistic person. I like to redesign things so that they're better. And the hard thing I found writing this book is I'm lying to the reader if I pretend this all ends well.
Mason: It’s all a lie.
Botsman: And I'm not saying the book is very depressing, but the manuscript came back and it was like, "Hope. Hope." And I said look, there is hope in this. But the hope in this is actually around individual accountability. It's around all of us stopping blaming the institutions and blaming the tech companies and starting to say, "I accept those terms without thinking. I accept I place more value on convenience than trust when I take an Uber ride. I buy books on Amazon and they don't pay their fair share of taxes." And that for me is actually the hope piece in this, is that through the mess we see greater individual accountability in society. And that by letting maybe the machines and the algorithms take over decision-making, we in a funny way realize what it means to be human.
Mason: Thank you to Rachel for sharing her thoughts on how modern technology has generated a trust shift. You can find out more by purchasing Rachel’s new book Who Can You Trust? How Technology Brought Us Together – and Why It Could Drive Us Apart, available now.
If you like what you’ve heard, then you can subscribe for our latest episode. Or follow us on Twitter, Facebook or Instagram: @FuturesPodcast.
More episodes, transcripts and show notes can be found at futurespodcast.net.
Thank you for listening to the Futures Podcast.