Oumou Ly: Welcome to The Breakdown. My name is Oumou. I’m a staff fellow on the Berkman Klein Center’s Assembly Disinformation program. Our episode today features our very own Jonathan Zittrain. Jonathan is the George Bemis Professor of International Law at Harvard Law School. He’s also a professor at the Harvard Kennedy School, a professor of computer science at the School of Engineering and Applied Sciences, director of the Law School Library, and cofounder and director of the Berkman Klein Center for Internet and Society. Thank you for joining us today, Jonathan.
Jonathan Zittrain: It’s my pleasure. Thank you, Oumou.
Ly: Good! So our Assembly program is wrapping up for the 2019 through 2020 year. And Jonathan is on as the faculty advisor for the Assembly program, and also as a cofounder and director of the Berkman Klein Center, where the Assembly program is based. So Jonathan, can you talk a little bit about yourself, a little bit about the Assembly program and how it came to be?
Zittrain: Sure. At one point, we had gotten word of one of our fellow universities getting, on rather abrupt notice, a $15 million grant to improve the state of cybersecurity. That’s…a lot of money. And we were certainly thrilled for our peers, and then couldn’t help but brainstorm: gosh, if we, unasked, had $15 million appear…which I won’t say has happened yet, what would we do with it? And how would we deploy it in a way that did justice to the confidence of whoever would be entrusting us with that much money?
Ly: Yeah.
Zittrain: And what emerged from that discussion was a sense that in some ways the reach of academia is limited because the only people at the core of academia are academics, like the only people who write books are writers. By definition. But, what if the experiences of people who weren’t just dispositionally inclined to sit down and write you know, 250 manuscript pages also found their way into books, in the first person as narrative? Alright well then you’d have people who weren’t writers writing. And what would it mean to have people who weren’t just academics in an environment true to the highest ideals of academia? Of solving problems, of examining questions and our own assumptions about answers to those questions?
Ly: Yeah.
Zittrain: What if you could bring them together in our space, at first with cybersecurity, later with the ethics and governance of AI, and more recently on disinformation? What if you could bring them together around these really hard problems that transcend traditional disciplinary boundaries within academia, and that transcend the ability of any of the actors that maybe are most in the position to do something about them…it’s kinda out of their lanes too. Like, classically, do we want Facebook unilaterally deciding what’s true and false? By Facebook’s own account, even Facebook does not want to be doing that. And they’re right, they shouldn’t be. Alright, well then who, what? What relationships?
So, capturing those sorts of questions—a problem that is big, possibly getting worse, having very significant impact, but no one party or even group owns trying to solve it—what would it mean to try to gather people around that and work on it? And our first efforts were generally on cybersecurity and more specifically on what we call the “going dark” problem, as framed by law enforcement especially: that a bunch of stuff they used to be able to get if they could manage to get a warrant, like access to the contents of your cell phone, is now maybe beyond reach. Because if you’re not willing to cough up your password—a big if, to be sure, because if they’ve got you maybe they can get the password out of you—if you’re unwilling to cough it up, and they really want to get in there even though they have the warrant, they don’t know that password, and after ten tries it vanishes. That’s seen as a problem. And our group, which included government officials, civil libertarians, academics, human rights folks, had really good discussions about that, and ended up in that case putting out a report called Don’t Panic, explaining why, while you can come up with an example of a mobile phone, or as the district attorney of Manhattan put it a whole roomful of them, that you can’t get into even though with your warrants you should be able to…
There’s also a whole sea change going on in the world, in which we have all these devices like our webcams and our mobile phones that could be, with a warrant or other legal process, designed to turn on and surveil us all the time. And you know, there’s a bunch of that. So in a way it was saying to law enforcement “don’t panic,” and to civil libertarians maybe “you should panic” because there’s a bunch of other fronts on which to worry.
So that’s just an example of the sorts of things our group came together to do in that instance. And in the intervening years it’s taken up other issues as well. And most recently, as you know, we’ve taken up the problem of disinformation. How big is it? How bad is it? How would we measure it and know if it’s getting better or worse? And who if anyone would we trust with an intervention designed to do something about it?
And I should say quickly, the Assembly program as it’s evolved now has roughly three pillars, three tracks. One involves our students at the university and figuring out ways, as you have graduate students looking for thesis topics across multiple departments, or students like law students looking for meaningful clinical, applied, experiential work rather than just theoretical or doctrinal stuff, of coming up with problems that they can lend their talents to, and having them come together as a cohort to do independent work and meet faculty from other departments that they normally wouldn’t have a chance to come across. So that’s the Assembly student fellows.
And we also have the Assembly fellows, who are people from industry and outside academia, and nonprofits and NGOs, who are in the trenches. They’re working day in and day out. It doesn’t mean they’re running a particular company, but they’re the people within the engineering rooms of those companies trying to make a difference. And we call them together, have them spend some time on campus here, full-time, and then scatter again. Having companies give them a vote of confidence for their professional development, but also a vote of confidence in what, as a lawyer, you’d call “pro bono” work. Having them work in the public interest with one another on solutions that might well require industry cooperation or standardization or interoperability. Bringing that group together, along with the academics, can maybe yield something interesting. That was the premise, and for several years now our Assembly fellows have bonded as a group, done multiple projects, and presented those projects, some of which persist today with their own lives independent of the Assembly program, thanks to their work.
And then the third pillar is what we call Assembly Forum. And that’s trying to get some of the senior officials, the senior executives or their representatives at companies who are thinking at the corporate or governmental policy layer about what should be happening and who should be doing what, and get them talking with one another and kind of setting the standard of trying to have insights or ideas that they wouldn’t get in their own natural environment. Because those are people that might well be thinking about this kind of stuff all the time. And trying to be able to get them to see it from a new angle can be a nice sort of hurdle to set for ourselves. So those are the three pieces of Assembly.
Ly: So the Assembly Forum, which is the piece of our program that is for experts across sectors… I mean, just thinking back over the course of the year, we covered a lot of ground. In our first discussion in October, we grappled with problem ownership and really tried to pin down definitions for the terms that are most commonly used in the space. And then as the year progressed, we tackled issues around disclosure, and impact—like how do we know quantifiably that there is a causal link between a piece of false content that is online and how someone goes and behaves?
Are there any issues that we discussed over the course of the year on which you maybe experienced a perspective shift, had your mind changed? Or do you think you maybe changed someone else’s mind?
Zittrain: Huh. I wouldn’t bet on that. But I certainly found my own thinking deepened and changed on some things. I came to an appreciation from our discussions…first of…you certainly can’t just assume that disinformation is a scourge. Or that undifferentiated disinformation, just across the board, is terrible. Some of the slicing and dicing academics are wont to do…and that we found some of the companies are doing too as they’re trying to operationalize measuring and countering it where they want to wade in…it really makes a difference to figure out, well alright, what are we defining as misinformation? Even—I mean, to some listeners this may be a kind of new distinction; to everybody it was new at one point—the difference between misinformation and disinformation.
Ly: Absolutely.
Zittrain: Misinformation being, oh, you just got it wrong; and disinformation being, like, you know it’s wrong and you’re trying to get other people to get it wrong. With the latter being propaganda.
And even that isn’t sufficient, because you would think that alright, if some government cooks up a piece of disinformation in a lab and releases it, that is the disinformation. But if somebody repeats it credulously—they really believe it themselves, they’re engaging in misinformation with the disinformation they got. And it might well be that if you’re a platform conveying or amplifying that speech you would react to it differently if you know the actor is intending it versus the actor just being a credulous vehicle for it.
So, being more careful and precise, so that we can cut to action that more narrowly addresses the worst aspects of the problem, seems to me really useful, in a way that otherwise just makes the problem feel so inchoate and overwhelming that it’s hard to even start with your spoon scooping out the ocean. And I think that in the particular instance of political mis- and dis-information, there are some really interesting questions where, if you have a platform like Facebook, or have a government intelligence agency that’s charged with protecting the nation looking for threats, and they see here’s another government and yep, they are absolutely trying to salt these falsehoods—and whether or not they’re even false, they’re trying to make it look like whatever is being said, likely false, is coming from, say, fellow Americans…now what?
And you would think, well, at least you should say what you see. If I’m on Facebook, I would prefer that if I saw something that was supposedly from a neighbor, and it turns out it’s from somebody, you know, thousands of miles away getting paid by their government to, like, trick me, I should know about that.
Ly: Right.
Zittrain: But it’s very complicated. And one of the hypotheticals we entertained as a group was, alright, suppose the government, the US government, absolutely with great certainty can say “Here is disinformation. It’s coming from this other country. It’s targeting this political candidate.” Do you tell the candidate?
If you tell the candidate, what do you tell them? “By the way like, another country has it in for you; that is all.” Are you like, “Here are the specific posts,” and then you tell them, by the way, it’s classified so you can’t tell anyone else? Why did you tell them? What’re they supposed to do with it?
And if you tell everybody…first, well, does that ruin your source or your method? And second, even if you could tell them without having to balance that against it, are you maybe doing the work of the adversary, because now you have people questioning whether everything they see is in fact foreign propaganda?
Those are real questions, and I’m not sure I have answers to them all. But thinking about how we will…when some of us know what’s going on and are prepared to share it, or have an inkling and aren’t certain and maybe want to share that lack of certainty, what’s the right way to do that, general versus specific, that advances the cause against disinformation? That seems to me a better-articulated question than I had when I was going into it.
[Part 2]
Ly: What concerns you most about the current state of play with regard to disinformation? Is it that the problems are so intractable that we find ourselves in a status quo that seems untenable that we can’t get out of? What really keeps you up at night?
Zittrain: What keeps me up at night is the absence of trust in any referee. In anything that might feel like an umbrella under which it’s like alright, you know… I mean, just to take an example from the foundations of a legal system and a court system, if two people have a dispute so intractable and important to them and they really want to be right, or win—whatever that means—and if one wins it sure feels like the other one’s gonna lose… And it’s that bad that they are willing to endure litigation, they’re ready to go into a courthouse and spend potentially years, and tens of thousands of dollars, trying to just get an answer from a jury or a judge and then an appellate court and all that as to like, who’s right here…
Ly: Yeah.
Zittrain: It would sure be nice to know that at the end of that, when somebody wins and somebody loses if they don’t settle, that both parties…obviously the loser’s going to be disappointed, but doesn’t feel like, and that it is in fact not the case that, they were robbed. That it was a corrupt system, and it’s like, “Why did I even have the faith to go into that courthouse?” And how valuable it is to have a legal system that can settle disputes without the system itself being rightly called into question in every case as to whether it is the problem rather than solving the problem.
And however much rightful worry there is about whether, say, the American legal system meets that standard, how much less confidence there is in any credible party that is in front of us here, any possible party. Like, do you want Facebook answering this? Alright, well how about Snopes. Can Snopes be trusted? The fact that you don’t have a significant majority of people trusting anything is a huge problem. Because it’s like, you can move the pieces around however you want, but unless you can create more trust and more buy-in among us, a sense that we may disagree or we may favor different political candidates but we’d all kind of like the truth and we can achieve it among us as a shared thing and work towards it. We’re lacking that right now.
And I do have some ideas on that front, some of which were really inspired by these discussions. Such as, instead of Facebook throwing up its hands and saying, “We’re going to allow all political advertising, but in nearly every instance don’t expect us to judge the truth or falsity,” and Twitter saying, “Yeah, you don’t want us deciding, either, that’s why we’re just not going to allow any political advertising at all…”
Ly: Yeah.
Zittrain: My thought was to have political ads, when submitted to a platform like that, get assigned to an American high school class, which, under the guidance of their teacher, for a grade from that teacher, and maybe with the help of the school librarian, works through whether this ad contains such material disinformation, or misinformation, that it shouldn’t be allowed on the platform. And they write up their findings. They get graded as to how well they do it. And their findings are binding. And so, that class, or maybe it’s three classes, and then it’s like, two out of three is what the decision is. They decide. And it’s my way of saying, alright, we don’t trust anybody; do we trust our own kids? And if we don’t, what does that say…
Ly: Yeah.
Zittrain: We can’t not trust them, because they’re going to be the voters in a few years. So, that’s an example of an idea that I acknowledge is clearly crazy. And I’m hard pressed, though, when I think about it, to say why it’s worse than the status quo, which is clearly unacceptable to me.
Ly: Do you think that this lack of trust in traditionally-respected or trusted institutions is sort of…the result of the disinformation situation that we’re in? Or do you think that there were sentiments that preceded it, and this has just sort of exacerbated it? Because I can remember something… I talked with Renée DiResta for our first episode of the series. And she said something so interesting to me, which is that social media has sort of had this democratizing effect in terms of who we consider to be a credible source. At the same time, we’re experiencing so much disinformation that degrades the credibility of traditionally-respected sources. Where do you think this has really come from?
Zittrain: Yeah. It’s likely a sadly mutual cycle. If the number of people that would find credible some tale about 5G and how 5G relates to COVID, I mean it…you know, anybody could sit down and write a page of word salad that invokes a bunch of words having to do with physics to explain how the vibrations actually change the vibrations of the vi—you know, and it’s just…it’s incoherent.
But the fact that that could have purchase, and among how many, would be a way of kind of asking that question. Is it that all you needed was to have your eyes encounter those words and then it’s like a mind virus and it’s just, you can’t— If that’s the case then even the employees at Snopes might need special gloves and masks and, you know, eye goggles to encounter so much disinformation and not become persuaded by it. But I don’t know that that’s the real model.
So I think some of it is…it’s a taxonomy. There’s some stuff where almost anybody, after encountering it, might get to wondering and wanting some more information. That’s partly the worry about deepfakes: that you see something, you feel like your eyes aren’t lying, and alright, somebody better explain what I’m seeing. Versus people who were already inclined, for various reasons, including just wanting to rationalize what they already believe or want to have happen in the world, to be persuadable by some random conspiracy theory, that smaller group of people. And they’re both very different kinds of dangers, and in fact when we look at platform responses you’d probably want them tailored differently depending on whether it’s…you know, what was Lincoln’s quote: is it some of the people being fooled all of the time, versus all of the people being fooled some of the time? And what those false beliefs might drive them to do.
Ly: So our forum wrapped on May 12th, and our last sessions were really heavily focused on COVID, of course. It’s so topical; so much of what we’re seeing online is COVID-related or COVID-focused. In our last two sessions, platforms, researchers, and others in our group talked about the challenges that they’ve encountered as they really worked to manage the sheer volume of disinformation surrounding this issue. And then just recently, sustained attention has really shifted to issues of racial inequity, injustice, and police brutality.
So, I think as we saw in the early months of COVID, the pandemic, just that focus on, that sustained attention on a really high-interest issue can pollute the information environment in a way that normal news cycles just don’t, right? Normal cycles, you focus on something, it moves on, and the cycle just keeps going. As you take stock of the challenges that are mounting in the world at large, and maybe amongst the countering-disinformation community as well, are there particular reforms that you hope to see?
Zittrain: Well, I think part of the throughline of the examples you’re talking about is particularly disinformation that could contribute to violence, or to harm, including self-harm in the health context.
Ly: Yeah.
Zittrain: And it makes the stakes real. If you’re thinking about a particular person choosing to look for…you know, something about whether people really landed on the moon, and then consuming videos that say they didn’t… You might have one view, a kind of permissive one that just says, whatever, people upload videos, other people watch them, it’s called the marketplace of ideas. Tempered in the first instance by, alright, but which videos is YouTube recommending, and how are you saying that’s a neutral choice? About which there’s a lot of debate.
But once you’re talking about alright, I go up to Bing or Google and I’m asking for a poison ivy remedy and what it tells me is to do something that’s like the opposite of what you should do and then you’re gonna end up in the ER? What’s the marketplace of ideas argument around that?
Ly: Yeah.
Zittrain: And it’s not a good one. And so with COVID out there, it’s in fact not even just, well, people have to “buyer beware.” Like, if you’re just gonna trust anything you see on the Internet, that’s your fault.
Well, even if it is your fault, it still might mean that you’re going to be transmitting a virus to eight other people, and it isn’t their fault.
Ly: Yeah.
Zittrain: So…that’s an issue. And when it’s about disinformation that could lead to violence and conflict, where people are putting it out exactly for that purpose, it makes it awfully hard to just say, this is too thorny a problem to start judging, I’m not going to wade into it, if you’re the platforms. Or if you’re society.
And so, while acknowledging all of the difficulties that come from figuring out who’s supposed to be the truth police here, having no police here is also…the stakes are very real, very immediate, and when the denominator of people involved is in the billions who are tuning into these platforms and you know that a slight tweak to the platform here could greatly change the views of tens of millions of people—
Ly: Yeah.
Zittrain: —there’s not a neutral position. There’s just whether you’re gonna be stirring the pot, or whether third parties, including state actors, will be stirring the pot.
Ly: I completely agree with you. So, what’s on tap for next year?
Zittrain: Well, we have our work cut out for us, right? And so, I think… I mentioned before we’ve kind of taken up other issues like cybersecurity and the ethics and governance of AI—and having solved those, moved on to the next. And I of course say that tongue in cheek. This problem of disinformation requires, calls out for, more than just the one academic year’s worth of focused attention in this program that it’s been given. And there’s a lot of momentum, and I think enough collective feeling within the various groups that the status quo really isn’t working, even if exactly how to solve it, other than just keeping on with some of the measures already in place, isn’t clear. It’s really calling out for new thinking and new experiments. And I’m also mindful that a lot of the action here, both in understanding the dimensions of the problem through access to data about it and what’s out there and what people are doing and how they’re reacting, and in implementing whatever the solutions attempted might look like, is largely in private hands. And figuring out the right way to bridge between those private companies that happen to shape speech so much, and some sense of the public interest and public availability of that data, is a really important role that our group can play and model and work with in the coming year. So my sensibility is that we’ll really try, certainly through the November US elections but even beyond, to be sticking with this problem and with the kinds of relationships we’ve forged among us in the different groups we have at the table, and see if we can bring more to the table as we go.
Ly: Thanks so much for joining me today, Jonathan.
Zittrain: It’s my pleasure. Thank you, Oumou.
Ly: Thanks.
Further Reference
Medium post for this episode, with introduction and edited text