John Palfrey: I’m going to turn it over to my comoderator of this session and our cohost from the Center for Civic Media, its Executive Director Ethan Zuckerman. As he’s setting up, just one note of thanks to Urs in particular: one thing I’ve learned from you over the years has been a way of seeing this question of information quality, or credibility, or truth, not so much as a static thing but as a matter of process. A process not just of evaluating information but of creating it, and of creating the architectures around it. And it seems to me you’ve really pulled together a lot of the learning from today into that framework and expanded it extremely helpfully. Over to you, Ethan.
Ethan Zuckerman: Sure. Well, I wanted to take advantage of the fact that the time at the front of this room is so scarce. I was going to seize a couple of minutes of it and offer my reflection on what’s going on and then do my best to open this up to get more people around the table.
It’s an interesting moment for me because this is one of the first events that I’m helping organize with Berkman now from the MIT perspective. And I have to say since moving over to MIT after eight years at the Berkman Center, everybody wants to know about culture shock moving between institutions. Everybody I think is convinced that everyone at Harvard is wearing a suit 24/7. That we move into like rope sandals and t-shirts as soon as we take two subway stops further down—everybody wants to know the cultural difference.
But I actually am starting to figure out what the cultural differences actually are. And I think one of the cultural differences is that when we take on the sorts of questions we’re dealing with here, the approach of an organization like the Berkman Center tends to be to try to figure out can we find a systemic way to think about this problem? And I think Urs has just put up a really helpful system that we can start using to think about the various different problems that come up around this general sense of truth and truthiness.
What’s great about these large systems is that they can inform the way that we end up designing tools. And we can try to get a very thorough view of an issue and then figure out how to intervene. That, as I’m finding out, is not a very MIT way of doing things.
The MIT way of doing things, near as I can tell, is to say the system will come to us eventually. What we really need to start with are small experiments. And we should try an experiment that we can try over the course of a week, or a month, or certainly within the context of a master’s thesis. And we should see whether that gets us anywhere. And if it gets us anywhere we should keep hammering on that and eventually we’ll get to the place that we want to be.
The trick with this method, as I’m finding, is that you have to figure out which problems are tractable. So when you’re looking at something as big as these questions of verifiability, truth, truthiness, disinformation, so on and so forth, I find myself now trying to pick apart the questions we talked about this morning from the perspective of tractability. So let me use that to sort of frame a couple of the conversations we’ve had and then a couple of things that haven’t come up, and then see if I can sort of push us forward a little bit into where we go this afternoon.
A few of the people who were around the table here today were at a gathering at the New America Foundation a few months back. And it was a conversation about fact-checking. It was a great conference. It was really a good deal of fun. And what was interesting— It’s under Chatham House Rule so I can’t actually credit any of the bright people who said anything there, I can just tell you wonderful things were said by wonderful people.
But one of the wonderful things that was said by a wonderful person was talking about fact-checking using a particularly gory metaphor. And the metaphor was that misinformation gets out there, it’s like being shot. And so far what we know how to do is come in three or four days after the fact and try to bandage up someone’s wounds, right.
So we go through a political speech, we go through a debate. Someone says something that’s blatantly untrue. It circulates in the media for a while. Three or four days later someone comes in with a fact-check, and we’re basically trying to staunch the bleeding of all the various harms that are out there from the disinformation that allows us, through motivated reasoning and cherry-picking, to pick the facts we want to make our arguments.
And the thought was maybe we could do slightly better. Maybe we could get to the point where we’ve got a bulletproof vest. This isn’t actually bulletproof, it isn’t why I wear it, but you can imagine the bulletproof vest for truth would sit there perhaps in your web browser, perhaps on your TV. And when that bad information came out, it would jump up, it would block you; you’d still get hit, you might get a bruise, but you wouldn’t bleed. You know, the information would somehow get countered very early on. What I found really interesting about this conversation is that no one took the metaphor any further and said, maybe we could dissuade people from shooting in the first place. You know, could we get people to stop saying obvious mistruths?
And that appears to be, in the current political climate and the current media climate, well within the realm of the intractable. Calling out disinformation and shaming doesn’t seem to have the power that we once thought it did. And that may change our sense of what the possible interventions within this space are. Those interventions may then be about trying to figure out how to counter that misinformation before it does the damage that it otherwise would do. But then we need to ask ourselves the question: do we want to give up on the function of shaming that early on?
When I talk about tractability it’s really about questions like that. It’s trying to figure out where in that line we would like to go. I’ve been a little surprised, given the international experience of some of the people in this room, how US-centric and particularly how left/right a lot of this conversation has been today.
And I want to go back to a comment made by my friend and colleague Ivan Sigal, who pointed out that it can be a very different circumstance to sort of think through the process of fact-checking when you actually have facts to start with. When you have reporters on the ground, when you have events that are fairly easily deciphered. For people like me who do a lot of work in the citizen media space, we are still trying to figure out the implications of the Amina Arraf story. Who knows what I’m talking about when I say “Amina Arraf?” It’s better than most rooms but it’s enough that I should actually tell you the story really quickly.
Amina Arraf was an incredibly popular blogger in Syria. An amazing story. Incredibly brave young woman. An out lesbian in Damascus, writing in English about her experiences living in that city and about the early stages of the Syrian resistance. Amazing figure. Major newspapers came, did interviews with her. You know, everyone started reading this blog. It really rose to prominence quite quickly—the only problem behind it was that it was written by a 40-year-old dude from Georgia named Tom MacMaster. And he had carefully constructed, over the course of years, an online identity that allowed him to have his “voice heard,” because as a middle-aged white guy no one ever took him seriously and obviously he wasn’t very well-represented in the media. So by becoming a Syrian lesbian, he would have the chance to be heard in a way that he wouldn’t be before. And as people started looking at this it turned out that most of the people who had met Amina Arraf also turned out to be white dudes pretending to be lesbians, leading a commentator on the situation to look at this and say it’s “fake lesbians all the way down.”
And the problem with this was not just the construct of fake lesbians sort of reinforcing this guy’s attempt to speak on behalf of the Syrian people. It was that we were so desperate for perspectives from the ground, from Syria at this particular point in time, that media organizations that should have known better were extremely receptive to listening to this particular voice.
Now, again, when we deal with the realm of intractability, we deal with the difficulty of the fact that you have a genocidal regime trying to systematically kill off their people, that’s figured out that killing off journalists is a really good way to keep this going. That’s probably not a problem we can solve within this room. But figuring out how we cross-source, and figuring out how we try to find identities that arise surprisingly quickly and that we should have certain suspicions about, is a place we might find ourselves able to try to design and develop and deploy some tools.
So, for me the bad news of this morning is how many of these problems, for me, probably fall into that intractable side of things. I think a lot of what we’re talking about, whether it’s the influence of money in politics—as much as I love my friend Larry Lessig I’m not tremendously confident that we’re going to strike at the root, certainly by 2012. But it also strikes me that what we’ve gotten this morning is an incredibly hopeful set of tractable questions that have come out of all of this. Little experiments that we can actually try in the world and try to have a sense of whether or not they have an impact.
I look at Kathleen Jamieson, whose experiment with FlackCheck literally says can we take on something that seems totally intractable, which is basically false speech and fearmongering within political advertising, but can we try a clever way to have a point of leverage and actually see if we might have an effect on how many of these ads actually air, or don’t air. We should have some notion by the end of the 2012 cycle about to what extent that works or doesn’t work.
So what I’m really hoping we can start doing for this afternoon is start shifting a little bit from the big questions, and particularly shifting from the intractables, and starting to put forward questions that we might be able to test out. And when I say test out I mean test out experimentally. Where we’re hoping to go tomorrow, on this hack day which we’ll talk a little bit more about in the conclusion, it’s not really a traditional hack day. A traditional hack day is a lot of guys like Gilad Lotan who write code in their sleep sitting down with a data set and trying to build some new tools around it. And we just don’t have enough Gilads to go around for tomorrow.
What we do have, which is an amazing asset, is we have a whole bunch of people coming from some very very different perspectives, who are thinking really hard and deeply about these issues, who can come together and think through some of these design challenges. What is a question that we want to test in this space of truth and truthiness? How would we set up an experiment to test it, either by organizing and trying to conduct something in the real world like an email and a Twitter campaign, or by trying to build some tools that take us there?
So, the way that we’re going to end up starting that conversation, and the direction that I’m hoping we can start shifting the frame of this discussion, is to try to think through those small, tractable questions. And just to give you a couple examples of ones that have come up this morning. You know, Fil Menczer’s basically asking, are there network signatures that can tell us when someone is a bot and when someone is human? Great questions come up around this. Does it matter if you’re a paid political activist and you say the same thing time after time again? Is it actually any different from being a bot? But it is the sort of question that we can put out and test.
You know, when Susan Crawford opens with this amazing story about being able to figure out who killed the newspaper seller in the London riots, it’s an open question about whether there are certain factual questions where we can immediately open it up to crowdsourcing and try to find information that doesn’t exist there.
So the challenge that I want to put forth is let’s take the big ideas, let’s get it down to smaller questions. Let’s start working through the frame of what we’re going to do tomorrow, which is figuring out actually how we test out those questions. So in particular if you have some way of taking the amazing material that’s been put in front of us and putting it in the realm of questions that we want to see answered, this is a great time to come and grab the precious mic time at this conference. So hands up if you want to jump in. Please. And introduce yourself first.
Audience 1: So I want to underscore the shaming point that you were making, and just say that we’ve probably underemphasized the role of elites here. It’s much easier—potentially at least—to stop these things from starting than it is to undo the damage once it’s been done. And in particular, this shaming may have a second-order effect where the elites anticipate the shaming and are then less likely to produce the misinformation in the first place. Now, the question, though, is to think about what smaller-scale versions of the shaming could be implemented in a context like tomorrow. So that’s what I would be interested in people’s thoughts on.
Ethan Zuckerman: So there’s a great potential experiment, micro-shaming.
John Palfrey: Right.
Zuckerman: Is there some sort of social intervention we can try, where we can figure out whether shaming is effective even if it doesn’t make it to the front page of The New York Times or if it doesn’t make it onto PolitiFact.
Palfrey: I suspect it’s happening, but anybody putting that in 140 characters and putting it on #truthicon, we may get some answers from the crowd as well.
Kathleen Hall Jamieson: On flackcheck.org we’re giving stinkweeds to reporters who air ads uncorrected, and we’re giving orchids to those who hold consultants and candidates accountable for their misleading statements and ads. We know that people search their own names on the Web. We assume that they’re going to find that they’ve gotten stinkweeds or orchids. And our thinking that, as a result, they’ll be less likely to air ads uncorrected and more likely to hold people accountable is a testable hypothesis. Go look at our stinkweeds and orchids, after you’ve emailed your stations.
Zuckerman: And testability on this may have to do with whether you start getting hate mail from reporters coming in and saying, “How do I get rid of that stinkweed next to my name?” Which is often a sign that you’re in the right direction.
Melanie Sloan: Melanie Sloan from CREW. I have to say I’m really skeptical of this whole concept of shaming in general. I think shaming has really lost its power. And you can see that by the fact that everybody in America gets a second act no matter what terrible thing they’ve done. Including like a New York Times reporter who plagiarizes everything. So I find it hard to imagine that people involved in PR… Like, Berman’s been exposed before for stuff and he’s not ashamed in any way, shape, or form. He just does it again. So I don’t actually think— And unless there’s some studies that show that this really works, given how our society has moved in a way that shame seems to be far more ephemeral, that doesn’t seem that useful to me.
Zuckerman: So, you’ll remember—
Palfrey: You started it. I saw a ton of hands go up.
Zuckerman: I put shame on the table as an intractable rather than a tractable. We can certainly go back and forth on this one. Mike.
Audience 4: A couple things. One is I agree with Melanie, in terms of at least my world, which is politics, political consulting. Shaming doesn’t work except on a mass scale. You have to get to critical mass. You have to get to a certain level of intensity before shaming works, because just exposing people for being frauds doesn’t do anything to them in the world of politics. Unless it’s become so big that they start to be affected by it politically, economically.
Rush Limbaugh’s a classic example. Rush Limbaugh has been doing his schtick for years, saying horrible things about all kinds of people. It didn’t get to critical mass until this last week. And when it got to critical mass the advertisers started going away and he started to have to backtrack. But it took such a level of intensity before that happened.
One other comment I wanted to make that I think is important in all of this, whether it’s shaming or anything else. In my view blatant mistruth is less of a problem than the old saying about… Well, I don’t know. That old saying doesn’t really apply. But I guess my thought is that I worry less about blatant mistruths, because most political consultants, most PR people, try to avoid being blatantly wrong on the facts because they know they’ll be exposed fairly fast, either through crowds or through fact-checkers or whatever. It’s the folks who throw out facts that are completely out of context or completely… They may have a fact and have twenty facts contradict it, but you know, it doesn’t matter. And that’s a much trickier, and maybe close to intractable, problem to solve.
Zuckerman: So, I should point out once again that my point wasn’t really meant to be an advocacy of shaming. For all the shaming comments I would mention that ShameCon is actually three weeks from now. If you’ve been invited to that one you should be terribly, terribly embarrassed. So you don’t want to tell anybody about it but you know… My question was merely more this question of trying to figure out what are the levers that we can work with, and I actually think the previous comment pointing out that shame has been pretty ineffective probably puts this more into the intractable side of it, where I think there there may be some agreement on that.
Palfrey: I’m going to call on someone without her hand up but who is well known to many of us in the room. Our Dean Martha Minow has actually just walked in and was going to say a word of welcome and thanks. Dean Minow, thank you for coming over.
Martha Minow: [Comments are largely inaudible.]
Palfrey: You’re wonderful to host us. Martha Minow is the Dean of Problem Solving. She has introduced a mandatory problem solving course for all lawyers, and so she’s delighted that we’re in that mode. Sorry, carry on.
Audience 5: Speaking about levers, I’m conscious that this is what they call in the security export world a dual-use tool. But what about advertising? We had a situation in the UK where an incredibly homophobic op-ed piece was published in The Daily Mail by a columnist called Jan Moir, and Twitter mobilized to contact the people that advertised with that newspaper. Shame doesn’t work, but the bottom line might.
Zuckerman: So, another testable intervention and possibly a tested intervention, particularly as people look at the Rush Limbaugh reaction where a great deal of pressure is coming into advertisers via Twitter. And perhaps this is one of the circumstances where the ability to talk back is something that we can test as a method of response.
Other questions, comments, please. And introduce yourself please.
Ari Rabin-Havt: Ari Rabin-Havt from Media Matters. When we look at the world of misinformation, I like to phrase it as we think misinformation is most dangerous when it metastasizes. So, if there’s a bubble of untruth—let’s just use Fox News as the example. You know. I’m from Media Matters. They lie. They lie willingly. They lie knowingly, and they lie forthrightly as part of a strategy. And that’s outlined in internal memos and other documentation.
Where the misinformation we see becomes truly dangerous is when it seeps outside of—when it goes from kind of the right wing echo chamber through Fox News, but then when it gets outside of there. So if there was a test to define how to kind of put a finger in the funnel, in a way, to stop the misinformation from seeping out of kind of the right wing swamps. Just away from Fox. There’s a very popular radio host named Alex Jones who spews all sorts of garbage on a day-to-day basis—9/11 truther stuff, that kind of stuff. But his stuff stays within his large audience so it doesn’t have a broad cultural impact. So the question is: is there a way to stop the cultural impact at the funnel point?
Zuckerman: So, two questions there. One is whether we could figure out when information is crossing from one echo chamber into a broader space. And another question you know…not an easy one, with all of our conversations about fact-checking, whether there’s a possible intervention that one could jump in and then sort of put into play when it looks like something’s leaving one conversation going into a broader conversation. Kai.
Kai Wright: To pick up on what Mike was saying, I wonder if there is in fact some sort of tool or something to build that is not a fact-check but a context check. I mean, to hijack our Joe Arpaio example from earlier, you know, he is an absurd person. He’s an absurd figure. Just a month or so prior to that, the headline in Politico was “Joe Arpaio Racially Profiles, Justice Department.” Which was a variation on the one we saw about Joe Arpaio says Obama’s birth certificate doesn’t exist.
The only reason Joe Arpaio was saying Obama’s birth certificate doesn’t exist right now is because he’s being investigated by the Justice Department, he’s trying to change the subject. So I share Mike’s concern that there’s the issue about specific facts, but we get lost sometimes I think in the debate over a given set of facts to the detriment of debate over the untruthful context. And is there a way to check the context? Which maybe that goes beyond technology, I don’t know. But that I think is actually a greater concern.
Zuckerman: But I think it fits well within this theme of questions that we could try and test, which is to figure out if there’s some way that we could put context into a story so that when Joe Arpaio comes we get some context of where it’s coming from.
Palfrey: Ethan, we’re at time, just to note. And we’ve got like four hands up. Should we maybe take four more quickly, what do you think?
Zuckerman: Yeah, let’s take four quick comments and not react. Let’s go Judith. Let’s go Ellen. Let’s go Dan, and then the gentleman here and that’s where we’re going to get in this one.
Audience 8: Your talk about shaming made me think about studies that have been done on lie detector testing. Because a lie detector doesn’t test whether you’re telling a lie. It really tests your own feelings about telling the lie. So if someone is a sociopath who really has no guilt and no qualms, it certainly doesn’t show up at all. They can really only test: are you stressed? Are you feeling guilty?
And so I think an interesting path from your shaming piece is this notion of trying to come up with some type of typology or classification of the types of people who are promulgating these lies. Because there’s the group that’s sort of like the sociopaths of politics, who deeply believe what they’re saying or have no compunction about it. There are the politicians who may indeed have some guilty feelings, for whom the shaming would work because they realize they may be doing something wrong but they have their eyes on the prize. There are the ones that are motivated by money, etc. So understanding those sets of underlying motivations may be the key to understanding different types of useful reactions.
Zuckerman: Thanks, Judith. Ellen.
Audience 9: So, in response to the notion of whether there are tools that can be built that would, you know, sort of look at the provenance of language and where it spreads and how it spreads: we’ve been working with the Media Standards Trust in the UK—I don’t know how many people know their Churnalism site. But this is a site that has a database of press releases, and a database of news stories. And it can track how many journalists are churning the press releases.
So they’ve just open sourced their code thanks to a grant from Sunlight. We’re developing the same site for the US. But more importantly, we’re actually using their code to look at, at the moment, regulatory comments, to see how many comments on an EPA rule actually came from a single source, or a double source, or how many sources they came from. So with these tools, you guys can help figure out, you know, how to improve the kind of stuff that we’re building, but it is possible to do this.
Zuckerman: And speaking of the guys who are building these things…
Dan Schultz: Hi, I’m Dan Schultz. I’m at the MIT Media Lab and Center for Civic Media. So, a couple questions that I have. Priming with self-affirmation has been shown to be effective in helping combat motivated reasoning. And I’m curious how we can implement self-affirmation techniques in the real world. And if there are other forms of priming that might work and make people a little bit more receptive to fact-checks.
I’m also curious in general sort of how much people value truth and honesty to begin with, and whether that value can be leveraged to change the dialogue. This is kind of like shaming flipped on its head. So instead of trying to shame people…
And then third I just wanted to note that— So I’m working on Truth Goggles, and I’ve tried to split— It’s a credibility layer for Internet content, so it’s trying to connect the dots between the content you’re looking at and fact-checks. And I’ve found sort of three tractable, or semi-tractable problems. The first is what does the interface look like? So that’s kind of getting at the self-affirmation/priming questions. The second is where do these facts come from and how do you scale collecting facts? And then the third is how do you find instances of facts in the news in an automated way? So I’m not answering all three as part of the thesis, but I think those are three questions worth asking.
Aaron Naparstek: Hi. I’m Aaron Naparstek. I’m a Loeb Fellow at Harvard. And in 2006 I started a blog called Streetsblog. And you know, I guess my point is in my experience it’s not necessarily that difficult to counter this stuff. I am and was in New York City part of a movement that was really oriented towards trying to reform the New York City Department of Transportation, make things better for pedestrians and cyclists and transit riders in New York City. So a very specific, niche issue. And you know, we were sort of up against eighty years of culture and policy that was aimed at sort of moving motor vehicles through New York City.
And it didn’t take that much to kind of put a new perspective out there. Really it took two journalists working five days a week, full-time, to help create an entirely new perspective on what streets could be in New York City, that streets could be public spaces and places for bikes and buses that move quickly. And it ultimately helped create substantial policy change. And so I mean, I kind of have a hopeful sense about this because of my experience with this niche issue: when you really start focusing on a niche and kind of professionalize it and move yourself outside of that mainstream media world, I think you can have a lot of impact.
Zuckerman: So that’s a wonderfully helpful intervention. And I think having that sense that tractability may have something to do with how big the scale of the issues are, whether you’re going after the fundamental left/right splits in the United States polity, or whether you’re going after issues where left and right might actually come together and say “less dead bicyclists in New York would be a good thing,” these might be places where we might have the possibility of making some progress.
So, back over to you, John, to introduce our next two moderators?
Palfrey: That’s excellent. Thank you Ethan, and Urs and others. Thank you for this synthesis section.
Further Reference
Truthiness in Digital Media event site