John Palfrey: I'm going to turn it over to my comoderator of this session and our cohost from the Center for Civic Media, their Executive Director Ethan Zuckerman. As he's setting up, just one note of thanks to Urs in particular: one thing I've learned from you over the years is the way of seeing this question of information quality, or credibility, or truth, not so much as a static thing but as a matter of process. A process not just of evaluating information but of creating it, and of creating the architectures around it. And it seems to me you've really pulled together a lot of the learning from today into that framework and expanded it extremely helpfully. Over to you, Ethan.


Ethan Zuckerman: Sure. Well, I wanted to take advantage of the fact that the time at the front of this room is so scarce. I was going to seize a couple of minutes of it and offer my reflection on what's going on and then do my best to open this up to get more people around the table.

It's an interesting moment for me because this is one of the first events that I'm helping organize with Berkman now from the MIT perspective. And I have to say, since moving over to MIT after eight years at the Berkman Center, everybody wants to know about culture shock moving between institutions. Everybody I think is convinced that everyone at Harvard is wearing a suit 24/7, and that we move into like rope sandals and t-shirts as soon as we take two subway stops further down—everybody wants to know the cultural difference.

But I actually am starting to figure out what the cultural differences actually are. And I think one of the cultural differences is that when we take on the sorts of questions we're dealing with here, the approach of an organization like the Berkman Center tends to be to try to figure out: can we find a systemic way to think about this problem? And I think Urs has just put up a really helpful system that we can start using to think about the various different problems that come up around this general sense of truth and truthiness.

What's great about these large systems is that they can inform the way that we end up designing tools. And we can try to get a very thorough view of an issue and then figure out how to intervene. That, as I'm finding out, is not a very MIT way of doing things.

The MIT way of doing things, near as I can tell, is to say the system will come to us eventually. What we really need to start with are small experiments. And we should try an experiment that we can try over the course of a week, or a month, or certainly within the context of a master's thesis. And we should see whether that gets us anywhere. And if it gets us anywhere we should keep hammering on that and eventually we'll get to the place that we want to be.

The trick with this method, as I'm finding, is that you have to figure out which problems are tractable. So when you're looking at something as big as these questions of verifiability, truth, truthiness, disinformation, so on and so forth, I find myself now trying to pick apart the questions we talked about this morning from the perspective of tractability. So let me use that to sort of frame a couple of the conversations we've had and then a couple of things that haven't come up, and then see if I can sort of push us forward a little bit into where we go this afternoon.

A few of the people who were around the table here today were at a gathering at the New America Foundation a few months back. And it was a conversation about fact-checking. It was a great conference. It was really a good deal of fun. And what was interesting— It's under Chatham House Rule so I can't actually credit any of the bright people who said anything there, I can just tell you wonderful things were said by wonderful people.

But one of the wonderful things said by a wonderful person was about fact-checking, using a particularly gory metaphor. The metaphor was that when misinformation gets out there, it's like being shot. And so far what we know how to do is come in three or four days after the fact and try to bandage up someone's wounds, right.

So we go through a political speech, we go through a debate. Someone says something that's blatantly untrue. It circulates in the media for a while. Three or four days later someone comes in with a fact-check and we're basically trying to staunch the bleeding of all of those various harms that are out there from the disinformation that allows us, through motivated reasoning, through cherry-picking, to pick the facts we want to make our arguments.

And the thought was maybe we could do slightly better. Maybe we could get to the point where we've got a bulletproof vest. This isn't actually bulletproof, it isn't why I wear it, but you can imagine the bulletproof vest for truth would try to sit there, perhaps in your web browser, perhaps on your TV. And when that bad information came out there, it would jump up, it would block you; you'd still get hit, you might get a bruise, but you wouldn't bleed. You know, the information would somehow get countered very early on. What I found really interesting about this conversation is that no one took the metaphor any further and said, maybe we could dissuade people from shooting in the first place. You know, could we get people to stop saying obvious mistruths?

And that appears, in the current political climate and current media climate, to be well within the realm of the intractable. The notion is that somehow calling out disinformation and shaming doesn't have the power that we once thought it did. And that may change our sense of what the possible interventions within this space are. Those interventions may then be trying to figure out how to counter that misinformation before it does the damage that it otherwise would do. But then we need to ask ourselves the question: do we want to give up on the function of shaming that early on?

When I talk about tractability it's really about questions like that. It's trying to figure out where in that line we would like to go. I've been a little surprised, given the international experience of some of the people in this room, how US-centric and particularly how left/right a lot of this conversation has been today.

And I want to go back to a comment made by my friend and colleague Ivan Sigal, who pointed out that it can be a very different circumstance to think through the process of fact-checking when you actually have facts to start with. When you have reporters on the ground, when you have events that are fairly easily deciphered. For people like me who do a lot of work in the citizen media space, we are still trying to figure out the implications of the Amina Arraf story. Who knows what I'm talking about when I say "Amina Arraf"? It's better than most rooms, but it's enough that I should actually tell you the story really quickly.

Amina Arraf was an incredibly popular blogger in Syria. An amazing story. Incredibly brave young woman. An out lesbian in Damascus, writing in English about her experiences living in that city during the early stages of the Syrian resistance. Amazing figure. Major newspapers came, did interviews with her. You know, everyone started reading this blog. It really rose to prominence quite quickly—the only problem behind it was that it was written by a 40-year-old dude from Georgia named Tom MacMaster. And he had carefully constructed, over the course of years, an online identity that allowed him to have his voice "heard", because as a middle-aged white guy no one ever took him seriously and obviously he wasn't very well-represented in the media. So by becoming a Syrian lesbian, he would have the chance to be heard in a way that he wouldn't be before. And as people started looking at this, it turned out that most of the people who had met Amina Arraf also turned out to be white dudes pretending to be lesbians, leading a commentator on the situation to look at this and say it's fake lesbians all the way down.

And the problem with this was not just the construct of fake lesbians sort of reinforcing this guy's attempt to speak on behalf of the Syrian people. It was that we were so desperate for perspectives from the ground, from Syria, at this particular point in time, that media organizations that should have known better were extremely receptive to listening to this particular voice.

Now, again, when we deal with the realm of intractability, we deal with the difficulty of the fact that you have a genocidal regime trying to systematically kill off its people, that's figured out that killing off journalists is a really good way to keep this going. That's probably not a problem we can solve within this room. But figuring out how we cross-source, and figuring out how we try to find identities that arise surprisingly quickly and that we should have certain suspicions about, is a place we might find ourselves able to try to design and develop and deploy some tools.

So, for me the bad news of this morning is how many of these problems probably fall into that intractable side of things. I think a lot of what we're talking about, whether it's the influence of money in politics—as much as I love my friend Larry Lessig, I'm not tremendously confident that we're going to strike at the root, certainly not by 2012. But it also strikes me that what we've gotten this morning is an incredibly hopeful set of tractable questions that have come out of all of this. Little experiments that we can actually try in the world and try to have a sense of whether or not they have an impact.

I look at Kathleen Hall Jamieson, whose experiment with FlackCheck literally asks: can we take on something that seems totally intractable, which is basically false speech and fearmongering within political advertising, and can we try a clever way to have a point of leverage and actually see if we might have an effect on how many of these ads actually air, or don't air. We should have some notion by the end of the 2012 cycle about the extent to which that works or doesn't work.

So what I'm really hoping we can start doing this afternoon is start shifting a little bit from the big questions, and particularly shifting from the intractables, and starting to put forward questions that we might be able to test out. And when I say test out, I mean test out experimentally. Where we're hoping to go tomorrow, on this hack day which we'll talk a little bit more about in the conclusion, it's not really a traditional hack day. A traditional hack day is a lot of guys like Gilad Lotan, who write code in their sleep, sitting down with a data set and trying to build some new tools around it. And we just don't have enough Gilads to go around for tomorrow.

What we do have, which is an amazing asset, is a whole bunch of people coming from some very, very different perspectives, who are thinking really hard and deeply about these issues, who can come together and think through some of these design challenges. What is a question that we want to test in this space of truth and truthiness? How would we set up an experiment to test it, either by organizing and trying to conduct something in the real world, like an email and a Twitter campaign, or by trying to build some tools that take us there?

So, the way that we're going to end up starting that conversation, and the direction that I'm hoping we can start shifting the frame of this discussion, is to try to think through those small, tractable questions. And just to give you a couple examples of ones that have come up this morning. You know, Fil Menczer is basically asking: are there network signatures that can tell us when someone is a bot and when someone is human? Great questions come up around this. Does it matter if you're a paid political activist and you say the same thing time after time again? Is it actually any different from being a bot? But it is the sort of question that we can put out and test.
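To make that kind of question concrete, here is a minimal sketch, not Menczer's actual method, of the sort of behavioral signature one might test: an account that mostly retweets, repeats the same text, and posts in rapid bursts scores as more bot-like. The feature set, the equal weights, the five-second threshold, and the toy data are all illustrative assumptions.

```python
from datetime import datetime, timedelta

def bot_likeness(posts):
    """posts: list of (timestamp, text, is_retweet) tuples for one account."""
    if len(posts) < 2:
        return 0.0
    texts = [text for _, text, _ in posts]
    # Share of posts that are retweets rather than original content.
    retweet_share = sum(1 for _, _, rt in posts if rt) / len(posts)
    # Near-duplicate content: how often the exact same text is repeated.
    duplicate_share = 1 - len(set(texts)) / len(texts)
    # Burstiness: share of consecutive posts less than 5 seconds apart.
    times = sorted(ts for ts, _, _ in posts)
    gaps = [(b - a).total_seconds() for a, b in zip(times, times[1:])]
    burst_share = sum(1 for g in gaps if g < 5) / len(gaps)
    # Equal-weight average of the three signals, in [0, 1].
    return (retweet_share + duplicate_share + burst_share) / 3

start = datetime(2012, 3, 6, 12, 0, 0)
suspect = [(start + timedelta(seconds=2 * i), "Vote YES on everything!", True)
           for i in range(50)]
print(f"bot-likeness: {bot_likeness(suspect):.2f}")  # prints a value near 1.0
```

A hack-day-sized experiment would be to run a score like this over accounts already known to be human and already known to be automated, and see whether it separates them at all before investing in anything fancier.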

You know, when Susan Crawford opens with this amazing story about being able to figure out who killed the newspaper seller in the London riots, it's an open question about whether there are certain factual questions where we can immediately open it up to crowdsourcing and try to find information that doesn't exist there.

So the challenge that I want to put forth is: let's take the big ideas, let's get them down to smaller questions. Let's start working through the frame of what we're going to do tomorrow, which is figuring out how we actually test out those questions. So in particular, if you have some way of taking the amazing material that's been put in front of us and putting it in the realm of questions that we want to see answered, this is a great time to come and grab the precious mic time at this conference. So hands up if you want to jump in. Please. And introduce yourself first.


Audience 1: So I want to underscore the shaming point that you were making, and just say that we've probably underemphasized the role of elites here. It's much easier—potentially at least—to stop these things from starting than it is to undo the damage once it's been done. And in particular, this shaming may have a second-order effect where the elites anticipate the shaming and then are less likely to produce the misinformation in the first place. Now, the question, though, is to think about what smaller-scale versions of the shaming could be implemented in a context like tomorrow. So that's what I would be interested in people's thoughts on.

Ethan Zuckerman: So there's a great potential experiment, micro-shaming.

John Palfrey: Right.

Zuckerman: Is there some sort of social intervention we can try, where we can figure out whether shaming is effective even if it doesn't make it to the front page of The New York Times or onto PolitiFact?

Palfrey: I suspect it's happening, but if anybody puts that in 140 characters and puts it on #truthicon, we may get some answers from the crowd as well.

Kathleen Hall Jamieson: On flackcheck.org we're giving stinkweeds to reporters who air ads uncorrected, and we're giving orchids to those who hold consultants and candidates accountable for their misleading statements and ads. We know that people search their own names on the Web. We assume that they're going to find that they've gotten stinkweeds or orchids. And whether, as a result, they are less likely to air ads uncorrected and more likely to hold them accountable is a testable hypothesis. Go look at our stinkweeds and orchids, after you've emailed your stations.

Zuckerman: And testability on this may have to do with whether you start getting hate mail from reporters coming in and saying, "How do I get rid of that stinkweed next to my name?" Which is often a sign that you're in the right direction.

Melanie Sloan: Melanie Sloan from CREW. I have to say I'm really skeptical of this whole concept of shaming in general. I think shaming has really lost its power. And you can see that by the fact that everybody in America gets a second act no matter what terrible thing they've done. Including like a New York Times reporter who plagiarizes everything. So I find it hard to imagine that people involved in PR… Like, Berman's been exposed before for stuff and he's not ashamed in any way, shape, or form. He just does it again. So I don't actually think— And unless there are some studies that show that this really works, given how our society has moved in a way that shame seems to be far more ephemeral, that doesn't seem that useful to me.

Zuckerman: So, you'll remember—

Palfrey: You started it. I saw a ton of hands go up.

Zuckerman: I put shame on the table as an intractable rather than a tractable. We can certainly go back and forth on this one. Mike.

Audience 4: A couple things. One is I agree with Melanie in terms of at least my world, which is politics, political consulting. Shaming doesn't work except on a mass scale. You have to get to critical mass. You have to get to a certain level of intensity before shaming works, because just exposing people for being frauds doesn't do anything to them in the world of politics. Unless it's become so big that they start to be affected by it politically, economically.

Rush Limbaugh's a classic example. Rush Limbaugh has been doing his schtick for years, saying horrible things about all kinds of people. It didn't get to critical mass until this last week. And when it got to critical mass the advertisers started going away and he started to have to backtrack. But it took such a level of intensity before that happened.

One other comment I wanted to make that I think is important in all of this, whether it's shaming or anything else. In my view blatant mistruth is less of a problem than the old saying about… Well, I don't know. That old saying doesn't really apply. But I guess my thought is that I worry less about blatant mistruths because most political consultants, most PR people, try to avoid being blatantly wrong on the facts because they know they'll be exposed fairly fast, either through crowds or through fact-checkers or whatever. It's the folks who throw out facts that are completely out of context or completely… They may have a fact and have twenty facts contradict it but you know, it doesn't matter. And that's a much trickier and maybe close to intractable problem to solve.

Zuckerman: So, I should point out once again that my point wasn't really meant to be an advocacy of shaming. For all the shaming comments I would mention that ShameCon is actually three weeks from now. If you've been invited to that one you should be terribly, terribly embarrassed. So you don't want to tell anybody about it but you know… My question was merely more this question of trying to figure out what are the levers that we can work with, and I actually think the previous comment pointing out that shame has been pretty ineffective probably puts this more into the intractable side of it, where I think there may be some agreement on that.

Palfrey: I'm going to call on someone without her hand up but who is well known to many of us in the room. Our Dean Martha Minow has actually just walked in and was going to say a word of welcome and thanks. Dean Minow, thank you for coming over.

Martha Minow: [Comments are largely inaudible.]

Palfrey: Dean Minow, thank you for coming over. You're wonderful to host us. Martha Minow is the Dean of Problem Solving. She has introduced a mandatory problem solving course for all lawyers and so she's delighted that we're in that mode. Sorry, carry on.

Audience 5: Speaking about levers, I'm conscious that this is what they call in the security export world a dual-purpose tool. But what about advertising? We had a situation in the UK where an incredibly homophobic op-ed piece was published in The Daily Mail by a columnist called Jan Moir, and Twitter mobilized to contact the people that advertised with that newspaper. Shame doesn't work, but the bottom line might.

Zuckerman: So, another testable intervention and possibly a tested intervention, particularly as people look at the Rush Limbaugh reaction where a great deal of pressure is coming into advertisers via Twitter. And perhaps this is one of the circumstances where the ability to talk back is something that we can test as a method of response.

Other questions, comments, please. And introduce yourself please.

Ari Rabin-Havt: Ari Rabin-Havt from Media Matters. When we look at the world of misinformation, I like to phrase it as we think misinformation is most dangerous when it metastasizes. So, if there's a bubble of untruth—let's just use Fox News as the example. You know. I'm from Media Matters. They lie. They lie willingly. They lie knowingly, and they lie forthrightly as part of a strategy. And that's outlined in internal memos and other documentation.

Where the misinformation we see becomes truly dangerous is when it seeps outside of—when it goes from kind of the right wing echo chamber through Fox News, but then when it gets outside of there. So if there was a test to define how to kind of put a finger in the funnel, in a way, to stop the misinformation from seeping out of kind of the right wing swamps. Just away from Fox: there's a very popular radio host named Alex Jones who spews all sorts of garbage on a day-to-day basis—9/11 truther stuff, that kind of stuff. But his stuff stays within his large audience so it doesn't have a broad cultural impact. So the question is: is there a way to stop the cultural impact at the funnel point?

Zuckerman: So, two questions there. One is whether we could figure out when information is crossing from one echo chamber into a broader space. And another question, you know…not an easy one, with all of our conversations about fact-checking: whether there's a possible intervention that one could jump in and put into play when it looks like something's leaving one conversation and going into a broader conversation. Kai.

Kai Wright: To pick up on what Mike was saying, I wonder if there is in fact some sort of tool or something to build that is not a fact-check but a context check. I mean, to hijack our Joe Arpaio example from earlier, you know, he is an absurd person. He's an absurd figure, who just two weeks prior to—well, more than two weeks, but a month or so prior to that, the headline in Politico was "Joe Arpaio Racially Profiles, Justice Department." Which was a variation on the one we saw about Joe Arpaio says Obama's birth certificate doesn't exist.

The only reason Joe Arpaio was saying Obama's birth certificate doesn't exist right now is because he's being investigated by the Justice Department, he's trying to change the subject. So I share Mike's concern that there's the issue about specific facts, but we get lost sometimes I think in the debate over a given set of facts to the detriment of debate over the untruthful context. And is there a way to check the context? Which maybe that goes beyond technology, I don't know. But that I think is actually a greater concern.

Zuckerman: But I think it fits well within this theme of questions that we could try and test, which is to figure out if there's some way that we could put context into a story, so that when Joe Arpaio comes up we get some context of where it's coming from.

Palfrey: Ethan, we're at time, just to note. And we've got like four hands up. Should we maybe take four more quickly, what do you think?

Zuckerman: Yeah, let's take four quick comments and not react. Let's go Judith. Let's go Ellen. Let's go Dan, and then the gentleman here, and that's as far as we're going to get in this one.

Audience 8: Your talk about shaming made me think about studies that have been done on lie detector testing. Because a lie detector doesn't test whether you're telling a lie. It really tests your own feeling about your telling the lie. So if someone is a sociopath who really has no guilt and no qualms, it certainly doesn't show up at all. They really can only test: are you stressed? Are you feeling guilty?

And so I think an interesting path of your shaming piece is this notion of trying to come up with some type of typology or classification of the types of people who are promulgating these lies. Because there's the group that's sort of like the sociopaths of politics, who deeply believe what they're saying or have no compunction about it. There are the politicians who may indeed have some guilty feeling, for whom the shaming would work because they realize they may be doing something wrong but they have their eyes on the prize. There are the ones that are motivated by money, etc. So understanding those sets of underlying motivations may be the key to understanding different types of useful reactions.

Zuckerman: Thanks, Judith. Ellen.

Audience 9: So, in response to the notion of whether there are tools that can be built that would, you know, sort of look at the provenance of language and where it spreads and how it spreads: we've been working with the Media Standards Trust in the UK—I don't know how many people know their Churnalism site. But this is a site that has a database of press releases, and a database of news stories. And it can track how many journalists are churning the press releases.

So they've just open sourced their code thanks to a grant from Sunlight. We're developing the same site for the US. But more importantly, we're actually using their code to look at, at the moment, regulatory comments, to see how many comments on an EPA rule actually came from a single source, or a double source, or how many sources they came from. So with these tools, you guys can help figure out, you know, how to improve the kind of stuff that we're building, but it is possible to do this.
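As a rough illustration of the kind of "churn" detection being described, and not the Media Standards Trust's actual code, one might flag documents that share long runs of wording with a common source by comparing word n-grams. The shingle size, the 0.3 threshold, and the sample texts below are made-up assumptions for the example.

```python
def shingles(text, n=5):
    """Set of word n-grams (shingles) from a text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap(source, doc, n=5):
    """Fraction of the document's shingles that also appear in the source."""
    src, d = shingles(source, n), shingles(doc, n)
    return len(src & d) / len(d) if d else 0.0

press_release = ("The agency announced a bold new rule to protect consumers "
                 "from hidden fees and misleading advertising practices")
comments = [
    "I support the bold new rule to protect consumers from hidden fees and "
    "misleading advertising practices announced by the agency",
    "This rule will hurt small businesses in my town and should be withdrawn",
]
for comment in comments:
    label = "likely copied" if overlap(press_release, comment) > 0.3 else "independent"
    print(f"{label}: {comment[:45]}...")
```

The same comparison works in either direction: press release against news story for churn, or a template letter against thousands of regulatory comments to count how many trace back to a single source.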

Zuckerman: And speaking of the guys who are building these things…

Dan Schultz: Hi, I'm Dan Schultz. I'm at the MIT Media Lab and Center for Civic Media. So, a couple questions that I have. Priming with self-affirmation has been shown to be effective in helping combat motivated reasoning. And I'm curious how we can implement self-affirmation techniques in the real world. And if there are other forms of priming that might work and make people a little bit more receptive to fact-checks.

I'm also curious in general sort of how much do people value truth and honesty to begin with? And can that value be leveraged to change the dialogue? This is kind of like looking at shaming flipped on its head. So instead of trying to shame people…

And then third, I just wanted to note that— So I'm working on Truth Goggles, and I've tried to split— It's a credibility layer for Internet content, so it's trying to connect the dots between the content you're looking at and fact-checks. And I've found sort of three tractable, or semi-tractable, problems. The first is: what does the interface look like? So that's kind of getting at the self-affirmation/priming questions. The second is: where do these facts come from and how do you scale collecting facts? And the third is: how do you find instances of facts in the news in an automated way? So I'm not answering all three as part of the thesis, but I think those are three questions worth asking.
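For the third of those problems, finding instances of already-checked claims in an article automatically, a first cut might be fuzzy string matching between article sentences and a small database of fact-checked claims. This is a sketch under that assumption, not Truth Goggles itself; the claims, verdicts, threshold, and article text are all invented for illustration.

```python
import difflib

# Tiny illustrative "database" of previously fact-checked claims.
fact_checks = {
    "the stimulus created zero jobs": "rated false by a hypothetical checker",
    "the bill cuts medicare by five hundred billion dollars": "rated half-true",
}

def annotate(article, threshold=0.6):
    """Yield (sentence, verdict, score) for sentences resembling a checked claim."""
    for sentence in article.lower().split("."):
        sentence = sentence.strip()
        for claim, verdict in fact_checks.items():
            score = difflib.SequenceMatcher(None, sentence, claim).ratio()
            if score >= threshold:
                yield sentence, verdict, round(score, 2)

article = ("The senator repeated that the stimulus created zero jobs. "
           "He then moved on to foreign policy.")
for sentence, verdict, score in annotate(article):
    print(f'[{score}] "{sentence}" -> {verdict}')
```

Simple character-level similarity like this breaks down quickly on paraphrase, which is exactly why scaling the claim database and matching claims in the wild are named as separate open problems above.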

Aaron Naparstek: Hi. I'm Aaron Naparstek. I'm a Loeb Fellow at Harvard. And in 2006 I started a blog called Streetsblog. And you know, I guess my point is, in my experience it's not necessarily that difficult to counter this stuff. I am, and was, part of a movement in New York City that was really oriented towards trying to reform the New York City Department of Transportation, to make things better for pedestrians and cyclists and transit riders in New York City. So a very specific, niche issue. And you know, we were sort of up against eighty years of culture and policy that was aimed at moving motor vehicles through New York City.

And it didn't take that much to kind of put a new perspective out there. Really it took two journalists working five days a week, full-time, to help create an entirely new perspective on what streets could be in New York City, that streets could be public spaces and places for bikes and buses that move quickly. And ultimately that helped create substantial policy change. And so I mean, I kind of have a hopeful sense of this because of my experience with this niche issue: when you really start focusing on a niche and kind of professionalize it and move yourself outside of that mainstream media world, I think you can have a lot of impact.

Zuckerman: So that's a wonderfully helpful intervention. And I think having that sense that tractability may have something to do with how big the scale of the issues are, whether you're going after the fundamental left/right splits in the United States polity, or whether you're going after issues where left and right might actually come together and say "less dead bicyclists in New York would be a good thing," these might be places where we might have the possibility of making some progress.

So, back over to you, John, to introduce our next two moderators?

Palfrey: That's excellent. Thank you Ethan, and Urs and others. Thank you for this synthesis section.

Further Reference

Truthiness in Digital Media event site

