Oumou Ly: Welcome to The Breakdown. My name is Oumou. I’m a staff fellow on the Berkman Klein Center’s Assembly Disinformation program. Our topic of discussion today is CDA 230, Section 230 of the Communications Decency Act, otherwise known as the twenty-six words that created the Internet. Today I’m joined by Daphne Keller from the Stanford Cyber Policy Center. So, thank you for being with us today, Daphne, I appreciate it. Especially in helping us unpack what has turned out to be such a huge and consequential issue for the November election, and certainly for technology platforms and all of us who care and think critically about disinformation.
One of the first questions I have for you is kind of a basic one. Can you tell us a little bit about CDA 230 and why it’s referred to as the twenty-six words that created the Internet?
Daphne Keller: Sure. So first, I strongly recommend Jeff Kosseff’s book, which coined that “twenty-six words” phrase. It is a great history of CDA 230 and it’s very narrative, you know. It just sort of explains what was going on in the cases and what was going on in Congress. So, it’s not just something for legal nerds and lawyers. It’s a really useful reference.
But maybe just to explain CDA 230’s role, I’ll pull back a little bit to the big picture of intermediary liability law in the US generally. So intermediary liability law is the law that tells platforms what legal responsibility they have for the speech and content posted by their users. And US law falls into three buckets. There’s a big bucket which is about copyright, and there the governing law is the Digital Millennium Copyright Act, the DMCA. And it has this very choreographed notice and takedown process. And through Harvard’s Lumen database actually, there’s been just amazing documentation of how that process is abused, how much erroneous over-removal it leads to, kind of just what happens in that kind of system. That’s one big bucket.
The other big bucket that doesn’t get a lot of attention is federal criminal law. There’s no special immunity for platforms for federal criminal law, and so if what you’re talking about is things like child sexual abuse material, material support of terrorism…those things, the regular law applies. There is no immunity under CDA 230 or anything else.
And then the last big bucket, the one we’re here to talk about today is CDA 230, which was enacted in 1996 as part of a big package of legislation, some of which was subsequently struck down by the Supreme Court, leaving CDA 230 standing as the law of the land. And it’s actually a really simple law, even though it’s so widely misunderstood that there’s now a Twitter account, Bad Section 230 Takes, just to retweet all the misrepresentations of it that come along.
But what it says is first, platforms are not liable for their users’ speech. Again, for the category of claims that are covered. So this isn’t about terrorism, child sex abuse material, etc. But for things like state law defamation claims, platforms are not liable for their users’ speech.
And the second thing it says is also, platforms are not liable for acting in good faith to moderate content, so to enforce their own policies against content they consider objectionable. And that second provision was very much part of what Congress was trying to accomplish with this law. They wanted to make sure that platforms could adopt what we now think of as terms of service or community guidelines, and could enforce rules against hateful speech, or bullying, or pornography, or just the broad range of human behavior that most people don’t want to see on platforms.
And the key thing that Congress realized, because they had experience with a couple of cases that happened at the time, was that if you want platforms to moderate, you need to give them both of those immunities. You can’t just say, “You’re free to moderate, go do it,” you have to also say, “And, if you undertake to moderate but you miss something and there’s you know, defamation still on the platform or whatever, the fact that you tried to moderate won’t be held against you.”
And this was really important to Congress because there’d just been a case where a platform that tried to moderate was tagged as acting like an editor or a publisher and therefore facing potential liability.
So that’s the core of CDA 230, and I can talk more if it’s helpful about sort of the things people get confused about like the widespread belief that platforms are somehow supposed to be “neutral,” which is—
Ly: Well yeah! Would you please say a few words about that, yes.
Keller: Yeah. So, I mean Congress had this intention to get platforms to moderate. They did not want them to be neutral, they wanted the opposite.
Ly: Right. Exactly right.
Keller: Yeah. But I think a lot of people find it intuitive to say well it must be that platforms have to be neutral. And I think that intuition comes from a pre-Internet media environment, where kind of everything was either a common carrier like a telephone just interconnecting everything and letting everything flow freely, or it was like NBC News or The New York Times. It was heavily edited, and the editor clearly was responsible for everything that the reporters put in there. And those two models kind of don’t work for the Internet. If we still had just those two models today we would still have only a very tiny number of elites with access to the microphone. And everybody else would still not have the ability to broadcast our voices on things like Twitter or YouTube, or whatever that we have today.
And I think that’s not what anybody wants. What people generally want is they do want to be able to speak on the Internet without platform lawyers checking everything they say before it goes live. We want that. And we also generally want platforms to moderate. We want them to take down offensive or obnoxious or hateful or dangerous but legal speech. And so 230 is the law that allows both of those things to happen at once.
Ly: Daphne, can you talk a little bit about the two different types of immunity that are outlined under CDA 230; we call them shorthand (c)(1) and (c)(2)?
Keller: Sure. So, in the super shorthand, (c)(1) is immunity for leaving content up, and (c)(2) is immunity for taking content down. So, most of the litigation that we’ve seen historically under the CDA is about (c)(1). It’s cases—often, you know, really disturbing cases where something terrible happened to someone on the Internet, and speech defaming them was left up or speech threatening them was left up, or they continued to face things that were illegal. So those are cases about (c)(1): if the platform leaves that stuff up, are they liable?
The second prong, (c)(2), just hasn’t had nearly as much attention over the years until now. But that’s the one that says platforms can choose their own content moderation policy. That they’re not liable for choosing to take down content they deem “objectionable” as long as they are acting in “good faith.” And that’s the problem, it does have this good faith requirement. And part of what the executive order is trying to do is say, “Oh, well you have to meet the good faith requirement to get any of the immunities,” you know. If someone can show that you are not acting in good faith, then you lose this much more economically consequential immunity under (c)(1) for content that’s on your platform that’s illegal.
And sort of the biggest concern I think for many people there is, if this economically essential immunity is dependent on some government agency determining whether you acted in good faith, that introduces just a ton of room for politics, because my idea of what’s good faith won’t be your idea of what’s good faith won’t be Attorney General Barr’s idea of what’s good faith. And so having something where political appointees in particular get to decide what constitutes good faith, and then all of your immunities hang in the balance, is really frightening for companies.
And interestingly, today we see Republicans calling for a “fairness doctrine for the Internet,” calling for a requirement of good faith or fairness in content moderation, but for a generation it was you know, literally part of the GOP platform every year to oppose the fairness doctrine that was enforced for broadcast by the FCC. You know, President Reagan said it was unconstitutional. This was just like a core conservative critique of big government suppressing speech for…decades, and now it has become their critique, and they’re asking for state regulation of platforms.
Ly: That is so interesting to me. Both that and the fact that you know…CDA 230 in so many ways is what allows Donald Trump’s Twitter account to stay up. So it’s really really interesting that the GOP has decided to rail against it.
Keller: It’s fascinating.
Ly: So just recently the president signed an executive order concerning CDA 230 pretty directly. There was sort of an episode on social media where the president sent out a tweet, it was then labeled by Twitter—fact-checked in way. Can you talk a little bit about what the executive order does?
Keller: Sure. So I think… I wanted to start at a super high level with the executive order… In the day or so after it came out, I had multiple people from around the world reach out to me and be like, “This is like what happened in Venezuela when Chavez started shutting down the radio stations.” You know, it’s just sort of…it has this resonance of like—
Ly: It has that feel. Yeah.
Keller: —there’s a political leader trying to punish speech platforms for their editorial policies. And that— You know, before you even get into the weeds, that high-level impact of it is really important to pay attention to. And that is the reason why CDT, the Center for Democracy and Technology in DC, has filed a First Amendment case saying this whole thing just can’t stand and…we’ll see what happens with that case.
So then there are also in the executive order four other things that might be big deals. One is that DOJ is instructed to draft legislation to change 230. So, eventually that will come along and presumably it will track the very long list of ideas that are in the DOJ report that came out this week.
A second is it instructs federal agencies to interpret 230 in the way that the executive order does, this way that I think is not supported by the statute, that takes the “good faith” requirement and kinda…applies it in places it’s not written in the statute. Nobody’s quite sure what that means. Because there just aren’t that many situations where federal agencies…care? about 230, but we’ll see what comes out of that.
A third is that Attorney General Barr of the DOJ is supposed to convene state attorneys general to look at a long list of complaints. And if you’re an Internet policy nerd and you look at it, it’s just all the hot-button issues. Sort of like, are fact-checkers biased? Can algorithmic moderation be biased and—well, it can; how can you regulate that? You know, you will recognize these things if you look at the list.
And then the fourth one, and this is one that I think deserves a lot of attention, is that DOJ is supposed to review whether platforms—particular platforms are “problematic vehicles for government speech due to viewpoint discrimination,” and then based on that look into whether they can carry federally-funded ads. I think for most platforms the ad dollars part is not that big a deal, but being on a federal government block list of, you know, platforms with disapproved editorial policies…just like, has this McCarthyist feel.
Ly: Can you talk a little bit about the role of CDA in relation to the business models that the platforms run?
Keller: Sure. So, broadly speaking, the Internet could not exist the way we know it without something like CDA 230. And that’s not just about the Facebooks of the world, that’s about everything all up and down the technical stack. You know, DNS providers. Cloudflare. Amazon Web Services and other backend web hosting. And also tons of little companies, you know. The knitting blog that permits comments, or the farm equipment seller that has user feedback. All of those are possible because of CDA 230. And if you pull CDA 230 out of the picture, it’s just very hard to imagine the counterfactual of how American Internet technology and companies would’ve evolved. They would’ve evolved somehow, you know. And presumably the counterfactual is we would have something like what the EU has, which boils down to a notice and takedown model for every kind of legal claim? But they… You know, they barely have an Internet economy for these kinds of companies. There’s a reason that things developed the way that they did.
Ly: Yeah. Do you think that there’s any… Maybe not what you think, because I’m sure that we can all agree this is likely to be the case. If the liability protection that 230 offers platforms is removed, how would that change the way that platforms approach content moderation?
Keller: Well, I think a lot of little companies would just get out of the business entirely. And so there’s an advocacy group in DC called Engine, which represents startups and small companies. And they put together a really interesting two-pager on the actual cost of defending even frivolous claims in a world with CDA 230 and in a world without CDA 230, and it’s basically, you know, you’re looking at $10,000 to $30,000 in the best-case scenario for a case that goes away very, very quickly, even now. And that’s not a cost that small companies want to incur. And investors…you know, there are all these surveys of investors saying, “I don’t want to invest in new platforms to challenge today’s incumbents if they’re in a state of legal uncertainty where they could be liable for something at any time.”
So I think you just eliminate a big swath of the existing parts of the Internet that policymakers don’t pay any attention to. Like, you make them very, very vulnerable and some of them go away, and that’s troubling. And you create a lot of problems for any newcomers who would actually challenge today’s incumbents and try to rival them in serious user-generated content hosting services.
For the big platforms, you know for Facebook, for YouTube…they’ll survive somehow. You know, they’d change their business model. They probably… The easiest thing to do is to use their terms of service to prohibit a whole lot more, and then just take down a huge swath so they’re not facing much legal risk.
Ly: It’s hard to imagine living in that kind of a world.
Keller: It is. It is.
Ly: Yeah. Thank you so much for joining me today, Daphne. This was a great and enlightening conversation and I’m sure our viewers will enjoy it.
Keller: Thank you for having me.