So, academia’s characterized by argument, by cut and thrust, by disagreement. But there’s a very special kind of disagreement that’s hard to make progress from, and that’s denialism. This is manufactured controversy, from people who benefit from making it seem like there’s controversy about something where there is no actual controversy among practitioners.
The canonical example of this, of course, is smoking. The cancer denial movement, in which high‐paid consultants to the tobacco industry spent decades casting doubt on the causal link between tobacco and lung cancer, was fantastically profitable, and it became the template for all of the denial movements that followed.
The next movement that really caught fire was of course the AIDS denial movement, which had incredibly grave consequences, consequences that probably rival cancer denial for the kinds of fallout we experience from them. One of the loci of this is a guy named Matthias Rath, a German doctor who ran a very profitable vitamin business in the EU. He claimed that AIDS wasn’t caused by HIV but by a vitamin deficiency that could be treated with his products. So he ran full‐page ads all through South Africa that said “Why should South Africans be poisoned with AZT?”, to encourage people not to take antiretrovirals and instead to take vitamins and nostrums.
And as if this wasn’t bad enough, he had the ear of the president, Thabo Mbeki. Through him and through the health minister, South Africa effectively used vitamins instead of antiretrovirals for several years to treat HIV, which led to as many as three hundred thousand people dying. During this period, the proportion of South Africans who were HIV‐positive rose from about 1% to about 25%.
And of course one of the elements of denial is that when people call you out on it, you have to be able to silence them. This is where the forbidden research came in. When Ben Goldacre, an epidemiologist and scientist, published on this in the Guardian newspaper, Rath sued the Guardian, and it spent about three hundred thousand pounds winning the right to publish its story about the fallout from AIDS denial.
AIDS denial begat climate denial, which is alive and well today: the non‐controversy about anthropogenic climate change. And then the kind of denial that’s maybe particularly relevant to an MIT audience, the kind of denial I’m going to talk about mostly today, is Turing‐completeness denial.
So, we only really know how to make one kind of computer. That’s the computer that can run all the programs that we can express symbolically. But for lots of reasons, people would like it to be possible to make computers that can only run programs that don’t make you sad. That would be great. It would be awesome if we could make printers that couldn’t also be vectors for malware. But the reason we haven’t done that isn’t because the nerds are refusing to cooperate with the forces of right and justice. It’s because this is the computer we know how to make.
One of the canonical examples of Turing‐completeness denial is digital rights management: the idea that if you want to stop people from running programs that copy files you’d rather they didn’t copy, you can encrypt the file and send it to them, along with the key, but ask their computer not to let them know what the key is. The technical term for this in security circles is “wishful thinking.” We don’t keep even really good safes in bank robbers’ living rooms, not because safes don’t work, but because you can’t give your adversary the key and then hope they can’t figure out where you hid it. Especially when your adversary might be a grad student with nothing to do this weekend, a bunch of undergrads hanging around like a bad smell, and a lab with an electron‐tunneling microscope that’s going idle.
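The mechanics can be sketched in a few lines. This is a toy cipher with hypothetical names, not any real DRM scheme, but the structural point holds: for playback to work at all, the key and the plaintext must both exist on the user’s machine, so “hiding” the key from the machine’s owner is the entire security model.

```python
# Minimal sketch (toy cipher, hypothetical names) of why client-side DRM
# can't keep a secret from the machine's owner: the player must hold both
# the ciphertext and the key, so the owner can always recover the plaintext.

KEY = b"not-actually-secret"  # "hidden" somewhere inside the player binary

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # Toy symmetric cipher standing in for a real one; the same routine
    # both encrypts and decrypts.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def drm_player(ciphertext: bytes) -> bytes:
    # To play the file, the player must decrypt it -- which means the key
    # and the plaintext are both on the user's computer, under their control.
    return xor_cipher(ciphertext, KEY)

song = b"any bits a computer can play, it can also copy"
locked = xor_cipher(song, KEY)

# The "attacker" is the recipient: they run the player and they hold the key.
assert drm_player(locked) == song
```

The safe analogy above maps directly: `locked` is the safe, `KEY` is the key left in the robber’s living room, and `drm_player` is the lock that the owner can watch being opened.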
But Turing denial isn’t just about DRM. One of its most virulent manifestations in the last two years has been cryptography denial, something we thought we had put behind us in the 1990s during the Clipper chip debate, but which has resurfaced. This is the idea that we can make cryptography that works perfectly well except when it needs to catastrophically fail, at which point it will catastrophically fail but continue to work for all the people it shouldn’t catastrophically fail for.
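The problem with “exceptional access” can be shown with the same kind of toy sketch (hypothetical names, a stand-in cipher): an escrowed master key is just a key, and the math has no way to check whether the person holding it is law enforcement or an attacker who stole the escrow database.

```python
# Minimal sketch (toy cipher, hypothetical names) of key escrow: a backdoor
# key decrypts for whoever holds it. The cipher cannot distinguish lawful
# access from a stolen copy of the same key.

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # Toy stand-in for a real cipher; the same routine encrypts and decrypts.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

ESCROWED_KEY = b"golden-key"  # the copy held "only" by the authorities

message = b"private conversation"
ciphertext = xor_cipher(message, ESCROWED_KEY)

lawful_access = xor_cipher(ciphertext, ESCROWED_KEY)
stolen_access = xor_cipher(ciphertext, ESCROWED_KEY)  # same key, same math

assert lawful_access == message
assert stolen_access == message  # the math can't check a warrant
```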
And of course the other kind of Turing‐completeness denial that we have is privacy denial: the idea that if you have nothing to hide you have nothing to fear; that secrecy is the same as privacy; that because I know what you do when you go into the toilet, it shouldn’t be your right to close the door. That kind of denial has really caught fire over the last fifteen or twenty years, as surveillance capitalism has risen to its current preeminence.
And the thing about denial is that it begets nihilism. Denial matters because the things being denied (the harms of privacy invasion, anthropogenic climate change, AIDS, cancer) are real. And the non‐solutions that arise when you deny them don’t solve those problems, which are real and getting worse, because they’re not being addressed by policy, because we can’t address them, because we’re in denial about them.
So for many years, people who were worried about the risk of cancer smoked light cigarettes, as though there were a kind of cigarette that didn’t give you cancer. It’s not true. In case you were wondering, there isn’t a kind of cigarette that doesn’t give you cancer. There were people who went on having unprotected sex without taking antiretrovirals, because they’d been told that AIDS was not a sexually transmissible illness but a vitamin deficiency, and if it’s a vitamin deficiency then it doesn’t matter if you continue having unprotected sex. Recall that South Africa’s rate of infection went from 1% to 25% during its period of official denial. Or, you know, we insist that there are ways we can build on floodplains or continue to emit a lot of carbon, and that this somehow won’t cause lots of catastrophic problems down the line.
In the realm of DRM, we insist that the reason that artists aren’t getting their share of the income that’s being generated for their works isn’t that they have bad relationships with the firms that monetize their work, it’s that their audience has failed to watch their TV shows in the right way, or listen to their music in the right way. And we insist that somewhere out there is a tool that will force people to listen to music in the right way, and that that will somehow put money in the pockets of artists as opposed to, for example, organizing to insist that they get better contractual arrangements with their publishers, labels, and studios.
Or in the realm of crypto denial, we build out infrastructure that has known holes in it, holes that we put there so that the backdoors can be used by law enforcement. And then this stuff, once it’s out in the field, can never be remediated, because it’s sitting in remote locations where we have a hard time patching it. You may have seen that yesterday CERT published an advisory on vulnerabilities in the baseband radios in mobile devices. There are literally billions of these devices in the field. We will never, ever patch all of them. If you can develop an attack against those baseband radios, you can bypass the operating system of the mobile device, access its data, and implant malware on it. Even hardened devices, devices that use strong cryptography, devices that are keyed to anticipate attack, are unprotected on the baseband side.
And we encourage the formation of businesses based on siphoning off ever‐larger caches of our sensitive information, on more and more insane and improbable bets that someday we’ll figure out how to turn all of it into giant amounts of money. It’s the old‐lady‐who‐swallowed‐the‐fly problem, right? Once you accept that we need to solve this problem by smoking lighter cigarettes, by taking more vitamins, then it begets another problem: you must not be taking the right vitamins. You must not be smoking light enough cigarettes. You must not be trying hard enough to lock down hardware so that users can’t reconfigure it.
So, the problem is still there. The solution hasn’t worked. And the denial movement won’t admit it, because to admit it would be to admit that they were wrong. Instead we pass a law that says disclosing vulnerabilities in DRM is a felony punishable by five years in prison and a five‐hundred‐thousand‐dollar fine. Because although we know that the DRM can be broken, we assume we can just silence the people who discover those flaws. And this makes things a lot worse, right? If you’re not allowed to tell people about flaws in systems they rely on, it doesn’t mean those flaws won’t get weaponized and used against them. It just means they’ll never know about it until it’s too late. Everyone should have the absolute right to know whether or not the technology they rely on is working.
And we also create these gigantic terms of service that say that privacy isn’t a problem because by standing in the vicinity of a thing that’s siphoning off your personal information, you’ve agreed that it’s allowed to take all your information and also wear your underwear and make long distance calls and eat all the food in your fridge and punch your grandmother. And therefore there is no privacy problem, because you’ve agreed that there is no problem.
So we spend more money, we take more measures, we waste more of everyone’s time, and then it starts to feel like it’s too much trouble to even bother. It’s a fact of life. Sure, cigarettes are gonna kill me someday, but what the hell, it’s too late now. There’s already so much carbon in the atmosphere, why should we stop driving? The entertainment industry is going to insist on digital rights management no matter what we do; why shouldn’t we just accommodate them and put it in all of our technology? I’m gonna leak my data no matter what, so I might as well join Facebook and get invited to some parties on the way to the information apocalypse.
And so it creates this idea that there’s no future. That you might as well just give up. But there is an alternative. Because at a certain point, no matter how much denial and FUD there is, the problem becomes undeniable, right? Even though we can’t agree on the cause, we can agree that there is a problem. So with privacy, for example, the US government says that the Computer Fraud and Abuse Act makes violating terms of service a felony. And as Ethan just described, this means that we can’t investigate in depth how services gather information and use it, because in order to do so we have to violate their terms of service. And since the terms of service have the power of law, we risk going to jail just to find out what’s going on.
Or the Digital Millennium Copyright Act, the 1998 statute that has lots and lots of clauses. Section 1201, according to the US government, makes it a crime to investigate systems that have digital rights management in them and to divulge their flaws or their workings.
Digital rights management thus becomes a kind of attractive nuisance, because once you add a skin of digital rights management around your technology, you can sue anyone who breaks it, even for lawful purposes. Which means that it ends up metastasizing into all kinds of things. We have light bulbs with digital rights management. Philips, who make the Hue product, last year briefly introduced a firmware update that caused their light sockets to reject non‐Philips light bulbs. And since you had to bypass their DRM in order to remediate this, it briefly became a felony (or a potential felony) to plug a light bulb of your choosing into your own light socket, until Philips, cowed by the outrage, rolled the update back. But who, faced with this opportunity to add restrictions to their technology that let them dictate how people can use it, to make it as profitable as possible for them, wouldn’t choose to adopt DRM? What industry wouldn’t thank the government for that gift and take it on board?
So now we see digital rights management in technology as diverse as pacemakers, thermostats, cars, medical diagnostic equipment, baby monitors, insulin pumps, cat litter boxes, smart light bulbs, and these babies, the Internet of Things rectal thermometer. You literally have digital rights management up the ass at this point.
The privacy and security implications of all of these devices being off‐limits to investigation, to security auditing, and to disclosure are figuratively thermonuclear. But it’s literally potentially lethal for you not to be able to know how these systems work and whether or not they have flaws in them.
Now, at a certain moment, because these problems become so visible to us, we hit a kind of moment of peak indifference: the moment when the number of people who care about this stuff is never going to go down again. That’s not the moment at which the tide changes in the policy debate, but it is the moment at which the activist tactic changes. Because although your job may have been for twenty years to convince people that this stuff mattered, all of a sudden your job becomes convincing people that there’s something they can do about it, because people have agreed that this stuff matters.
So: when enough of us have watched a loved one die of lung cancer; when climate refugees can no longer be ignored because they’re literally washing up on your shore; when data breaches destroy the lives of millions of people every week. For example, the Office of Personnel Management leaked over twenty million records of people who had applied for security clearance in the United States a little over a year ago, and suddenly the privacy debate in those circles changed.
I went to this Rand war game exercise about information breaches, and all of the cops and spooks in the room, every time someone proposed a solution that involved allowing lots of information into the public domain and allowing it to be handled by unvetted parties, they rejected those solutions out of hand, as though they were completely unfit for purpose. And I couldn’t figure out why until one of them said “Office of Personnel Management” and I said, “Oh yeah, right. You had to sit down with a government official and tell them everything that could be used to blackmail you as a condition of your security clearance. So they know about your mom’s suicide attempt and the fact that your brother is in the closet and the fact that you’re HIV‐positive and haven’t disclosed it to your coworkers. And all of that information was breached, probably to the Chinese government, last year. Of course you now care about privacy.”
And they’re not the only ones. It’s not just people whose data gets leaked this way; people also have their devices breached by voyeurs who spy on them. You may have heard of remote access trojans, or RATing. This is when an attacker breaks into someone’s laptop, spies on them through the laptop’s camera, and captures images of their incidental nudity along with their keystrokes as they type in their social media passwords. Then the attacker combines those two things: “I will publish these incidental nude images on your social media channels,” to blackmail victims into performing live sex acts on camera.
When RATers get arrested, they don’t just have one or two victims; they often have hundreds. The FBI raided a hundred RATers last year. The most prolific had four hundred victims, many of them underage, all over the world. There was a widely publicized case in Canada where a young woman committed suicide after being hounded by a RATer. She was a teenager, and the RATer was releasing her information into her social media channels, where her fellow students could see it, which led to her being bullied and killing herself.
So at a certain point, people find themselves unable to ignore these problems anymore. They find that their cars are being hijacked by networked attacks, the disclosure of which can lead to felony prosecution. Or that enough information has been pieced together through data breaches to get a duplicate deed for their house and sell it out from under them while they’re out of town. This happened in both New York and London in the run‐up to Christmas last year.
Or people realize that it’s a felony to reconfigure their devices to do what they want or to give them maximum value. Farmers are all up in arms about John Deere, which uses digital rights management to lock up the diagnostic information on the tractor, including the information that’s generated when you drive your tractor around the back forty and collect soil density data at a fine degree of resolution, which you could then use to broadcast your seed automatically. Except John Deere won’t give you that information. They sell it to you, along with a bundle of seed from partners like Monsanto. And removing that information from the tractor on your own, without their say‐so, risks DMCA prosecution. And so farming magazines are now worried about the DMCA and about DRM.
So, at that moment when everybody is suddenly caring about this stuff, that’s the moment at which nihilism can be averted. It’s the moment at which nihilism must be averted if you’re going to make a change. Peak indifference is the moment when you stop convincing people to care about an issue and start convincing them to do something about it. To quit smoking, to call for emissions reduction, to install crypto on their devices, to jailbreak everything. It’s the moment when you tell them the names of the people who personally benefited from their immiseration, and where those people live. The people who deliberately created this false controversy that made it impossible to effectively address these problems. That’s the moment when, if you catch it, you can move people from indifference to making a difference.
But you need principles if you’re going to make it happen. As the esteemed computer scientist Alexander Hamilton once said, “If you stand for nothing, what will you fall for?” Just because some rules are bad, it doesn’t follow that rules themselves are bad. You need principles that guide your work; a way to defend them against everyone, including future versions of yourself who might someday weaken or waver in their commitment to those principles; and a way to keep those principles up to date.
So, we have a really good example of this in our community: the GNU/Linux licensing regime, the GPL, which I’m sure you’re all familiar with. The free software movement has this principle: that computers should serve people rather than enslaving them. And as a way of implementing it, it has these three ideas (actually four ideas; there’s a zero in there): that you should be able to run code; that you should be able to understand your code; that you should be able to improve your code; and that you should be able to share what you’ve learned in improving your code with other people.
And it has a tactic for making that stick: the GPL, its software license. Once you license code under the GPL, there are no backsies. You can’t revoke that license. Which means that if you start your business full of high‐minded ideals about how you’re going to change the world by opening up computers and software, then no matter how desperate things become, no matter how many people’s mortgages are on the line because you can’t make payroll, no matter what your acquisition suitor wants you to change, no matter how angry your investors get at you, you can never de‐GPL your code. And in fact, if you make it known that your code can never be de‐GPL’d, there’s a good chance no one’s ever going to ask you to de‐GPL it. It actually changes what people pressurize you to do.
There’s a name for this: it’s called a “Ulysses pact.” It’s named after Ulysses, who lashed himself to the mast when he was sailing into siren‐infested waters, to make sure that when the sirens sang he wouldn’t jump into the sea. When you license your code under the GPL on day one, you’re doing something equivalent to throwing away your Oreos on day one of your diet. You do that not because you’re weak‐willed and won’t be able to resist the siren song of Oreos, but because you are strong‐willed enough to know that there will come a day when your will will waver. And so you bind yourself now against taking a bad course of action in the future.
Now, the early pioneers of the Internet wanted to build something decentralized, open, and free. But as we’ve learned, we ended up building history’s biggest surveillance device, so advanced that governments facing social unrest sometimes leave the Internet on instead of turning it off, because the best way to control a population in revolt is to know exactly what they’re doing through their devices.
And no one is the villain of their own story. The net pioneers who made the compromises that made the Internet what it is today didn’t decide to sell out; they made a tiny compromise. And because we’re only really capable of detecting relative differences, they made another little compromise, and another little compromise, each one of which felt very small. And we ended up where we are today.
And there’s this great project under way today to re‐decentralize the Internet. But if we’re going to do that we also need to figure out how to prevent it from de‐re‐decentralizing. We need rules to guard us from our future selves and the moments of weakness that we’ll have. The rules that we make today when we’re pirates to guard us from the admirals that some of us will inevitably become.
So I propose a couple of rules. The first one is that computers should always obey their owners: when a computer gets a message from a remote party that contradicts what the person who owns the computer wants it to do, the person should always win. And the second one is that true facts about computers should always be legal to disclose, especially facts about their security vulnerabilities.
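The first rule is, among other things, a design decision you can state in code. Here is a minimal sketch (the function and command names are hypothetical, purely illustrative) of owner override as a conflict-resolution policy:

```python
# Minimal sketch (hypothetical API) of the owner-override rule: when a
# remote party's instruction conflicts with the owner's, the owner wins.
from typing import Optional


def resolve_command(owner_cmd: Optional[str],
                    remote_cmd: Optional[str]) -> Optional[str]:
    # The owner's instruction always takes precedence; a remote command
    # only applies when the owner has expressed no preference at all.
    if owner_cmd is not None:
        return owner_cmd
    return remote_cmd


# A remote "stop playback" message cannot override the owner's choice.
assert resolve_command("keep playing", "stop playback") == "keep playing"
# Absent an owner instruction, the remote request can proceed.
assert resolve_command(None, "install update") == "install update"
```

The point of writing it this flatly is that any real system obeying the rule reduces to this precedence check, wherever the conflict happens to surface.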
So how do we make those stick? Well, we can build them into our license terms and the conditions of membership in our consortia. We can make them conditions of regulatory approval. We can say that if the FDA is going to bless your medical implant, it has to bind the company that makes it not to invoke laws that prevent disclosure of flaws or owner override of the device. We can incorporate them into the definition of open standards.
Your own rule‐breaking needs to have principles like these: simple, minimum viable agreements. The rules for rule‐breaking. Principles so hardline that they call you an extremist. In fact, if they’re not calling you an extremist, you’re probably not doing it right. And you will need Ulysses pacts. You’ll need tools to stop you from becoming compromised when you get old and tired.
The werewolf’s sin is not turning into a werewolf, it’s failing to lock himself in the basement when the full moon comes. Your trick will not be to stay pure. Your trick will be to anticipate the moments of weakness in the future and to make sure that you can guard yourself against them.
I think Kit has an announcement to make. So, as I was coming up, I was wondering whether or not I’d be able to say this, but I think Kit can say this.
Kit Walsh: I’m Kit Walsh. I’m a staff attorney at EFF. And I would like to announce that one of the laws that Cory mentioned, Section 1201 of the Digital Millennium Copyright Act, prevents security research, and prevents you from accessing media in order to remix it or recast it. We have principles that govern what sorts of rules we can have, and they’re in the Constitution. And this morning, we filed a case against the government, challenging the constitutionality of Section 1201 of the DMCA as contrary to the First Amendment. We brought that case on behalf of security researcher Matt Green and technologist Bunnie Huang and his company AlphaMax, but really on behalf of the entire public and everyone who wants to make lawful uses of copyrighted works for research and expression. Thanks.
Doctorow: Thank you. So, that’s our next several years taken care of. We’re going to be working on changing the law. And not just changing it here, but because all of the countries around the world have been arm‐twisted into adopting their own versions of Section 1201 of the DMCA, things will be ripe for our allies and colleagues around the world to think about revoking it there, too. EFF has a project called the Apollo 1201 Project, whose goal is to end all DRM in the world within a decade, and this is our opening salvo. Thank you all very much.
Session liveblog by Willow Brugh et al.