Good morning, everyone. Nice to be here. So, what I want to talk about today is Internet giants and human rights. It’s a research project I’m currently working on, where I’m looking at what it means for human rights protection that we have large corporate interests—the Googles, the Facebooks of our time—that control and govern a large part of the online infrastructure. I want to go through four main themes, hopefully in twenty minutes max so we have at least five or ten minutes for discussion and questions.
First I’ll say a few words about the power of these companies, the role that they play. Then I’ll talk a bit about the challenge, from a human rights law perspective, of holding them accountable under human rights law. Then I’ll share some of my findings from my current research, based on empirical studies of Google and Facebook, where I’m looking at the sensemaking within the companies. How do they themselves see their role vis-à-vis human rights? And how is this sensemaking translated into their policies, their specific product features, their governance structures? And then a few words at the end about challenges and ways forward. I would also love to hear some of your comments on that last point.
The vast majority of us will increasingly find ourselves living, working, and being governed in two worlds at once.
Eric Schmidt & Jared Cohen, The New Digital Age, 2013 [presentation slide]
So if we start with the power that these actors have, this is a quote from two Google executives: “The vast majority of us will increasingly find ourselves living, working, and being governed in two worlds at once.” I think that’s a pretty strong quote. What Eric Schmidt of Google and Jared Cohen, the head of Google Ideas in New York, are basically saying here is that in the future we’ll basically be governed by two parties. Coming from being a former public diplomat a long time ago and now working in the human rights field, I find this relatively provocative, while at the same time I also understand why they’re saying it. And the more I talk to these companies, the more I get a sense of how they see these issues. So this was to give you one appetizer.
Although digital infrastructure frees speakers from dependence on older media gatekeepers, it does so through the creation of new intermediaries that offer both states and private parties new opportunities for control and surveillance.
Jack M. Balkin, “Old-School/New-School Speech Regulation”, 2014 [presentation slide]
Then if we go into academia, we have Jack Balkin, an American legal scholar, who’s saying here that it’s important we remember something. We’ve had such a strong narrative of freedom related to the Internet—the freedom, the way it has liberated us from classical gatekeepers. And that has been the narrative for a long time. But increasingly we are recognizing that we are subjected to structures of control, just by different parties and through different means. And one of these very strong structures of control is the private platforms where we basically conduct a lot of our public life.
And the thing about these new platforms, these new infrastructures of control—one of the things I think is interesting is that it’s the same infrastructure that gives us the freedom we so much cherish to express ourselves, to search for information, to find like-minded [people], to counter governments, etc. That very same structure also provides new means of basically mapping us, surveilling us, retaining an unprecedented amount of data about us. So there are really very few means of opting out. We are in structures that we find liberating but that at the same time entail new means of control.
As information becomes crucial to every aspect of everyday life, control over information (or lack thereof) may affect our ability to participate in modern life as independent, autonomous human beings
Niva Elkin-Koren 2012 [presentation slide]
One last quote, also from a legal scholar, Niva Elkin-Koren, who basically makes the point that as information becomes crucial to every aspect of our lives—I guess it always has been—the conditions under which we process and deal with information have changed, and control over those structures may influence the way we are able to participate in modern life.
To put it a bit differently, if a lot of decisions about us are being taken within structures where we don’t have access, where we’re not able to see the basis for decisions that affect our lives, then it’s basically a different kind of democratic or open society from what we’re used to.
Now, if we turn to Google and Facebook, for the sake of simplicity—it could be a number of other Internet giants, but I’m focusing on those two—they are very powerful. In terms of economic power, in February this year Google was the most highly-valued company in the world. It has since lost that position again to Apple, but it’s up there among the three most-valued [companies] in the world. I think its net value at the moment is around 550 billion US dollars. That’s a company that was only founded in ’98.
Facebook, in comparison, dates from 2004. They’ve only been on the stock exchange for four years, and Mark Zuckerberg is now the sixth-richest person in the world. So we are talking about an immense amount of wealth. I visited both of their headquarters in December last year, and I think it was only when I was there, in those physical surroundings, that I really grasped just how rich they are, just how many people they employ, and more importantly that all this money is basically generated from advertising. It’s generated from providing free services. That’s really amazing to think about, and still mind-boggling.
In terms of political power, Google now has the strongest lobby presence of all companies in Washington DC. They spend twenty million US dollars a year on lobbying alone, in the US and Europe. This is not meant as a pointed criticism of Google. My main message here is to have us all recognize that there is huge money involved and a strong link to political power; otherwise they wouldn’t maintain such a strong lobbying presence in major capitals.
Also, if we look at the flow of executives between these tech companies and government, people are basically moving back and forth between the US State Department, its counterparts in Europe, and these companies. So at the staff level as well there’s a great deal of circulation. There’s a close link between political power and these companies.
In terms of social power, they have huge social power because they have so many users. Basically, the vast majority of us use their services every day. And by and large that user base is pretty uncritical. We don’t have a big user or consumer movement or anything like that. We have campaigns here and there, but generally—and especially once you move outside countries like Germany or France, which are a bit more critical than average—that’s not the narrative. In the US there’s not a very critical narrative, and the same goes for many other parts of the world.
Finally, I put down technical power, because when you have so much wealth, and so much of that wealth goes into engineering, into artificial intelligence, into robotics, into algorithm development, etc., of course these companies also have a huge say in how the future of tech development looks and how it’s put to use.
So this was to give you a picture that these are not just some companies. They really have huge power and huge influence. Normally we would think that with great power comes great responsibility. But the tricky thing here is that the human rights treaties that were set out after the Second World War to protect citizens from abuse of power are all formulated with the state in mind. They were drafted and subscribed to at a time when we imagined power abuse, or potential power abuse, as abuse by the state. Private companies are not bound by human rights law. They might have taken up human rights a lot in their internal discourse, they are part of a lot of voluntary initiatives, and they do good things with regard to human rights, too. But they are not bound by human rights law. You cannot bring a private company before a human rights court. It’s all in the voluntary zone between the legal standards and corporate social responsibility, which is a more normative baseline.
The strongest standard-setting document we have in the field is the UN Guiding Principles on Business and Human Rights, drafted by Harvard professor John Ruggie in 2011. It’s been widely praised and adopted across the field. It speaks to the corporate responsibility to respect human rights and makes the point that all companies should take proactive measures to mitigate any negative impact they may have on human rights. So they should assess all their business conduct and ask: is there anything in our processes, our products, the way we treat our staff, the way we work in a local community, etc., that may have a negative human rights impact? And if so, try to mitigate that impact. That’s the core message with regard to companies. But it’s not binding. It’s a recommendation. Widely praised, but still a recommendation.
And then I’ve also listed probably the most relevant industry initiative related to the tech companies, the Global Network Initiative, founded with the involvement of the Berkman Center for Internet and Society. In the beginning only three or four tech companies were members; I think there are eight now, but all the main ones are in there. They’ve also set out a number of baselines and recommendations for how companies should ensure that their practices are human rights compliant. However, as I will come back to, there are real limitations to the way that human rights are thought about and implemented within the Global Network Initiative.
Now, moving on to some of the empirical work I’ve done. When I started this research I had a promise from the two companies that I would get access to talk to key policy people inside them. However, it has proved quite difficult to get that access; that challenge could deserve a talk of its own. I won’t go into it here, but in the end I’ve managed to do around twenty interviews, roughly fifty/fifty between Google and Facebook, with a few more at Google. I’ve also analyzed around twenty talks in the public domain. That’s the good thing about our age: you can actually find a lot of corporate executives and other staff talking about these issues at events such as this one, and afterwards you can listen to it. And often they are actually more frank in panel discussions and the like than when you talk to them on a one-on-one basis. So that has also been very useful. And finally, I’ve attended various policy events in these spaces and been able to carry out conversations there.
And as I mentioned initially, my idea has been to get a bit away from the naming-and-shaming discourse. To try to get inside, to understand almost from an ethnographic perspective how they understand and work with human rights. What is their sensemaking around these issues? Why is there such a strong disconnect between the way we—in my privacy community, my human rights community, and a lot of other communities I know of—think about these corporate actors and human rights, and the way they think about themselves? What’s going on, what’s the beef? And how does that understanding then influence the way they work?
So, since we don’t have that much time, I will go straight to some of the main conclusions. First of all, there is a strong presumption within these companies of doing good. And that actually makes a critical discourse a bit difficult, because they have a strong belief that they are basically liberators. They are very much anchored in the narrative of good-doers. And this is not to say that the Googlers and the Facebookers are not doing good; they also have great potential in many respects. But whereas older, more established companies have a more mature recognition that, as a company, there are various aspects of operating in a community that might be problematic, within this sector it’s really difficult to have that critical discourse. The presumption of being good-doers is so dominant.
Also, there’s a strong sense of being transformative with the use of technology, of really being at the forefront and all the time pushing the limits of what technology can do. And that means, for example, that if you raise privacy-critical issues, say in relation to some of Facebook’s practices, one response you will often encounter is: well, we need to push the use of the technology all the time; that’s our role. There’s always this sort of reluctance toward new practices, new changes, but gradually this whole practice of using technology, of using social networks, evolves. And we are part of that, and our role is to push the user all the time.
So, a sense of being at the forefront, of being very transformative. Yet when it comes to human rights, there’s actually a very conservative approach. By that I mean there is a sense that human rights threats mainly stem from governments. Human rights threats are something we like to talk about in relation to governments in countries we don’t approve of. The easy cases, so to speak: China, Cuba, North Korea, etc. There are many of these countries, and they can very rightly be criticized. But it’s just too simplistic to say that human rights problems and challenges only occur in these places. And especially when we talk about companies that have such a strong impact on users’ human rights, it’s important to have a recognition of the role they themselves may play, their own negative impact. And that recognition is not really there. It’s purely about governments. It’s about pushing back against repressive government behavior.
So in other words, the Ruggie guidelines I spoke about earlier, the UN Guiding Principles on Business and Human Rights, which speak to the need to assess all your business practices from a human rights perspective, are being translated into something that looks at business practices only in relation to government requests. So there will be a due diligence procedure if a government requests the company to shut down a service. But when they take decisions in relation to, for example, their terms of service enforcement or their community standards, there isn’t the same type of assessment. That isn’t perceived as a human rights issue, as a freedom of expression issue.
Now let’s zoom in a bit on some of the findings in relation to freedom of expression and privacy, which I’ve focused on mostly because they are the two human rights that I think most urgently need to be addressed. Other human rights would certainly also be relevant, but these have been my focus. So: a strong free speech identity in both companies. They’re born out of the US West Coast, not surprisingly. They think highly of free speech, and they see themselves as true free speech liberators playing a crucial role in that regard. They take strong pride in pushing back against government requests, and also in issuing transparency reports where you can see how many times they have accepted or accommodated a government request and under which conditions.
At the same time, there’s the enforcement of their own community standards. I’ve called them community standards here; the name varies by service. On Facebook it’s community standards, on YouTube it’s community guidelines, and on Google search it’s a narrower regime, so there are variations. But for simplicity I speak of community standards as the kind of terms-of-service enforcement that the platforms do. The volume of content removed here is many, many times bigger than what’s removed via government requests.
Facebook told me recently that they have, I think it was, one million items flagged each day—each day—by users who think Facebook should look at a specific piece of content and potentially remove it. Yet the processes whereby these decisions are made—whereby their staff or outsourced staff look at the requests, the decisions they make, the criteria for making them, how much content is removed and for which reasons, which content is not removed—all of that takes place in a complete black box, seen from the outside. It’s simply not possible to get access to that data.
So you have a huge amount of content being regulated in processes that are completely closed to the outside. And more importantly, these are not seen as freedom of expression issues. They are seen as a private company enforcing its rules of engagement. And from a strictly legal perspective, rightly so, because strictly speaking, freedom of expression law is about government restrictions on content on the Internet. And even though I think most people, including human rights lawyers, would agree that how much content a major platform removes of course has freedom of expression implications, you cannot bring it before a court as a freedom of expression issue unless you could really prove that there were no alternative means of expressing that content.
I’ll have to watch the time, I see. So: very high volume. A mix of norms. By that I mean that the norms deciding which content is removed and which is not are a mix of legal standards and “stuff that we don’t want” for other reasons—not because it’s illegal, but because it’s found inappropriate or unwanted, or goes against the community norms. And it’s based on what they call a “neighborhood watch” program, which basically means that we as users are the ones flagging the content we find problematic, and the service on the other side then makes decisions on what to do with that content. From a freedom of expression perspective, that’s also pretty problematic, because freedom of expression is precisely meant to protect those expressions that the community might not like but that nevertheless deserve to be there.
Okay, I’ll rush through some of the findings in relation to privacy. The taken-for-granted context of these companies is what they call the personal information economy. That’s a new type of economy based on personal data as the key source of income. Think about it: all that wealth basically comes from targeted advertisements based on everything known about the users. That’s what creates the wealth. That’s the personal information economy. That’s the taken-for-granted context, and it’s not something that’s questioned.
So when you pose questions about that, the answer will be, “Well, it’s a free service, right? Someone has to pay. The advertisers pay so that we can provide a free service to the users.” And up till now, alternative business models—for example, where users pay something, a monthly fee—haven’t really been part of the discourse. The default is a free service within the personal information economy. And that means that when you talk about privacy, they will list all these growing measures whereby users can control their privacy settings. And there are increasing means of controlling your privacy settings, but privacy control within this context basically means that you can control how you share information with other users. It’s what I call “front stage privacy control.”
So I can control, to some extent, which users see which information about me. But the back stage privacy control—the flow that goes on behind my back between the company and its affiliated partners—is not framed as a privacy issue. That’s the business model. So you have the business model, which is the back stage privacy handling, and then you have privacy as front stage user control, the way we can navigate our information among others like ourselves using the service. That’s really important to understand, because it means that privacy is not about limits on data collection, which is a key principle in European data protection.
Okay, I’ll finish up. I’ve just listed some of the key challenges. One is the business model, which I really think we need to question, to challenge, and to discuss with these partners.
The corporate-state nexus. I haven’t addressed it much today, but basically it’s the exchange of data between state powers and corporate powers, which we still know so very little about.
Then there’s the fact that all these major actors are US companies. And there is a sense, at least among the people I’ve spoken to, of “European privacy”—of Europeans being overly concerned with privacy in a way that’s a bit incomprehensible to most Americans, at least the ones I’ve spoken to. It’s just a very different conception of privacy. For many Europeans, privacy is something essentially linked to our identity and autonomy, and that’s quite different from the US perspective. I think we need to get that on the table and address it more openly, because with these global data flows, these underlying presumptions, these underlying zones of contestation, need to be addressed if we are ever to reach some kind of more global agreement on these issues.
Then we have the consenting users. Data protection in the European model is basically based on user consent, and practically all users consent as a premise for using these services. That also puts some limits on what we can demand afterwards in terms of data protection.
Then there’s the very state-centered approach to human rights that is found within these corporate entities. And finally, what I call the black box: the internal procedures, especially around content regulation, are treated almost as trade secrets, which means we can’t really get into a dialogue about them.
Okay, I think I’ll finish here. Thank you.
Further Reference
Session page at the re:publica 2016 site