Good morning, everyone. Nice to be here. So, what I want to talk about today is Internet giants and human rights. It's a research project that I'm currently working on where I'm looking at what it means for human rights protection that we have large corporate interests—the Googles, the Facebooks of our time—that control and govern a large part of the online infrastructure. I want to go through four main themes, hopefully in twenty minutes max so we have at least five or ten minutes for discussion and questions.

First I'll say a few words about the power of these companies, the role that they play. Then I'll talk a bit about the challenge, from a human rights law perspective, of holding them accountable under human rights law. Then I'll tell you some of my findings from my current research, based on empirical studies of Google and Facebook where I'm looking at the sensemaking within the companies. How do they themselves see their role vis-à-vis human rights? And how is this sensemaking translated into their policies, their specific product features, their governance structures? And then a few words at the end about challenges and ways forward. And maybe I would also love to hear some of your comments on that last point.

The vast majority of us will increasingly find ourselves living, working, and being governed in two worlds at once.
Eric Schmidt & Jared Cohen, The New Digital Age, 2013 [presentation slide]

So if we start with the powers that these actors have, this is a quote from two Google executives. "The vast majority of us will increasingly find ourselves living, working, and being governed in two worlds at once." I think that's a pretty strong quote. What Eric Schmidt of Google and Jared Cohen, the head of Google Ideas in New York, are basically saying here is that in the future we'll basically be governed by two parties. I mean, coming from being a former public diplomat a long time ago and now in the human rights field, this is relatively provocative to me, while at the same time I also understand why he's saying it. And the more I talk to these companies, the more I get a sense of how they see these issues. So this was to give you one appetizer.

Although digital infrastructure frees speakers from dependence on older media gatekeepers, it does so through the creation of new intermediaries that offer both states and private parties new opportunities for control and surveillance.
Jack M. Balkin, "Old-School/New-School Speech Regulation", 2014 [presentation slide]

Then if we go into academia we have Jack Balkin, an American legal scholar, who's stating here that he thinks it's important that we remember— We've had such a strong narrative of freedom related to the Internet. The freedom, the way that it has liberated us from classical gatekeepers. And that has been the narrative for a long time. And increasingly we are recognizing that we are subjected to structures of control, just by different parties and different means. And one of these very strong structures of control is the private platforms where we basically conduct a lot of our public life.

And the thing about these new platforms, these new infrastructures of control— One of the things I think is interesting is that it's the same infrastructure that gives us the freedom that we so much cherish to express ourselves, to search for information, to find like-minded people, to counter governments, etc. That structure is exactly the same structure that also provides new means of basically mapping us, surveilling us, retaining an unprecedented amount of data about us. So there are really very few means of opting out. We are in structures that are liberating but at the same time also entail new means of control.

As information becomes crucial to every aspect of everyday life, control over information (or lack thereof) may affect our ability to participate in modern life as independent, autonomous human beings.
Niva Elkin-Koren, 2012 [presentation slide]

One last quote, also from a legal scholar, Niva Elkin-Koren, who basically states here that it's important to remember that information has become crucial to every aspect of our life—I guess it always has been. But the conditions under which we process and deal with information have changed, and the control over those structures may influence the way we are able to participate in modern life.

To put it a bit differently, if a lot of decisions about us are being taken within structures where we don't have access, where we're not able to see the points that make up decisions that affect our lives, then it's basically a different kind of democratic society or open society from what we're used to.

Now, if we turn to Google and Facebook, for the sake of simplicity. It could be a number of other Internet giants, but I'm focusing on those two, and they are very powerful. In terms of economic power, in February this year Google was the most highly-valued company in the world. It has lost that position again now to Apple, but it's just to say that it's up there among the three most-valued companies in the world. I think the value at the moment is around 550 billion US dollars. That's a company that was only founded in '98.

Facebook in comparison is from 2004. They've only been on the stock exchange for four years. And Mark Zuckerberg is now the sixth-richest person in the world. So we are talking about an immense amount of wealth. I went to visit both of their headquarters in December last year, and I think it was only when I was there in those physical surroundings that I really grasped just how rich they are, just how many people they employ, and more importantly that all this money is basically generated from advertising. I mean, it's generated from providing free services. That's really amazing to think about, and still mind-boggling.

In terms of political power, Google now has the strongest lobbying presence of all companies in Washington DC. They spend twenty million US dollars a year on lobbying alone in the US and Europe. This is not meant as strong criticism of Google; that's not my main message here. My main message is to recognize, or to have us all recognize, that there is huge money involved and that there is a huge link to political power, otherwise they wouldn't maintain such a strong lobbying presence in major capitals.

Also, if we look at the flow of executives between these tech spaces and government, people are basically moving back and forth between the US State Department, state departments in Europe, and these companies. So also at staff level, there's really a great flow. There's a close link between the political power [?] and these companies.

In terms of social power, they have huge social power because they have so many users. Basically, the vast majority of us are using their services every day. And by and large that user base is pretty uncritical. We don't have a huge user/consumer movement or anything like that. We have campaigns here and there, but generally, and especially when you move outside countries like Germany or France that are a bit more critical than the average, that's not the narrative. When you're in the US, there's not a very critical narrative, and the same goes for many other parts of the world.

Finally, I put down technical power, because when you have so much wealth and so much of that wealth goes into engineering, into artificial intelligence, into robotics, into algorithm development, etc., of course they also have a huge say in how the future of tech development looks and how it's put to use.

So this was to give you a picture that these are not just some companies. They really have huge powers and huge influence. And then normally we would think that with great power comes great responsibility. But the tricky thing here is that the human rights treaties that were set out after the Second World War to basically protect citizens from abuse of power are all formulated with the state in mind. They were formulated, they were drafted, they were subscribed to at a time when we were imagining power abuse, or potential power abuse, as abuse by the state. Private companies are not bound by human rights law. They might have taken up human rights a lot in their internal discourse. They are part of a lot of voluntary initiatives. And they do good things with regard to human rights, also. But they are not bound by human rights law. You cannot put a private company before a human rights court. It's all in the voluntary zone between the legal standards and then corporate social responsibility, which is a more normative baseline.

The strongest standard-setting document that we have in the field is something called the UN Guiding Principles on Business and Human Rights, drafted by a Harvard professor, John Ruggie, in 2011. That's the main standard-setting document. It's been widely praised and adopted across the field. And it speaks to the corporate responsibility to respect human rights and makes the point that all companies should take proactive measures to basically mitigate any negative impact they may have on human rights. So they should basically assess all their business conduct and see whether anything they're doing—in their processes, their products, the way they treat their staff, the way they work in a local community, etc.—may have a negative human rights impact, and if so try to mitigate that impact. That's basically the core message with regard to companies. But it's not binding. It's a recommendation. It's widely praised, but it's still a recommendation.

And then I've also listed probably the most relevant industry initiative related to the tech companies, the Global Network Initiative, which was founded by the Berkman Center for Internet and Society. A few tech companies are members—in the beginning there were only three or four; I think they are eight now, but all the main ones are in there. And they've also set out a number of baselines and recommendations with regard to how they should ensure that their practices are human rights compliant. However, as I will come back to, there are real limitations to the way that they think about and implement human rights within the Global Network Initiative.

Now, moving on to some of the empirical work I've done. When I started this research I had a promise from the two companies that I would get access to talk to key policy people within them. However, it has proved quite difficult to get access. It's been a challenge that could deserve a talk of its own. I won't go into that here, but in the end I've managed to do around twenty interviews, more or less fifty/fifty between Google and Facebook; a bit more on the Google side. I've also analyzed around twenty talks in the public domain. That's the good thing about our age, that you can actually find a lot of the corporate executives and other staff talking about these issues at places such as this. And then afterwards you can basically listen to it. And often they are actually more frank in panel discussions and things like that than when you have them one-on-one. So that has also been very useful. And finally I've attended various policy events around these spaces and been able to carry out conversations there.

And as I mentioned initially, my idea has been to understand, to get a bit away from the naming-and-shaming discourse. To try to get within, to try to understand almost from an ethnographic perspective how they understand and work with human rights. What is their sensemaking around these issues? Why is there such a strong disconnect between the way we in my privacy community, my human rights community, and a lot of other communities that I know of think about these corporate actors and human rights, and the way they think about themselves? What's going on, what's the beef? And how does that understanding then influence the way they work?

So, since we don't have that much time I will go straight to some of the main conclusions. First of all, there is a strong presumption within these companies of doing good. And that actually makes a critical discourse a bit difficult, because they have a strong belief that they are basically liberators. They are very much anchored in the narrative of good-doers. And this is not to say that the people at Google and Facebook are not doing good. They also have great potential in many respects. But whereas with other, older and more established companies there's a different, you could say more mature, recognition that as a company there are various aspects of how you operate in your communities that might be problematic, it seems that within this sector it's really difficult to have that critical discourse. The presumption of being good-doers is so dominant.

Also, there's a strong sense of being transformative, of really being at the forefront with the use of technology and all the time pushing the limits of what technology can do. And that means, for example, that if you raise privacy-critical issues, for example in relation to some of Facebook's practices, one response you will often encounter is: well, we need to push the use of the technology all the time; that's our role. There's always this sort of reluctance toward new practices, new changes. But gradually this whole practice of using technology, of using social networks, is evolving. And we are part of that, and our role is to push the user all the time.

So, a sense of being at the forefront, of being very transformative. Yet when it comes to human rights, there's actually a very conservative approach. And by that I mean that there is a sense that human rights threats mainly stem from governments. Human rights threats are something that we like to talk about in relation to governments in countries that we don't approve of. The easy cases, so to speak: China, Cuba, North Korea, etc. There are many of these countries, and they can very rightly be criticized. But it's just too simplistic to say that human rights problems and challenges only occur in these places. And especially when we talk about companies that have such a strong impact on users' human rights, it's important to have a recognition of the role they may play, their own negative impact. And that recognition is not really there. It's purely about governments. It's about pushing back, holding back, against repressive government behavior.

So in other words, the Ruggie guidelines that I spoke about earlier, the UN Guiding Principles on Business and Human Rights that speak to the need to assess all your business practices from that perspective, are being translated into something that looks at business practices only in relation to government requests. So there would be a due diligence procedure if a government requests the company to shut down a service. But when they take decisions in relation to, for example, their terms of service enforcement or their community standards, there wouldn't be the same type of assessment. That wouldn't be perceived as a human rights issue, as a freedom of expression issue.

So let's zoom in a bit on some of the findings in relation to freedom of expression and privacy, which I've focused on mostly because they are the two human rights that I think most urgently need to be addressed. There are certainly other human rights that would be relevant, but these have been my focus. So: a strong free speech identity in both companies. I mean, they're born out of the US West Coast, not surprisingly. They think highly of free speech and they see themselves as true free speech liberators playing a crucial role in that regard. They take strong pride in pushing back against government requests, and also in issuing transparency reports where you can see how many times they have accepted or accommodated a government request and under which conditions.

At the same time, there's the enforcement of their own community standards. I've called it community standards here; the name varies depending on the service. On Facebook it would be community standards, on YouTube it would be community guidelines, on Google search it's a more narrow regime, so there are variations. But for simplicity here, I speak about community standards as the kind of terms-of-service enforcement that the platforms do. The volume of content removed here is many, many times bigger than via government requests.

Facebook told me recently that they have, I think it was, one million items flagged each day, each day, by users who think that Facebook should look at a specific piece of content and potentially remove it. Yet the processes whereby these decisions are made, whereby their staff or outsourced staff look at the requests, the decisions they make, the criteria for making them, how much content is removed for which reasons, which content is not removed: all that takes place in a complete black box, seen from an outside perspective. It's simply not possible to get access to that data.

So you have a huge amount of content being regulated in processes that are completely closed to the outside. And more importantly, they are not seen as freedom of expression issues. They are seen as a private company basically enforcing its rules of engagement. And from a strictly legal perspective, rightly so. Because very strictly legally speaking, freedom of expression is about government restrictions on content on the Internet. And even though I think most people, also human rights lawyers, would agree that how much content a major platform removes of course has a freedom of expression implication, you cannot bring it before a court as a freedom of expression issue unless you could really prove that there were no alternative means of expressing that content.

I'll have to run a bit with the time, I see. So: very high volume. A mix of norms. By that I mean that the norms that decide which content is removed and which is not are a mix of legal standards and more "stuff that we don't want" for other reasons. Not because it's illegal but because it's found inappropriate or unwanted or goes against the community norms. And it's based on what they call a "neighborwatch program," which basically means that we as users are the ones flagging the content that we find problematic, and then the service on the other side makes decisions on what to do with that content. From a freedom of expression perspective, that's also pretty problematic, because freedom of expression is precisely meant to protect those expressions that the community might not like but that nevertheless deserve to be there.

Okay, I'll rush through some of the findings in relation to privacy. So, the taken-for-granted context of these companies is what they call the personal information economy. That's a new type of economy that's basically based on personal data as the key source of income. I mean, think about it: all that wealth basically comes from targeted advertisements based on all of the things known about the users. That's what creates the wealth. That's the personal information economy. That's the taken-for-granted context. That's not something that's questioned.

And that basically means— So, when you pose questions about that, the answer will be, "Well, it's a free service, right? Someone has to pay. So, the advertisers pay so that we can provide a free service to the users." And up till now, alternative business models, for example where users paid something, a monthly rate or something, haven't really been part of the discourse. The pre-setting is a free service and the personal information economy. And that means that when you talk about privacy, they will list all these increasing measures whereby users can control their privacy settings. And there are increasing means of controlling your privacy settings, but privacy control within this context basically means that you can control how you share information with other users. It's what I call "front stage privacy control."

So, I can control to some extent which users see which information about me, but the back stage privacy control, the flow that goes on behind my back between the company and other affiliated partners of the company, that's not framed as a privacy issue. That's the business model. So you have the business model, which is the back stage privacy handling, and then you have privacy as front stage user control, the way that we can navigate our information among others like ourselves using the service. That's really important to understand, because it basically means that privacy is not about limits on data collection, which is a key principle in European data protection.

Okay, I'll finish up. I've just listed some of the key challenges. One, the business model, which I really think we need to question and to challenge and to discuss with these partners.

The corporate-state nexus. I haven't addressed that very much today, but basically it's the interchange of data between state powers and corporate powers, which we still know so very little of.

Then there is— I mean, all these major actors are US companies. And there is a sense, at least from the people I've spoken to, of "European privacy," of Europeans being overly concerned with privacy in a way that's a bit incomprehensible to most Americans, at least the ones I've spoken to. Because it's just a very different conception of privacy. For many Europeans, privacy is something that's essentially linked to our identity and autonomy. It's quite different from a US perspective, and I also think we need to get that up on the table and speak to it more openly and address it more openly. Because with these global data flows, these underlying presumptions, these underlying zones of contestation need to be addressed if we are ever to get some kind of more global agreement on these issues.

Then we have the consenting users. Data protection is basically based on user consent in the European model. And practically all users consent as a premise for using these services. That also puts some limits on what we can then demand afterwards in terms of data protection.

Then there's the very state-centered approach to human rights found within these corporate entities. And finally, what I call the black box. The black box of internal procedures, especially around content regulation, that is almost treated as trade secrets, means that we can't really get into a dialogue on that.

Okay, I think I'll finish here. Thank you.

Further Reference

Session page at the re:publica 2016 site