Jillian C. York: Hello. So when we thought about this theme… We were both at re:publica in Dublin last year. We got to see the announcement of this theme of re:publica, and the first thought that we had was how can you love out loud in a time of ubiquitous surveillance? And so as we've created this talk we've set out to answer that question. I am going to say I'm not representing my organization today, I'm just speaking from my personal perspective. And I think yeah, let's take it away.

Matthew Stender: We have a lot of content we're going to cover quickly. If anybody wants to discuss things afterwards, we're more than happy to talk. We built kind of a long presentation so we're going to try to fly through, be as informative and deliberate as possible.

So one of the things that we wanted to discuss is the way in which facial recognition technologies are surrounding us more and more. It's a number of things. One, the retrofitting of old, existing systems like the CCTV systems in subways and other transportation systems. But also new and miniaturized camera systems that now feed into a backend of machine learning, neural networks, and new AI technologies.

York: There’s a com­bi­na­tion of data right now. Data that we’re hand­ing over vol­un­tar­i­ly to the social net­works that we take part in, to the dif­fer­ent struc­tures in which we par­tic­i­pate, be it med­ical records, any­thing across that spec­trum. And then there’s data, or cap­ture, that’s being tak­en from us with­out our con­sent.

Stender: So our actions captured on film paint a startlingly complete picture of our lives. To give you an example, in 2015 the New York metro system, the MTA, started installing around 1,000 new video cameras. And we think about what this looks like on a daily basis, the things that can be gleaned from this: transportation patterns, what stations you get on and off at. If you can be uniquely identified by something like an integrated metro CCTV system, it knows when you wake up, the stops that you go to, whether you're going to a new stop that you may not normally go to. So even in this one system, a startlingly clear picture starts to emerge of our lives from the passive actions that we take on a daily basis, actions that we're not even really aware are being surveilled, but they are.

York: And then when that data's combined with all of the other data, both that we're handing over and that's being captured about us, it all comes together to create this picture that's both distorted but also comprehensive in a way.

So I think that we have to ask who's creating this technology and who benefits from it. Who should have the right to collect and use information about our faces and our bodies? What are the mechanisms of control? We have government control on the one hand, capitalism on the other hand, and this murky grey zone between who's building the technology, who's capturing, and who's benefiting from it.

Stender: This is going to be one of the focuses of our talk today, kind of this interplay between government-driven technology and corporate-driven, capitalism-driven technology. And one of the interesting crossovers to bring up now is the poaching of talent from universities and other public research institutions into the private sector. Carnegie Mellon had around thirty of its top self-driving car engineers poached by Uber to start their AI department. And we're seeing this more and more, in which this knowledge capacity from the university and research field is being captured by the corporate sector. And so when new advances in technology happen, it's really for-profit companies that are at the tip of the spear now.

York: And one of those things that they do is they suck us in with all of the cool features. So just raise your hand real quick if you've ever participated in some sort of web site or app or meme that asked you to hand over your photograph and in exchange get some sort of insight into who you are. I know there's one going around right now where it kind of changes the gender appearance of a person. Has anyone participated in these? Okay, excellent. I have, too. I'm guilty.

Does anyone remember this one? So this was this fun little tool for endpoint users, for us to interact with this cool feature. And basically what happened was that Microsoft, about two years ago, unveiled this experiment in machine learning. It was a web site that could guess your age and your gender based on a photograph.

Matthew thought that this was kind of silly to include, like this isn't a very common example. But I was actually reminded of it on Facebook a couple days ago, when it said oh, two years ago today this is what you were doing. And what it was telling me was that my 25-year-old self looked like 40 years old.

So it wasn’t par­tic­u­lar­ly accu­rate tech­nol­o­gy but nev­er­the­less the cre­ators of this par­tic­u­lar demon­stra­tion had been hop­ing opti­misti­cal­ly” to lure in fifty or so peo­ple from the pub­lic to test their prod­uct. But instead with­in hours the site was almost strug­gling to stay online because it was so pop­u­lar, with thou­sands of vis­i­tors from around the world and par­tic­u­lar­ly Turkey for some rea­son.

There was another website called Face My Age that launched more recently. It doesn't just try to guess at age and gender; it also asks users to supply information like their age and their gender, but also other things like their marital status, their educational background, whether they're a smoker or not. And then to upload photos of their face with no makeup on, unsmiling, so that they can basically create a database that would help machines guess your age even better.

And so they say, okay well, smoking for example ages people's faces. So we need to have that data so that our machines can get better and better at learning this, and then better and better at guessing. And of course because this is presented as a fun experiment for people, they willingly upload their information without thinking about the ways in which that technology may or may not eventually be used.

So Matthew, I’m going to hand it over to you for this next exam­ple because it makes me so angry that I need to cry in a cor­ner for a minute.

Stender: So, FindFace. VKontakte, VK, is kind of the Russian Facebook clone. It's a large platform with hundreds of millions of users—

York: 410 million, I think.

Stender: What someone did was basically— How'd it go? The story goes, they got access to the photo API. So they had kind of the firehose API; they had access to all the photos on this rather large social media platform. Two engineers from Moscow/St. Petersburg wrote a facial recognition script that to date is basically one of the top facial recognition programs. And they were able to connect this to a frontend, an app used by users, to query the entire VK photo database and return results in a matter of seconds. So it gives you as a user the power to take a photo of somebody on the street, query it against the entire social media photo database, and get matches that are either a match of that person, or people who look similar to the person you're trying to identify.
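
To make the mechanics a bit more concrete, here is a minimal sketch of the general pipeline a service like this implies: embed every database photo once, then compare a street photo against those stored embeddings by distance. This is not NTechLab's actual code; it uses the open-source face_recognition library, and the directory name, file layout, and top-k ranking are assumptions made for illustration.

```python
# A minimal sketch of the general face-matching pipeline implied above:
# embed every database photo once, then match a query photo against those
# embeddings by distance. This is NOT NTechLab's code; "photo_db/" and the
# top-k ranking are illustrative assumptions.
import os
import face_recognition

def build_index(photo_dir="photo_db/"):
    """Precompute one 128-d face embedding per database photo."""
    index = {}
    for name in os.listdir(photo_dir):
        image = face_recognition.load_image_file(os.path.join(photo_dir, name))
        encodings = face_recognition.face_encodings(image)
        if encodings:                      # skip photos with no detectable face
            index[name] = encodings[0]
    return index

def query(index, query_photo, top_k=5):
    """Return the k database photos whose faces are closest to the query face."""
    image = face_recognition.load_image_file(query_photo)
    encodings = face_recognition.face_encodings(image)
    if not encodings:
        return []
    distances = face_recognition.face_distance(list(index.values()), encodings[0])
    ranked = sorted(zip(index.keys(), distances), key=lambda pair: pair[1])
    return ranked[:top_k]                  # smaller distance = more similar face

# matches = query(build_index(), "street_photo.jpg")
```

At the scale of a platform with hundreds of millions of photos, the linear scan above would presumably be replaced by an approximate nearest-neighbor index, which is what makes second-level response times plausible.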

York: So NTechLab, which builds this, won the MegaFace Benchmark (love that name), which is the world championship in face recognition organized by the University of Washington. Do you see the interplay already happening between academia and corporations? The challenge was to recognize the largest number of people in this database of more than a million photos. And with their accuracy rate of 73.3%, they beat more than a hundred competitors, including Google.

Now, here’s the thing about FindFace. It also as Matthew point­ed out looks for sim­i­lar” peo­ple. So this is one of FindFace’s founders (that is a mouth­ful of words), Alexander Kabakov. And he said that you could just upload a pho­to of a movie star you like, or your ex, and then find ten girls who look sim­i­lar to her and send the mes­sages. I’m…real­ly not okay with this.

And in fact I’ve got to go back a slide to tell you this oth­er sto­ry, which made me even more sad, which is that a cou­ple years ago an artist called Igor Tsvetkov high­light­ed how inva­sive this tech­nol­o­gy could be. He went through St. Petersburg and he pho­tographed ran­dom pas­sen­gers on the sub­way and then matched the pic­tures to the indi­vid­u­als VKontakte pages using FindFace. So, In the­o­ry,” he said, the ser­vice could be used by a ser­i­al killer or col­lec­tor try­ing to hunt down a debtor.”

Well, he was not wrong, because what happened was that after his experiment went live and got a lot of coverage in the media as an art project, another group launched a campaign to basically demonize pornography actors in Russia, by using this to identify them from their porn and then harass them. And so this has already been used in the kind of way that we're pointing out as a potential. This is already happening. Stalkertech.

But you know, I think one of the really interesting things that Matthew found is that as we were looking through these different companies that create facial analysis technology and emotional analysis technology, the way that they're branded and marketed is really interesting.

Stender: We can go through some of these examples, even just slide to slide, and see. These are some of the top facial recognition technology companies out there. And what's interesting is that we're not going to be talking so much about Google, about Facebook, about Amazon, although these companies are important. We're here highlighting one of the understated facts of the facial recognition world, which is that there are small companies popping up that are building incredibly powerful and sophisticated algorithms to find facial recognition matches, even in low quality, low light, re-rendering, and converting from 2D to 3D. Companies whose names we don't know. And we can sit back and kind of demonize the large technology companies, but there is a lot to be done to hold small companies accountable.

York: And I think you can see the familiar thread through all of these, which is what? Anyone want to wager a guess? Smiling happy faces. Usually beautiful women, smiling and happy, as we saw back on VKontakte's page.

So, rather than focusing on the bad guys, they're focused on this "Oh look! Everyone's really happy when we use these facial recognition technologies." But what a lot of these technologies are doing is, in my opinion, dangerous. So for example Kairos, which is a Miami-based facial recognition technology company: they also own an emotional analysis technology company that they acquired a couple of years ago and wrapped into their core services.

And their CEO has said that the impetus for that came from their customers. Some of their customers are banks. Specifically, when you go to the bank, maybe a bank teller could use facial recognition technology to identify you. And that would be a better way than you showing your ID or signing something or maybe even entering a PIN.

But sometimes you could have someone who comes in that day to rob the bank, and their face is kind of showing it. And so with that emotional analysis technology, the bank teller could have it indicated to them that today is a day that they will refuse you service.

But my immediate thought when I read that was, what about people who live with anxiety? What about people who are just in a hurry that day? So you could literally be shut out of your own money because some algorithm says that you're anxious or you're too emotional that day to be able to do that.

Another example that I found really troubling was this one, Cognitec, which is I think a Dutch company. Theirs does "gender detection." So this is used by casinos in Macau as well as in other places. I thought gender detection was a really funny concept, because as our society's become more enlightened about gender and the fact that gender is not always a binary thing, these technologies are basically using your facial features to place you in a gender.

And I’ve test­ed some of these things before online where it tries to guess your gen­der, and it often gets them wrong. But in this case it’s actu­al­ly where does our gen­der auton­o­my even fit into this? Do we have gen­der auton­o­my if these sys­tems are try­ing to place us as one thing or anoth­er?

Screenshot of an application scanning a child's face to diagnose Down Syndrome.

Stender: So one of the things that is really quite startling about this is the accumulation of data points. When we're talking about the way in which young people are having their faces scanned earlier, in which identification software using facial recognition technology is being applied, no one action may be held in a static kind of container anymore. We're now dealing with dynamic datasets that are continuously being built around us.

Algorithms learn by being fed certain images, often chosen by engineers, and the system builds a model of the world based on those images. If a system is trained on photos of people who are overwhelmingly white, it will have a harder time recognizing nonwhite faces.
Kate Crawford, "Artificial Intelligence's White Guy Problem," New York Times [presentation slide]

And one of the issues is that there is an asymmetry in the way we're seen by these different systems, as Kate Crawford has said. (She presented last year on stage one; some of y'all may have caught that.) A lot of her research has gone into the engineering side, looking at the ways in which discrimination and bias replicate inside of new technologies. And as facial recognition technologies are just one vector for these discriminatory algorithms, it's more and more important that we take a step back and say, well, what are the necessary requirements and protections and safeguards to make sure that we don't end up with things like this:

A grid of several photos with the objects in them mostly identified correctly, and one of a black couple labeled "gorillas."

With a Google algorithm saying that a black couple are gorillas. And there are a lot of other famous examples of this. But if engineering teams and companies are not thinking about this holistically from the inside, then not only may there be PR disasters like this, but the real-world implications of these technologies are very unsettling and quite scary.
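
One concrete safeguard that follows from this, offered here as a hedged sketch rather than any particular company's actual process, is to evaluate accuracy separately for each demographic group instead of reporting a single aggregate number, so that underperformance on underrepresented groups is visible before a system ships. The labels and group names below are hypothetical placeholders.

```python
# A minimal sketch of a disaggregated accuracy check: instead of one overall
# score, report accuracy per group so that systematic failures on
# underrepresented groups show up. All data here is hypothetical.
from collections import defaultdict

def accuracy_by_group(y_true, y_pred, groups):
    """Return {group: accuracy} for parallel lists of labels, predictions, groups."""
    hits, totals = defaultdict(int), defaultdict(int)
    for truth, pred, group in zip(y_true, y_pred, groups):
        totals[group] += 1
        hits[group] += int(truth == pred)
    return {g: hits[g] / totals[g] for g in totals}

# Example with made-up evaluation results:
y_true = ["person", "person", "person", "person"]
y_pred = ["person", "person", "gorilla", "person"]
groups = ["group_a", "group_a", "group_b", "group_b"]
print(accuracy_by_group(y_true, y_pred, groups))
# {'group_a': 1.0, 'group_b': 0.5} — the aggregate number alone would hide this gap.
```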

York: Now, Google was made aware of this and they apologized. And they said that there is "still clearly a lot of work to do with automatic image labeling, and we're looking at how we can prevent these types of mistakes from happening in the future." And they have done great work on this, but I still think that part of the problem is that these companies are not very diverse. And that's not what this talk is about, but I can't help but say it. It's an important facet of why these companies continue to make the same mistakes over and over again.

But let’s talk a lit­tle bit about what hap­pens when it’s not just a com­pa­ny mak­ing a mis­take in iden­ti­fy­ing peo­ple in an offen­sive way, but when the mis­take has real-world impli­ca­tions.

Stender: So, one of the things that is really quite startling and interesting— We were talking about some of the different algorithms for VK. But here is another example: scientists and technologists around the world are now training facial recognition algorithms on databases, in this case a group of innocent people and a group of "guilty" people, and using machine learning and neural networks to try to discern who is guilty out of the test data and who is [innocent].

York: Incidentally, their accuracy rate on this particular task was 89.5%.

Stender: So, 89.5%. I mean, for some things almost 90% is…not bad. But we're talking about a 10% rate of either false positives or of just errors. And if we're thinking about a criminal justice system in which one out of every ten people is sentenced incorrectly, we're talking about a whole tenth of the population which at some time in the future may not have access to due process because of automated sentencing guidelines and other things.
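
To make the scale of that concrete, here is a small back-of-the-envelope calculation. Only the 89.5% figure comes from the study being discussed; the number of people screened is a hypothetical example, and the point is simply how many people even a roughly 10% error rate touches when such a system is applied broadly.

```python
# Back-of-the-envelope: what a ~10% error rate means when a classifier is
# applied to a large population. Only the 89.5% accuracy figure comes from
# the study discussed above; the population size is a hypothetical example.
accuracy = 0.895
error_rate = 1.0 - accuracy          # roughly 0.105

people_screened = 1_000_000          # hypothetical number of people assessed
misclassified = people_screened * error_rate

print(f"Error rate: {error_rate:.1%}")
print(f"Out of {people_screened:,} people screened, "
      f"roughly {misclassified:,.0f} would be misclassified.")
```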

And it’s hap­pen­ing now. Nearly one half of American cit­i­zens have their face in a data­base that’s acces­si­ble by some lev­el of law enforce­ment. And that’s mas­sive. That means that one out of every two adults in the US, their face has been tak­en from them. That their like­ness now resides in a data­base which is able to be used for crim­i­nal jus­tice inves­ti­ga­tions and oth­er things. And so you may not even know that your face is in one of a num­ber of dif­fer­ent data­bas­es, and yet on a dai­ly basis these data­bas­es maybe crawled to look for new match­es for guilty peo­ple or sus­pects. But we’re not aware of this a lot of times.

York: And it’s not just our faces, it’s also oth­er iden­ti­fy­ing mark­ers about us. It’s our tat­toos, which I’m cov­ered with which and which now I know— I didn’t know when I got them, but now I know that that’s a way that I can be iden­ti­fied by police so I’m going to have to come up with some sort of thing that cov­ers them with infrared—I don’t even know.

It’s also our gait. It’s the way that we walk. And one of the real­ly scary things about this is that while facial recog­ni­tion usu­al­ly requires high-quality images, gait recog­ni­tion does not. If you’re walk­ing on the sub­way plat­form and the CCTV cam­era picks you up—and you know that the U-Bahn sta­tions are cov­ered with them—if you’re walk­ing that way, a low-bandwidth image is enough to rec­og­nize you by your gait. Yesterday, my boss from across the room rec­og­nized me by my gait. Even our eyes can do this. So it’s pos­si­ble.

The machines have eyes, and in some ways minds. And as more of these systems become automated, we believe that humans will be increasingly placed outside of the loop.

Stender: So just another example, going back to New York, to show how these things really don't just… We're now in this interconnected world. I don't know if you all are familiar with stop-and-frisk. It was an unpopular law in New York City which allowed police officers to essentially go up to anyone they might be suspicious about and ask them for ID and pat them down.

What happened, why this was eventually pulled by the police department: there were some challenges in court saying it was unconstitutional, and the policy was rescinded before it went to court. But there are now records, from the time it was in place, of the people who were charged under this policy. It was found to be very discriminatory, in the sense that young men of color were disproportionately targeted by this program.

So if we’re look­ing at crime sta­tis­tics, let’s just say a read­out of the num­ber of arrests in New York City. And if we were to put demo­graph­ic data with that and then feed this into a machine learn­ing algo­rithm, if the machine learn­ing algo­rithms sees that a large per­cent­age of indi­vid­u­als are young black and brown men, what is the machine learn­ing algo­rithm to think except that these indi­vid­u­als have a high­er like­li­hood of com­mit­ting crimes?

In the real world, it was real-world bias by police officers who were targeting minority communities. But a machine algorithm…if we're not weighting this sort of information in the test and training data, there's no way for a machine to logically or intuitively see a causal relationship between segregation and bias in the real world and the crime statistics that result from it.
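
As a hedged, toy illustration of that point (entirely synthetic data, not real New York records), the sketch below trains a classifier on arrest labels produced by biased enforcement. Both groups offend at the same rate by construction, but because one group is stopped far more often, the model learns to score that group as much "riskier."

```python
# A toy illustration (entirely synthetic data) of the point above: if arrest
# records reflect biased policing rather than underlying behavior, a model
# trained on those records simply learns the bias back. Group membership is
# the only feature, to keep the example minimal.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Two groups with IDENTICAL underlying offense rates (5%)...
group = rng.integers(0, 2, size=n)            # 0 = group A, 1 = group B
offense = rng.random(n) < 0.05

# ...but group B is stopped and recorded far more often, so its offenses are
# much more likely to end up as arrests in the data.
stop_rate = np.where(group == 1, 0.60, 0.10)  # biased enforcement
arrested = offense & (rng.random(n) < stop_rate)

# Train on the arrest records, using group membership as a feature.
model = LogisticRegression().fit(group.reshape(-1, 1), arrested)

print("Predicted arrest probability, group A:", model.predict_proba([[0]])[0, 1])
print("Predicted arrest probability, group B:", model.predict_proba([[1]])[0, 1])
# The model scores group B several times "riskier", even though the underlying
# offense rate was identical by construction.
```

Nothing in the training signal tells the model that the difference comes from enforcement rather than behavior, which is exactly the gap being described here.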

York: So with that in mind, we want to talk a little bit about how we can love out loud in a future where ubiquitous capture is even bigger than it is now. So Matthew, tell me a little bit about this example from Sesame Credit. Has anyone heard of Sesame Credit? Okay, so we've got a few people who are familiar with it.

Stender: So Sesame Credit is a system that's now been implemented in China—there's an ongoing rollout. It's a social credit rating, essentially. It uses a number of different factors. It's being pioneered by Ant Financial, which is a large financial institution in China. It's a company, not technically a state-owned enterprise, but things get fuzzy when it comes to the ruling Chinese Communist Party.

The interesting thing about this new system is that it'll look into things like your WeChat account and see who you are talking with. And if people are discussing sensitive things, your social credit rating can be docked.

So now we’re see­ing this sys­tem devel­oped right now in China that brings in ele­ments from your social life, your net­work con­nec­tions, as well as things like your cred­it his­to­ry, to paint a pic­ture of how good of a cit­i­zen are you. Alipay is now launch­ing the US. I think it was announced today. And so they’re now pio­neer­ing some very sophis­ti­cat­ed tech­nol­o­gy like iris scans. And the con­tact­less mar­ket in China has explod­ed from a few hun­dred mil­lions of dol­lars now into the hun­dreds of bil­lions of dol­lars. And so this tech­nol­o­gy in the last even twenty-four months has become much more preva­lent and has become much more ubiq­ui­tous for its capac­i­ty for indi­vid­ual sur­veil­lance.

York: And this is where life begins to resemble an episode of Black Mirror. So what we'd like to remind you is that digital images aren't static. With each new development, each sweep of an algorithm, each time you put something there, you've left it there. I know that I've got hundreds, possibly thousands of images sitting on Flickr. With each new sweep of an algorithm, these images are being reassessed. They're being reconsidered and re-analyzed to match against other data that this company, or X Company, or a government might have on you. What you share today may mean something else tomorrow.

So right now we feel that there's no universal reasonable expectation that exists between ourselves and our technology. The consequence of data aggregation is that increased capture of our personal information results in this more robust, yet distorted, picture of who we are that we mentioned at the beginning.

And so I think that we’ll take the last few min­utes, and we’ll try to leave a few min­utes for ques­tions, just to talk about that emerg­ing social con­tract that we would like to see exist, we would like to see forged between us and tech­nol­o­gy com­pa­nies and gov­ern­ments.

We can’t see behind the cur­tain. We have no way of know­ing how the col­lec­tion of our visu­al imagery is even being used, aggre­gat­ed, or repur­posed. And we want to remind you also that these tech­nolo­gies are mech­a­nisms of con­trol. And so the first ques­tion that I want to ask, per­son­al­ly, is what kind of world do we want? I think that’s the start­ing point, is ask­ing do we want a world where our faces are cap­tured all the time? Where I can walk down this hall­way and have dif­fer­ent cam­eras that are attached to dif­fer­ent com­pa­nies that have dif­fer­ent meth­ods and modes of analy­sis look­ing at me and try­ing to decide who I am and mak­ing deter­mi­na­tions about me.

But perhaps we're past that point, and so we've decided to be a little bit pragmatic and try to formulate some things that we can do. So in terms of what we want: we want an active life that's free from passive surveillance. We want more control over our choices and over the images that we share. And we want a technology market that isn't based on selling us out to the highest bidder. Luckily there are some people working on all of these things right now, not just us, and so we feel really supported in these choices. And we'll turn to you for regulation.

Stender: I’ve had the oppor­tu­ni­ty to sit in on a cou­ple of smart cities ses­sions yes­ter­day, talk­ing between devel­op­ment of smart cities in Barcelona and Berlin, as well as smart devices yes­ter­day. We are I think see­ing to some degree a devel­op­ment of a glob­al set of best prac­tices, but very piece­meal and frag­ment­ed. And I think that if we think about image recog­ni­tion tech­nol­o­gy as kind of a lay­er that fits on the many of the oth­er modes of tech­no­log­i­cal devel­op­ment, that it becomes clear that actu­al­ly we need to have some sort of best prac­tices as bio­met­ric data­bas­es con­tin­ue to be aggre­gat­ed.

It’s very dif­fi­cult for one per­son in one coun­try and a dif­fer­ent per­son in a dif­fer­ent coun­try to have rea­son­able expec­ta­tions of what the best prac­tices are going for­ward. I mean, maybe today it’s pos­si­ble but twen­ty years from now what’s the world look like that we want to live in?

York: So these are some of the areas where we think governments, and particularly local governments, can intervene. We also think that we can have better privacy standards for technology. And we know that there are a lot of privacy advocates here and that those things are already being worked on, too. So we want to acknowledge the great work of all the organizations that are fighting for this. But one way we see to do this is to segment privacy by feature: location, visual representation, search, social, and movement, all of these different areas in which our privacy is being violated.

We also think user-centric privacy controls… Right now, most privacy controls on the different platforms that you use are not really user-friendly. And trust me, I spend a lot of time on these platforms.

And then another thing, too, that's really important to me (the rest of my life I work on censorship) is forward consent. I don't feel that I am consenting to these fifteen-page terms and conditions documents that companies try to make as confusing for me as possible. And so I think that if companies keep forward consent in mind every time that you use their service, that's one way that they can manage this problem.

But also, we think that you have to keep loving out loud. That you can't hide. That you can't live in fear. Yes, these systems are out there, and of course we have to take precautions. I talk a lot in my day job about digital security. And I think that this is that same area, where we have to continue living the way that we want to live. We can take precautions, but we can't sacrifice our lives out of fear.

And so one thing is photo awareness. We're really glad to see that at a lot of conferences recently there've been ways of identifying yourself if you don't want to be photographed. But also in clubs in Berlin, if anyone's ever gone clubbing (and if you haven't you probably should), usually they put a sticker over your camera when you walk in the front door. And that to me… The first time I saw that I was so elated that I think I danced til 7AM. I mean, that's not normal.

We also think, you know, regulate your own spaces. I don't want to get into this division of public/private property; that's not what I'm here for. But I think that we have our own spaces, and we decide inside of those what's acceptable and what isn't. And that includes conferences like this. I don't really know what's going on with cameras here, if they exist or not. But if we feel in these spaces that that's something we want to take control of, then we should band together and do that.

Put static in the system? [Gestures to Stender.]

Stender: This is a general point, but in any system that is being focused on, how can we find ways to put static in the system? Whether this is tagging a photo that's not you with your name on Facebook, or whether that's… Yeah, these sorts of strategies, right. So it's ways in which we can think a little more outside the box to actually confuse the algorithms or to make their jobs a little more difficult. And there are different ways to do this, whether it's wearing reflective, anti-paparazzi clothing in public, or tagging things under one label that may not be that.

York: It also means, you know, wearing those flash-proof garments, covering your face, going to buy your burner phone in the store and wearing a Halloween costume. I'm not saying you should do that [whispering] but you should totally do that.

And continuing to love out loud and not live in fear. I can see that we've completely run out of time, because I think the schedule's a little bit behind. But there's some good news. If you want to keep talking to us about this, we're both pretty easily accessible. But we're also going to take advantage of that sunlight that didn't exist yesterday and go out back for a celebratory beer. So if you want to keep talking about this subject you're welcome to join us out there. Thank you so much.


