Let me start with an assumption that comes out of the paper, that's available on the web site if you care to look at it: that one of the things that brings us here is that we're watching algorithms move outside of the theoretical realm. So outside of the computer science questions about how they're built and how they work, and being deployed inside important moments in society.

What I like to think about is this question of how they are being installed as functioning parts of our public knowledge system. The ways that they're being presented as efficient, reliable, authoritative mechanisms for producing and delivering knowledge. And I think this is right in line with the point that Joan gave us yesterday, that we are interested in part because Google has pointed to algorithms. We saw examples of that. This is what's going to assure that the information you're getting is reliable. This is what's going to assure that the information is relevant. I hope it's not just fear of algorithms that's driving us, although maybe that's a part of it.

But it's an interesting question. Is that Google making a sort of empty gesture? Is that a deflection of responsibility? Is that the deception that is in fact part of the algorithm work that Robert was asking about? Or is that something more? Is something being installed and offered? If not true yet, something that is being positioned as true, as a reliable form?

So the aim of the paper is that we might see algorithms not just as codes with consequences, but as the latest socially-constructed and institutionally-managed mechanism for assuring public acumen, a new knowledge logic. And this, I would say, draws our attention to the process by which that happens, which is not exactly the same as how algorithms work, although it's not unrelated, either. So what I'm suggesting is we're not just looking at the production of algorithms but the production of the algorithmic as a kind of social justification, as a kind of legitimation mechanism.

And this requires asking how these tools are called into being by, enlisted as part of, and negotiated around collective efforts to know and be known. So let me see if I can draw some attention to that. When this is working, it depends on the kind of authority that an algorithm produces, a kind of lent authority of technical and calculational reassurance. And what I like to do is look at some of the frayed edges where the social role of algorithms as tools of knowledge is still unsettled.

So let me start with this example. In 2011 there was a bit of an uproar because Siri had just been introduced as this sort of voice-activated search mechanism for the iPhone, and people noticed that depending on where you asked questions, it seemed to be strangely unresponsive or downright coy about questions of abortion. And this is just one example. People did this in different cities. There was one story about a woman standing outside of Planned Parenthood asking, "Where can I find an abortion?" and it said [shrugs], "Hm. I dunno."

This is a really interesting question. Apple had to field this early on in its construction of Siri as a reliable information asset. And if we treat this as an algorithmic exercise, there's a pretty reasonable explanation for why these answers were unsatisfactory to the people who were concerned about it.

It might be easy enough to say, well, Siri is querying search engines. It's looking at Yelp and it's looking at Bing Answers and it's looking at other search queries. And so when you say something like, "Where can I get an abortion?" it's parsing that and saying, okay, "where" means it's looking for a location; "abortion" is the subject topic. I'll put some version of that into a search mechanism and I'll see what comes back. And if we think about the kind of material that's on the web and how it's organized, we might say, "Well, a site like Planned Parenthood might not have 'abortion' as its key information term. People who link to it might not be using the anchor 'abortion.'"
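The parsing step described here can be sketched in a few lines. This is purely illustrative: the function names, the prefix rules, and the toy local-business index are my own invention, not how Siri actually works.

```python
def parse_query(query: str) -> dict:
    """Split a spoken request into an intent and a search topic."""
    q = query.lower().rstrip("?").strip()
    for prefix in ("where can i find", "where can i get"):
        if q.startswith(prefix):
            # Location-seeking intent: what follows the phrase is the topic.
            topic = q.removeprefix(prefix).strip()
            for article in ("a ", "an "):
                if topic.startswith(article):
                    topic = topic.removeprefix(article)
            return {"intent": "find_nearby", "topic": topic}
    return {"intent": "web_search", "topic": q}

def answer(query: str, local_index: dict) -> str:
    """Look the topic up in a toy local-business index, the way a backend
    like Yelp might be queried. If no listing advertises the topic as a
    keyword, the assistant has nothing to return."""
    parsed = parse_query(query)
    matches = local_index.get(parsed["topic"], [])
    return matches[0] if matches else "Hm. I dunno."
```

On this sketch, "Where can I find a pharmacy?" succeeds only if some listing is indexed under "pharmacy"; a clinic that never uses the word "abortion" as a keyword simply never matches, which is the benign explanation being entertained here.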

But pro-life activist groups may very well do that. In fact, the head of NARAL Pro-Choice America said that the kinds of crisis pregnancy centers that are offering services but not abortion (quite deliberately not abortion) outnumber services that provide abortion. They try to game the system, making sure that the yellowpages.com search engines will point to their sites rather than something that would provide abortion services. This is a deliberate mechanism, and search engines are not yet savvy to this, so it's very hard to parse that.

So Apple could've said, "Look, this is based on an algorithmic assessment of Web information. The query you made called up certain kinds of resources. When you asked for Viagra, we were able to find drug stores. We could put that together. But abortion played into this strange mixture of what is and is not searchable."

So maybe this is just a question of naïveté on the part of the people who were asking the question. Maybe we could call for algorithmic literacy. We could say, "People should understand that when they say, 'Where's an abortion?' to Siri they're going to get certain kinds of answers. If they find those politically troubling, that's not Apple being pro-life, it's an artifact of the way search works."

And this is not unlike the example that was brought up yesterday by Claudia [Perlich]. The famous Target example. Target predicting whether their customers are pregnant, trying to send them coupons, and the father who got the coupons got all upset. We might think about it and say, "What's weird is not that Target is trying to make a bet about how probable it is that because you bought certain kinds of things you match a pattern of other kinds of people who might've been pregnant, and we send you a couple coupons. The weirdness is that the dad freaked out." The dad got coupons from Target for baby carriages, and took that not as "Target has a probabilistic bet that someone in the household might or might not be pregnant and it's worth it to them to send some coupons," but he took it as an assertion. He took it as a claim. "Someone in your household is pregnant." And to the best of his knowledge that wasn't true. It turns out he was wrong. And so what might be strange is that the people involved are making misapprehensions of what algorithms offer.
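The "probabilistic bet" framing can be made concrete with a toy scoring function. The items, weights, and threshold below are invented for illustration; Target's actual model has never been published.

```python
# Hypothetical weights: how strongly each purchase pattern correlates
# with pregnancy in the (imagined) historical data.
WEIGHTS = {"unscented lotion": 0.3, "prenatal vitamins": 0.9,
           "large tote bag": 0.2, "cotton balls": 0.1}

def pregnancy_score(basket: list[str]) -> float:
    """Sum the weights of pattern-matching purchases, capped at 1.0."""
    return min(1.0, sum(WEIGHTS.get(item, 0.0) for item in basket))

def should_send_coupons(basket: list[str], threshold: float = 0.5) -> bool:
    # The retailer mails coupons whenever the probability-like score
    # clears a threshold -- a bet that pays off on average, even though
    # the implied claim may be wrong for any one household.
    return pregnancy_score(basket) >= threshold
```

The point of the sketch is the mismatch in the anecdote: the system outputs a threshold decision over a score, while the recipient reads the coupon as a flat assertion about his household.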

Now, should that explanation be sufficient? Is that enough? I would say even for me, it feels insufficient for this. We could look at this and say, "This is a kind of naïveté about how algorithms work." We could also look at it and say, "This is an articulation that we want more from our algorithms than can be provided algorithmically." That when it comes to abortion, when it comes to these politically divisive issues, a purely algorithmic solution is not going to be enough.

And it's a call for that. Rather than a naïveté, it's saying, "This was insufficient, and we call upon Apple to be better about this." Now, it gets conflicted. The people who believe one side of this are calling for one change. People who believe the other side are calling for a different change. This doesn't solve the problem, but it recognizes that there are complexities in what we expect from a knowledge regimen. And the ability to gesture at algorithms and say, "This was a provided piece of knowledge that was algorithmically based," is fine up to a point. But we find these edges where that becomes insufficient. And there is an outburst, a reaction. Maybe not fully-articulated, maybe unclear, but a reaction that says, "You've reached a point that is insufficient." And I think that's what was going on here.

So how would we begin to look at the production of the algorithmic? Not the production of algorithms, but the production of the algorithmic as a justifiable, legitimate mechanism for knowledge production. Where is that being established and how do we examine it?

We could look at algorithms in practice and ask about the implications of the results they offer, the conclusions they draw; that's one way. We could look empirically at what people think of them when they rely on them. Do they treat them as perfectly unproblematic information sources? Do they question them, are they skeptical? We could look at controversies and think about when the claim to have been providing information algorithmically turned into a problem.

I want to suggest that looking at how sites regulate inappropriate content, when they run into questions about censorship, when they run into the kinds of information that people don't want to see, provides a really interesting lens. This being sort of one of them. Here it was, "This is information that you're not showing me that I would like to see." But those edges where we begin to hold platforms responsible for the information they provide, especially around the kinds of traditionally hot-button issues around sex and violence, pornography, politics, suicide. All the kinds of things that we find ourselves troubled by in the information regimen that could be offered.

Looking at the question of how information is curated, and the role algorithms play in this, offers a really interesting lens, I think. Fundamentally, it's about making value judgements, so it reminds us that the algorithms are making value judgements all the time, but those value judgements may run into each other in these cases.

Judgements about what not to show are contentious because they are both in and not in the service of the user. Sometimes this is for the sake of the user community not seeing something. Sometimes it's, "Someone might want to see this but I'm not going to show it to them." And it's a place where the organizing principles that otherwise apply have to be curtailed and set aside.

And also, when we worry about offensive material, inappropriate material, it urges us to want to decide who's talking and who's responsible. And that question of responsibility and accountability is one of the lenses where we can bring algorithmically-produced information into view.

Finally, it also works against the kind of broad, probabilistic perspective that I think is more native to contemporary algorithmic use. It was not surprising to me that Claudia's information yesterday was about advertising. The idea that you can make probabilistic guesses based on what people have been searching and what they're purchasing. And if you get two clicks in ten thousand, that's a success. In that environment, the one moment someone is offended by content can be washed away.

But when we talk about offensive content, that one moment is highly troubling. So it's that place where one instance of providing the wrong information becomes politically problematic, despite the approach that says most of the time we get it mostly right and that's sufficient.

So let me pick on Google for a little while, since we've been doing that, and think about the way Google talks about whether or not, or in what cases, it wants to censor algorithmic results.

We start with a canonical description that Google has often brought out when people have criticized it for information it's providing and said that it should change the index. This is an instance from early on, when searching for the word "Jew": the first result that would come up on the search page was a highly anti-Semitic page called Jew Watch. And when people realized that this site was coming up as the first result, there was a great deal of criticism calling upon Google to say, "This needs to be removed. This needs to be altered. This is problematic."

And Google made a decision not to alter that index at the time. And they made quite a bit of hay about it, saying they were internally torn, they thought this was a reprehensible site. But in the end, it was important for them not to alter the index. The same kind of answer that they gave in the Bettina Wulff case: It's the Web telling us this. It's the algorithm judging this. If you don't like the results, your critique is with the Web and with this site, not with the index. And if we get into the game of messing with the index and starting to alter things, then we've given up the ghost. It's a problematic move. No provider's been more adamant about the neutrality of its algorithm than Google, and it regularly responds that it shouldn't alter the search results.

So when Google in its "Ten things we know to be true" document or manifesto says, "Our users trust our objectivity and no short-term gain could ever justify breaching that trust," I would say this is neither spin nor corporate Kool-Aid. It's a deeply-ingrained understanding of the public character of Google's information service, internal to Google, and it's one that both influences and legitimizes many of their technical and commercial undertakings.

It doesn't mean they don't alter the index. But it's something that they offer as an explanation for how to think about the index and how to think about their role. Part of this is that the algorithm offers a kind of assurance, a kind of technical and mathematical promise. Frank [Pasquale] in his paper [p.1] calls it the patina of mathematical rigor. And that lends them a kind of safe position from which to respond to criticism.

Google Suggest seems to be a different story. Google Suggest is the function where if you begin to type a search query it will try to fill in what it's guessing you're searching for. And we can see that this is meant as a pretty productive thing. We've done it as a workshop. We've typed in "governing a," we get "governing algorithms" as the top hit. A nice predictive effort to fill in a space that I might very well have been typing in.

People have made light of the fact that it comes up with some pretty bizarre answers sometimes. A curious kind of hieroglyph about what it is that people are looking for in the world. And then sometimes you can begin to type and it will fill in some information. So, "how to ki" gives us something, but as soon as you put two more l's in, it stops… And it doesn't give us any more results.

I'll say first, there are a number of queries for which this will happen, where it simply refuses to give you auto-suggestions. It's not as if there are no search queries ever made that began with "how to kill." I think Google's worry, for a number of things, could be that the next word is "yourself," and that's really troubling. They've had a lot of concerns about whether they're providing information to someone in a suicidal state. Maybe they're worried about techniques, teaching people how to do things.
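A suppression list of this kind is easy to sketch. The blocklist, the query log, and the ranking rule below are hypothetical; Google has not published how its autocomplete filtering actually works.

```python
# Prefixes the service refuses to complete, no matter what the logs say.
SUPPRESSED_PREFIXES = ("how to kill",)

def suggest(prefix: str, query_log: dict[str, int], k: int = 3) -> list[str]:
    """Rank past queries that extend the typed prefix by frequency,
    unless the prefix itself is suppressed."""
    p = prefix.lower()
    if any(p.startswith(bad) for bad in SUPPRESSED_PREFIXES):
        return []  # refuse to autocomplete, regardless of the log
    candidates = [(count, q) for q, count in query_log.items()
                  if q.startswith(p)]
    return [q for count, q in sorted(candidates, reverse=True)[:k]]
```

Note what the sketch makes visible: "how to ki" still surfaces whatever users have actually typed, while two more letters cross the blocklist boundary and the suggestions vanish, exactly the cliff-edge behavior described above.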

How is it that this instance compares to the Jew Watch instance? In both cases, an algorithmic result, based on mathematically rigorous assessment of user search queries and activity, produces a result that's problematic for Google and troubling to people. And yet in the first case they are proud to say, "We don't alter the index no matter how reprehensible the result that is returned," and in this case they say, "No problem, we'll take things out"? Why are they so willing to censor the auto-complete function when they're usually so adamant about not censoring search results?

Let me give you a hybrid case. A couple of years ago there was an instance where if you typed "Michelle Obama" into an image search, the very first image that cropped up was a highly racist, hideous Photoshopping of her face with the face of a baboon. Quite awful, stirring up some very old and troubling racist tropes in American society. And similar to the Jew Watch incident, people began to complain and said, "Google should do something about this. This is reprehensible." And their first answer was exactly as before. They said, "We don't change the index. We find it reprehensible, but we don't change the index. This is the Web telling us, for whatever reason, that people are linking to this. That's what we're calculating, and sorry."

But criticism did not subside, and Google made a second decision. The second decision was that they would alter the index. They would take the image out of their image search. They replaced their ad banner with a little message: "this index has been altered, click here to find out why." So the attempt to say "the algorithm prevails; this information has to stand because the algorithm measures something, and it's better to let it do what it does than to start mucking about," fell in response to this criticism.

So maybe race trumps religion? Maybe this was more horrific than Jew Watch. Maybe because it's a sitting First Lady, right? Those explanations don't quite stand. What I would suggest is that there is a different sense of proximity to the results. Maybe in legal terms that would be "liability," but I would say it's beyond that.

When Google serves up the link to Jew Watch, it is a result that must be clicked on. So the user's still making a gesture that says "I will go visit this." Google has offered it up at the top, but it hasn't actually delivered it unto you. The Michelle Obama image is actually recreated in the image search, in thumbnail form. So Google's a little closer to providing the image; it actually made it visible to you. Auto-suggest actually makes suggestions. It actually pops those things in. In fact, it's not only something that seems to be coming out of Google's mouth, it's putting words in your mouth. "Isn't this what you meant? Didn't you mean, 'how to kill yourself?'"

And that proximity is a really interesting, troubling question, because it raises the question of whose voice we think the algorithm is. And the kind of murkiness, the kind of fraught relationship we have to this idea that at an arm's distance the tool produced that information. You don't like Jew Watch? The tool produced that information. That's the Web, and it's carefully calculated, and we're just over here doing our job. That distance gets narrower and narrower as we think about where the results are being provided from.

So we have, I would argue, a fraught relationship to the idea of algorithms and what they produce. Sometimes they are reassuringly offered as neutral tools, a reflection of what is. Sometimes they're a measure of user activity, reflecting of us. And sometimes they're the voice of the platform, what they say. What does Siri say? What does Apple say? What does Google say?

And this is more of a question of what do naïve users think? It's not like, "Oh, somebody thinks Siri's telling me the answer." It's how have we positioned these things as being the voice of the provider, or the voice of the tool, or the voice of our activity reflected back to us. And those things are not simple, and they have not been sorted out.

Let me do one more example, because it's sort of fascinating to me and because there's a different set of algorithms that I think we have another sort of fraught relationship with. I'm going to pick on Google a little bit again, sort of. But curation of algorithmic results for a different reason.

There are a number of tools that I would call internal popularity mechanisms. So, how platforms like to tell us what we're all doing on that site. Things like what's the most popular video? Things like what's the best-selling book? Things like what's been most-emailed or most-viewed or most-often read? And then something which I've spent too much time thinking about, the Trends on Twitter: What are people talking about right now?

These popularity mechanisms are really fascinating to me, presenting back in real time, which I think is important. As part of the information resource itself, these measures of interest, these measures of activity, are powerful ways to keep someone on the site. Maybe they'll click on that article, and maybe it's more likely than random to be an interesting one. And there are lots of measures of activity and popularity that can be summoned up.

So here the knowledge is both from us and about us, and the question of whose voice it's speaking in is once again tricky. This is not new, being told back to us what's popular. And it's not an attempt to be naïve and say that we've always expected those things to be an unhandled, uncurated measure. Telling us what's popular is always a mechanism that encourages us to buy something or just read something, encouraging us to think about something.

But it's important to ask what's the gain for providers to make such characterizations? How do they shape what they're measuring? And how do these algorithmic glimpses help constitute and codify the very publics that they claim to measure? The publics that would not otherwise exist except that the algorithm called them into existence. That makes it, I think, even trickier when we begin to adjust the results.

YouTube made an announcement in 2008 that it was going to begin to algorithmically demote certain videos, videos that they didn't find so problematic that they were going to remove them according to their guidelines, but were suggestive enough and adult enough that they wanted them out of their most-viewed, most-favorited lists.

And I thought this was a really peculiar thing to do, right? It's kind of like, "You just said that they were kind of okay. They don't break the rules. But we're going to obscure them a little bit." This is a very clumsy way to keep bad stuff away from the wrong people. It's still there, it's still working.

So the question was, what else does this do? What else does that measure of popularity do besides being an actuarial measure of what's popular? Well, it turns out that YouTube uses those algorithmic measures to pre-populate the front page. When a new user or an unregistered user shows up, they fill that page with videos you might like, and they base that on popularity. What they don't want to do is have a new user show up on YouTube and find a bunch of bikini videos [and] get the wrong impression. Even though those bikini videos are in YouTube and they said they're okay.

So rather than curating the list of popularity because something is so offensive that it shouldn't be there, a kind of classic censorial removal, they're curating their own self-presentation by altering the algorithm. What do we not measure, so that the product can do work for us, can populate that front page well?
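The two-tier arrangement described here, allowed on the site but excluded from the popularity measure that seeds the front page, might look something like this sketch. The field names and ranking logic are invented; YouTube's implementation is not public.

```python
from dataclasses import dataclass

@dataclass
class Video:
    title: str
    views: int
    suggestive: bool = False  # permitted under the guidelines, but demoted

def most_viewed(videos: list[Video], k: int = 2) -> list[str]:
    """The popularity chart: rank by views, silently dropping
    demoted videos before counting."""
    eligible = [v for v in videos if not v.suggestive]
    return [v.title for v in sorted(eligible, key=lambda v: -v.views)[:k]]

def front_page(videos: list[Video], k: int = 2) -> list[str]:
    # The curated chart, not raw popularity, populates the front page,
    # so a new visitor never encounters the demoted-but-permitted videos.
    return most_viewed(videos, k)
```

The demoted video stays watchable and may even be the most viewed on the site; it is only the measure of popularity, and everything downstream of it, that pretends it isn't there.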

And I'm just going to take one minute to give this idea, because it will connect to Kate's talk. I want to think about this idea of what I've been calling calculated publics, and it's an unformed idea. Maybe Kate will get it even smarter than I've been able to. These are the algorithms that measure up, "here's what's going on right now, here's what people care about, here's what's highly-ranked," which are very easy to add as features, very easy to offer. The sites have that data, and what a convenient way to maybe get someone to stay on the site a little longer, read one more article, watch one more video.

But when they are offered up as "this is an insight into what people care about," what's trending, what's most watched, what's most important, do we read off of those an idea of the public that it represents? And if that's not only algorithmically measured, which means certain people are being counted, certain actions are being counted, certain things are being weighted, but we're also secondarily using that as a way to constrain it… Not because we want to show what's popular but because we want to show a carefully curated version of what's popular (because it serves the front page, because it makes recommendations). Then what kind of assumptions are we making about what this might seem to offer as a true glimpse of the public, versus a kind of curated version of the public?

There's a fundamental paradox in the articulation of algorithms. Algorithmic objectivity is an important claim for a provider, particularly for algorithms that serve up vital and volatile information for public consumption. Articulating that algorithm as a distinctly technical intervention, as Google often does, helps an information provider answer charges of bias, error, and manipulation. Yet at the same time, there are moments when a platform must be in the service of community and its perceived values. And algorithms get enlisted to curate, or are curated.

And there's commercial value in claiming the algorithm provides better results than its competitors, provides customer satisfaction. In examining the articulation of an algorithm, we should pay particular attention to how these tensions between technically-assured neutrality and the social flavor of the assessment being made are managed, and sometimes where they break down.

Thanks for your patience.

Further Reference

Two other presentations followed this, in response:

The Governing Algorithms conference site with full schedule and downloadable discussion papers.

A special issue of the journal Science, Technology, & Human Values, on Governing Algorithms, was published in January 2016.
