Welcome, everybody. My name is Ed Finn. I am the Director of the Center for Science and the Imagination at Arizona State University. I’m also the Academic Director of Future Tense. And you are here at Future Tense. This is a partnership between the New America Foundation, Arizona State University, and Slate magazine that explores emerging technologies and their transformative effects on society and public policy.

Central to this partnership is an event series here in Washington DC. There’s a blog on Slate, and we also do all sorts of things in different places. For example, we recently did an event series called My Favorite Movie with National Science Foundation Director France Córdova. There was an event on afrofuturism in New York recently. And watch this space in January for an event on human/robot interaction.

And today, of course, we’re going to be talking about algorithms. For today’s event, The Tyranny of Algorithms, you can follow the discussion online with the hashtag #tyrannyofalgorithms, and follow us at @FutureTenseNow, because the algorithms will be following us, and so I think you guys should, too.

So, The Tyranny of Algorithms is obviously a polemical title to start a conversation around computation and culture. But I think that it helps us get into the cultural, the political, the legal, the ethical dimensions of code. Because we so often think of code, and code is so often constructed, in a purely technical framework, by people who see themselves as solving technical problems. And so, I’m not interested in making a hard argument today, and I don’t think that others here are, about algorithms as truly tyrannous. But I think that it’s a way of posing the question about what’s really a sort of gravitational tug, the gravitational pull of computation.

As an example, when I came here yesterday…I wear one of those little Fitbit activity trackers. And in the chaos of dealing with our two young children and getting out the door to get to the airport, I forgot my Fitbit. And I felt the tug, because I thought to myself, here I am, I’m walking all over Washington DC. I’m getting all this great exercise, and it counts for nothing, because I’m not tracking it and it’s not going to be part of my statistics. And then I also forgot my watch, and I realized that I could live without the Fitbit but I could not moderate this event without a watch, so I had to go buy one last night.

So, that notion of algorithms as cultural systems as well as technical systems is really important. They embed mathematical rigor, incredibly clever and sophisticated rational thought, and intellectual constructions. But they also include all the faulty assumptions and incomplete models and good enough approximations of reality that aren’t actually good enough that humans inevitably create whenever we’re building anything. And so I want to lay out a couple of ideas around the notion of algorithm and what algorithms know as a way to kick things off.

And first, why “algorithm” as opposed to another word like computation or big data? For me, algorithms are the place where the rubber meets the road, the moment of intersection between the idealism of mathematics, the idealism of policy, the idealism of big ideas, and the pragmatism of actually building a system that functions in the world.

So in a way, algorithms are recipes, and I’ll talk about a couple of definitions of algorithms. And I like that term “recipe” because it expresses both the notion of logical order and sequencing, the practicality of a sequence of steps that are simple, straightforward, and will have dependable results. But also the intimacy, the idea that algorithms are not just out in the world, they’re increasingly very close to us. For many of us, fractions of an inch from our bodies right now in our smartphones and our smart devices, and they’re in our heads in all sorts of interesting ways.

So what exactly is an algorithm? I’ll give you three simple definitions. The engineer’s version is that it’s a method for solving a problem. An algorithm is a dependable set of steps that you can take to solve a problem. Typically, let’s call it a mathematical or logical problem.
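That engineer’s definition, a dependable set of steps that solves a problem, can be made concrete with one of the oldest recorded examples, Euclid’s algorithm for the greatest common divisor. This is my illustration, not one the talk cites:

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm: a finite, dependable sequence of steps.

    Each pass strictly shrinks b toward zero, so the procedure is
    guaranteed to terminate, and it terminates with the greatest
    common divisor of the original inputs.
    """
    while b != 0:
        a, b = b, a % b
    return a

print(gcd(252, 105))  # 21
```

Nothing about the recipe changes from one run to the next; that dependability is exactly what the engineer’s definition asks for.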

The mathematician’s version, which goes back to Alan Turing and Alonzo Church and others who were thinking about the foundations of computer science, is that algorithms occupy the space of effective computability. That is, all the things that can be computed in a finite amount of time. So it’s the space of mathematics that can be computed, that can be translated into a sequence of steps that a computer can solve in a finite amount of time. And we’ll turn back to that in a moment, as well.
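To see why “a finite amount of time” carries real weight in that definition (my example, not the speaker’s): the Collatz procedure below is trivial to write down as a sequence of steps, yet no one has proven it halts for every input, so it sits right at the edge of effective computability.

```python
def collatz_steps(n: int, limit: int = 100_000) -> int:
    """Count the steps for n to reach 1 under the Collatz rule.

    The rule is easy to state (halve even numbers, map odd n to 3n+1),
    but whether it terminates for *every* positive integer is an open
    question, so a step limit guards the loop.
    """
    steps = 0
    while n != 1:
        if steps >= limit:
            raise RuntimeError("step limit exceeded")
        n = 3 * n + 1 if n % 2 else n // 2
        steps += 1
    return steps

print(collatz_steps(27))  # 111
```

Every input anyone has tried does reach 1, but a proof that the loop always finishes is exactly what effective computability demands and what we do not have here.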

And then the romantic’s version—I’m not sure if this quote is real, but it’s too good to ignore, by the computer scientist Francis Sullivan. It’s in course materials from one of the classic computer science courses that I took as an undergrad, Algorithms and Data Structures. So, this quote from Francis Sullivan says,

For me, great algorithms are the poetry of computation. Just like verse, they can be terse, allusive, dense, and even mysterious. But once unlocked they cast a brilliant new light on some aspect of computing.

And I think that notion of poetry is really interesting, because it suggests that algorithms have some kind of an interior life, or that there is mystery. That algorithms are not only the predictable products of rational thought. That they can surprise us. And I’ll come back to that as well.

So, algorithms, where do they come from? They’ve been around for a long time. The word comes from Muḥammad ibn Mūsā al‐Khwārizmī, a famous Persian mathematician who lived in Baghdad and whose work in translation helped reintroduce algebra to the West; “algorithm” is a Latinized adaptation of his name.

What do algorithms know? If you think about that notion of effective computability, it embeds a desire, or a quest, to make more things effectively computable. One of the things we’re seeing now, if you think about the stuff that computation has worked on: we went from digitizing maps and storefronts to now, you can stroll through the British Museum in virtual space.

Algorithms are driving cars. They’re helping people get dates. They’re grading essays. They’re writing newspaper articles. They’re playing Jeopardy. They’re diagnosing patients. They’re evaluating loans. You’ve probably heard that metaphor of the rising tide, that algorithms are gradually becoming capable of solving more and more sophisticated problems and challenges, and perhaps displacing humans who used to do that work.

I like to think of this in the context of ubiquitous computing, and the notion of a computational layer, a little bit like Borges’ map and the territory. That layer of sensors and chips and devices is getting thicker every day, and it’s getting more nuanced. And algorithms are the machinery, the vehicles, the tools, the objects that live in that layer.

And this is a self-perpetuating logic. The layer encourages us to build more, right. To digitize further. To bring more things into the web of computation, and make more things effectively computable.

So, I’ll close with just a couple of key words on how algorithms know. The first is abstraction. Abstraction is the central intellectual maneuver that allows us to use computational systems to engage with the material world. Abstraction is an incredibly powerful tool, and at times it can be dangerous as well. There’s the classic joke about the physicist who has to calculate something in a barn. He says, “Well, let’s assume every cow is a sphere.” That’s a classic abstraction move.
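The spherical-cow joke doubles as a description of how computational abstraction actually works: replace a messy object with a model simple enough to compute, and accept what the model throws away. A toy sketch (the radius is invented purely for illustration):

```python
import math


def spherical_cow_volume(radius_m: float) -> float:
    """The physicist's move: model the cow as a sphere of given radius.

    The abstraction makes the volume trivially computable -- and it
    quietly discards everything about the cow that isn't sphere-shaped.
    """
    return (4 / 3) * math.pi * radius_m ** 3


# A "cow" abstracted to a sphere of radius 0.6 m:
print(round(spherical_cow_volume(0.6), 3))  # 0.905 (cubic meters)
```

The danger the talk flags lives in that comment: the formula is exact for the model, but the model’s fit to any actual cow is an assumption the code never checks.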

Another way that algorithms want to know is the notion of anticipation. In 2010, Eric Schmidt famously said, “I actually think most people don’t want Google to answer their questions. They want Google to tell them what they should be doing next.” And that notion of anticipation is hugely powerful and thought-provoking.

Surprise. This is where I think things get really interesting, this notion of an interior life for algorithms, because our computational systems increasingly surprise us and do things that we didn’t expect them to do.

And finally, serendipity. That serendipity is something that is in many ways generated or manufactured. That it can be produced, and that the ways we produce serendipity now are different from the ways we used to produce it.

I hope that we will address these and many other topics in the conversations to follow, so let me invite our first two panelists to come up now.

Further Reference

The Tyranny of Algorithms event page.

