Ed Finn: Jennifer Golbeck is an Associate Professor in the College of Information Studies at the University of Maryland, College Park. She also directs UMD’s Human-Computer Interaction Lab, where she studies how people use social media and thinks of ways to improve their interactions. Ian Bogost is the Ivan Allen College Distinguished Chair in Media Studies and Professor of Interactive Computing at the Georgia Institute of Technology. He is a founding partner at Persuasive Games, LLC, and a contributing editor at The Atlantic. So, welcome both of you.

Alright. So, our topic is “What should we know about algorithms?” What should we know about algorithms, Jen?

Jennifer Golbeck: You know, so I talk to people a lot about algorithms, and what I work on as a computer scientist is building algorithms that can take the digital traces you leave behind, whether it’s from the Fitbit or especially social media, any of these traces, and use them to find out secret things about you that you haven’t volunteered to share. Because all kinds of things about you come through in those patterns of behavior, especially when you take them in the context of hundreds of thousands, or millions, of other people.

So when I go talk about this, the thing that I tell people is that I’m not worried about algorithms taking over humanity, because they kind of suck at a lot of things, right. And we’re really not that good at a lot of things we do. But there are things that we’re good at. And so the example that I like to give is Amazon recommender systems. You all run into this on Netflix or Amazon, where they recommend stuff to you. And those algorithms are actually very similar to a lot of the sophisticated artificial intelligence we see now. It’s the same underneath.

And if you think about it, most of the time the results are completely unsurprising, right? “You bought this Stephen King novel, here’s ten other Stephen King novels.” Sometimes they’re totally wrong, and you’re just like, why would you ever think to recommend that to me? And then sometimes we get this sort of serendipity that you mentioned, these great answers. And my favorite example is that I had bought The Zombie Survival Guide, which is exactly what the title suggests, like an outdoor survival guide but for zombies. And I read it very quickly, and the next day I go back and Amazon is like, “Oh, you know, since you bought The Zombie Survival Guide you might also like…” and it has other books by the same author, World War Z, which was made into a Brad Pitt movie which you maybe saw, some other zombie books, a couple of zombie movies, and then this camping axe with a pull-out 13″ knife that’s in the handle? And I was like, “That’s exactly what I need.” The book was telling me this. And then I was like, okay, probably not something that I need. But I bought it anyway. I thought it was just such a great example of like, I never would have gone looking for it, but it was such a cool thing to recommend.
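
As a rough illustration of what sits underneath that kind of recommendation, here is a minimal sketch of item-to-item similarity over purchase histories. The purchase data is invented, and Amazon’s real system is far more elaborate and not public, but the basic “people who bought X also bought Y” overlap logic looks something like this:

```python
# Minimal item-to-item recommender sketch: recommend items whose buyer
# overlap with what you already bought is highest (Jaccard similarity).
# The purchase data here is invented for illustration.

purchases = {
    "ann":  {"zombie survival guide", "world war z", "camping axe"},
    "bob":  {"zombie survival guide", "world war z"},
    "cara": {"zombie survival guide", "camping axe", "tent"},
    "dev":  {"stephen king novel", "tent"},
}

def jaccard(a, b):
    return len(a & b) / len(a | b)

def recommend(user_items, purchases, top_n=3):
    # Who bought each item?
    buyers = {}
    for user, items in purchases.items():
        for item in items:
            buyers.setdefault(item, set()).add(user)
    # Score candidate items by how much their buyers overlap with the
    # buyers of items the user already owns.
    scores = {}
    for owned in user_items:
        for item, its_buyers in buyers.items():
            if item in user_items:
                continue
            scores[item] = scores.get(item, 0) + jaccard(buyers[owned], its_buyers)
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

print(recommend({"zombie survival guide"}, purchases))
# e.g. ['world war z', 'camping axe', 'tent']
```

On toy data like this, the obvious recommendation (World War Z) and the odd one (the camping axe) both fall out of exactly the same overlap computation, which is part of why the results swing between unsurprising, wrong, and serendipitous.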

And so, I think the thing to know about algorithms is that that’s generally what they do. They usually tell us stuff that’s not super surprising, or that we kinda could’ve figured out on our own, but sometimes they give us great insights, and sometimes they’re wrong. And just like you don’t watch, in order, everything that Netflix recommends, or buy, in order, everything that Amazon suggests that you should buy, the thing I think we really need to keep in mind with a lot of algorithms today is that they’re going to tell us stuff, but we absolutely have to have intelligent humans taking that as one piece of input that they use to make decisions, and not just handing control over to the algorithms and letting them make decisions on their own, because they’re going to be wrong a lot of the time, or they’re… They’re not going to do things as well as a human would do.

Finn: Ian, what do you think?

Ian Bogost: I’ve become real­ly inter­est­ed in the rhetor­i­cal reg­is­ter of this word, algo­rithm. How we use it And I did this piece for The Atlantic ear­li­er this year called The Cathedral of com­pu­ta­tion” in which I sort of said any­time you see the word algo­rithm,” espe­cial­ly in print, in the media, if you try replac­ing it with God” and ask if the sen­tence kind of works, it usu­al­ly does. So there’s this anx­i­ety we have, you know. Google has tweaked its algo­rithm” or What are the algo­rithms doing to us? How are they mak­ing deci­sions on our behalf?” and in what we are we sort of pledg­ing feal­ty to these algorithms?

So there’s a sort of techno-theocratic reg­is­ter to the con­cept of the algo­rithm. And there’s this mys­ti­cal notion about it, too. I think one of the rea­sons we love algo­rithm” instead of com­pu­ta­tion” or soft­ware” is real­ly we’re talk­ing about soft­ware, is what we’re talk­ing about. When we say algo­rithm, we invest this kind of Orientalist mys­ti­cism into fair­ly ordi­nary expe­ri­ences and ser­vices and so forth. 

And you know, this idea of the poetry of computation is interesting because I think it helps us kind of get under the skin of the rhetorical nature of the word algorithm, and not just the word but how we use it. When you think about that idea of the poetry of computation, it should kind of terrify you that okay, if we’re going to run our lives, our airplanes, and our automobiles, and our businesses on poetry, on these sort of poetic modes— It’s not because we distrust poetry, or because poetry isn’t good at what it does. It’s because what poetry does explicitly is to defamiliarize language. To take ordinary speech and to show us something about that speech. To reconfigure the words that we normally use in a different way.

And this aesthetics of the algorithm common in computer science, of elegance, of simplicity, of tidiness, of order, of structure, rationalism, all of those sorts of features, are fantasies. To some extent, these are messy, disastrously complicated computational and non-computational systems. Like Amazon has a logistics system, and warehousing, and all these factory workers and warehouse workers they’re abusing and so forth. And all of that stuff, we’d like to kind of cover over it. But when we’re able to simplify it, to kind of point to this mystical God-like figure and say, “Oh, the algorithm is in charge,” then we feel better about that gesture.

So maybe one way of thinking about algorithm is as a kind of synecdoche, you know, that rhetorical trope where you take a part and you use it to refer to the whole. So, we talk about Washington instead of the federal government. And when we do something like that, we kind of black box all this other stuff. And we pretend like we can point to Washington and that that sufficiently describes the way that the federal government does or does not function. Which of course it doesn’t do. It allows us to simplify the abstract.

So yeah, the technical aspects of algorithms, I think, have become much less interesting, culturally speaking, than the rhetorical functions of algorithms. How we see this term and this concept weaving its way into our perceptions. Into the media, into ordinary people’s conceptions of the things that they do, and kind of— “Oh, Fitbit knows something about me, and so I’m going to use it.” I think those are somewhat underserved perspectives.

Finn: I think yeah, as we engage with these systems more, they become more and more important for everyday individuals, no longer just technical experts or somebody who’s designing an airplane or flying an airplane. And we’re all dependent on algorithms in many ways, now. And in many new ways that we weren’t even, say, ten years ago.

I’m real­ly inter­est­ed in this notion of defa­mil­iar­iza­tion that you both brought up in dif­fer­ent ways. In part this is about black­box­ing things, and abstract­ing things. In part, it’s also about sort of the unin­tend­ed con­se­quences, you might say. You were talk­ing about the dig­i­tal traces that we leave online, which is a top­ic of great inter­est for me as well. 

And one thing I think about is all the copies of ourselves, or the versions of ourselves, that are created. These profiles that are aggregated by different companies and then potentially sold. How little access we have to them. And so, is that something that you think about as well, Jen? Or do you think that there are— Is the conversation moving forward about that? Are people learning how to read these digital versions of ourselves more effectively? Or is this a morass we’re just sort of beginning to work through?

Golbeck: Yeah. I mean, I want to say that we’re getting more sophisticated about it. But then if you actually look at it, I’m not sure that we are. And there’s so many facets to this. But I guess there are a couple that I think are interesting.

One, I like to start with that Netflix/Amazon example because it’s a way that we’re all interacting with this technology that, if we talk about it, sounds like these terrifying black boxes that maybe are so much smarter than us, and we don’t even know how to handle it. Except we totally do, because we use it on Amazon and Netflix all the time, right? And that’s exactly the same thing as the scary AI that Stephen Hawking says is going to ruin humanity, right? We actually know really well how to deal with it when it’s presented in that way.

On the other hand, if we look at the kind of virtual versions of ourselves, I think we can look at our own virtual versions and understand and process those. And when I talk about the kind of algorithms I make, I get a lot of pushback. Like, “Well you know, the version of myself that I have on Twitter, that’s a really professional version. That’s not how I actually am, and so maybe it’s not going to find the right things out about me.” And maybe that’s true and maybe not. Sometimes, depending.

Finn: Are you saying that the axe did not feature prominently in your Twitter persona?

Golbeck: Actually, you probably could totally find the axe, looking in my Twitter persona. I talk a lot about zombies online.

But you know, we can say that for ourselves, right? But then, if you look at how we treat other people online, these digital versions, and especially when people get themselves in trouble, the one bad thing that somebody does online becomes the entirety of that person, as we view them. And algorithms can see beyond that. But we as humans often can’t, where this person put out a tweet that seems racist. And then that person starts getting death threats, and gets fired from her job, and all of these bad things happen, because the one bad thing that you did that gets shared widely, and that there’s a record of, becomes the representation of you as a person to the Internet.

And so we have all these digital traces, but it’s really hard for us as humans to process those. And as just one more example, we’re doing a project now looking at people on Twitter who have admitted that they got a DUI. And we’re looking at what sorts of things they say, and whether you can check if they’re kind of changing their ways or whatever. And I had my students presenting this week: “Here’s the people we found who said they had DUIs. And here’s this guy who got a DUI.” And then the student was like, “Actually, he seems like a really good guy, you know. Here’s this stuff with the baseball team he volunteers for. And here’s these things with his kids.”

And I was like oh, we have to be morally ambiguous? Like, we can’t just hate him because he got a DUI and admitted it? Like, there’s all this other good stuff? And we’re so used to kind of seeing these digital traces and making our own inferences like oh, because this is there, that’s a bad person, or that’s a good person. And actually, we’re all very complicated people, and we all do bad things and good things. But we’re not great at judging it when we have a full record of people. And I think that that’s a problem that comes with all this, is that we don’t forget, and things don’t fade. Everything is there, and we have a hard time dealing with that. Algorithms can kind of deal with it a little bit better, or we can program them to. But as humans we have a hard time handling that.

Bogost: We also take computers to have access to truth in a way that we don’t take poetry to, for example. So to kind of come back to this poetry business, if the purpose of poetry is to defamiliarize words, then the purpose of algorithms is to defamiliarize computers. They show us how computers work, and they don’t work, kind of. Or they work badly, or they work in this very wonky, strange way. And you see it when you go to Amazon. You see that you ordered some button cell batteries because you needed two of them. And then it’s like, “Oh, perhaps you’d like these other button cell batteries.” And no, no, but I see what you’re doing. I see the caricature that you’ve built of me, and ha ha that’s inter—

But then we flip that on its head and we’re like oh, actually this is truth. Amazon knows something about me. Google knows something about me that’s true, and therefore I can know something about you by seeing the way that Twitter or Facebook or whatever is re-presenting you to me. Whereas we tend not to do that with poetry if you wrote— You know, here’s your book of high school poems. It’s like oh yeah, that’s a sort of caricature of you at a particular mo—ha ha ha, we’ll look at that and then put it aside and understand that you as an individual are more than that set of words, right.

Golbeck: If I can give you a quick example on that, my dissertation work was on computing trust between people online. So, if we didn’t know each other, could I guess how much I trust you? And I was presenting this—this is like 2004, 2005, so early in the social media space. And I was giving this talk like yeah, you know, we can tell if our algorithms are good because you’ll say how much you trust me, and then I’ll compute it, and I’ll compare what the algorithm said to what you did.

And I would get these answers from these older computer scientists who were like, “Well, if the algorithm says on a scale of one to ten you should trust me at a three, but you said a seven, maybe you’re wrong.” Like, the algorithm says a three, so that’s probably right, as opposed to all of our personal history of interactions letting you make this very human judgment. Like oh, but the algorithm says three, so maybe you, human, are wrong.
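
The actual algorithms from that dissertation work are more involved, but the core intuition, asking the people you trust how much they trust a stranger and weighting their answers by how much you trust them, can be sketched in a few lines. The network and the one-to-ten ratings below are invented:

```python
# Toy trust inference: if I don't know you, ask the people I trust how much
# they trust you, and weight their answers by how much I trust them.
# Ratings are on a 1-10 scale; the network is made up and acyclic
# (no cycle handling here, for brevity).

trust = {
    "alice": {"bob": 9, "carol": 4},
    "bob":   {"dave": 8},
    "carol": {"dave": 2},
}

def inferred_trust(source, sink, trust):
    """Direct ratings are used as-is; otherwise take the weighted average
    of neighbors' (possibly inferred) trust in the sink."""
    if sink in trust.get(source, {}):
        return trust[source][sink]
    num, den = 0.0, 0.0
    for neighbor, weight in trust.get(source, {}).items():
        rating = inferred_trust(neighbor, sink, trust)
        if rating is not None:
            num += weight * rating
            den += weight
    return num / den if den else None

print(inferred_trust("alice", "dave", trust))  # (9*8 + 4*2) / (9+4) ≈ 6.15
```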

Bogost: It’s super inter­est­ing to think, Well, what does the com­put­er think about me?” But not so inter­est­ing to think, I absolute­ly trust the com­put­er to make deci­sions about me.”

Finn: Yeah, I think that battle of trust is really interesting, and the ways in which we now— The space of human agency and the space of shared agency, where we’re sort of collaborating with computational systems. And then the space where we just sort of trust a computer to do something. Those are all moving around in really interesting ways.

For example, now I find myself frequently blindly obeying the instructions of Google directions about which way I should drive home. And then sometimes questioning my pathetic slavishness to this system that obviously doesn’t get it right all the time. And then pausing, because of who I am, wondering to what extent I’m just a guinea pig for them to continue testing, that this isn’t actually the fastest route, it’s just that I’m in Test Group B, to see whether that road is a good road.

So, this poses a question I think also comes out of “The Cathedral of Computation,” Ian, that we need to learn how to— So, seeing is one metaphor. I also tend to think of it in terms of literacy and learning how to read these systems. So, how do we begin to read the cultural face of computation?

Bogost: Yeah. It’s a great ques­tion. It’s an impor­tant prob­lem. So, the com­mon answer, let’s start there, is this sort of every­one learns to code” non­sense that’s been mak­ing the rounds? Which, it’s not—I mean, I call it non­sense just to set the stage, right. But, it’s not a bad idea. You know, why not? It seems like it’s rea­son­able to be exposed to how com­put­ers work, and to some extent you learn some music, you learn some com­put­ing. Great. 

But really the reason to do that is not so that you can become a programmer, but so you can see how broken computers really are. And you put your hands on these monstrosities, and just like anything they don’t work the way you expect. There’s this library that’s out of date and some random person was updating it but now they’re not anymore. And it was interfacing with this system whose API…who knows how it works anymore?

And once you kind of see the messiness, the catastrophic messiness of actual working computer systems, then it’s not that you trust them less or that now we can unseat their revolt against humanity. Nothing like that, but rather it brings them down to earth again, you know. But in addition to that, the way that we talk about these systems, and the fact that we talk about them, that we talk about them more, is also important. That moment with Amazon is a moment of literacy. It’s a moment of you as an ordinary person recognizing, “Okay, I see the way that Amazon is thinking that it has knowledge,” and then working with that, and thinking about it, and talking about it. That kind of literacy is just as, maybe even more, important, because it’s right there on the surface, and we can read it.

And then I think there’s a third kind of lit­er­a­cy that’s impor­tant to cul­ture, which is the way that we dis­cuss these sub­jects in the media. It real­ly does mat­ter. And the more that we present the algo­rithm as this kind of god when we write about it, espe­cial­ly for a gen­er­al audi­ence, then the more we don’t do our jobs of explain­ing what’s real­ly going on and how a par­tic­u­lar sub­sys­tem of a com­pu­ta­tion­al ele­ment of a very very large orga­ni­za­tion that has all sorts of things hap­pen­ing, we do a dis­ser­vice to the pub­lic in that respect.

Golbeck: I agree with everything you said. And I think this literacy of just being able to understand what we know and what we don’t is so critical. Because when I talk about this artificial intelligence that I do, it’s completely unsatisfying, whether I’m writing about it or talking to people, to say you know, what we do is we took all this data, and we put it in this black box, and we basically have no idea what goes on in there. And it spits out the right answer, and we kinda know it will do that in predictable ways. But we can’t tell you what it’s doing on the inside. We’ve spent a couple decades researching that, and we can’t. That’s a completely not-exciting article.

So what we do is we say, “We put your stuff in this…box, and it may be a black box. And it spits out this answer, and look, here’s some stuff that we kind of computed intermediately that sounds like it’s some insights that make you feel like you’re getting a story.”

So, the example that I use most is we take your Facebook likes, and we put them in this black box, and it can predict how smart you are. And that’s not too satisfying. And so we say, “Yeah, and if you look at it, here’s the things that you like that are indicative of high intelligence. Liking science and thunderstorms and curly fries.”

And everyone goes, “Curly fries?”

And then when I talk about it—especially with, like, market researchers—people get really angry. “How can you know that’s going to be true? And it’s going to change.” And it’s like, I’m just telling you that for a story. We don’t use that. We don’t care about that. It’s not part of the computational picture, but it allows us to tell a story that makes it feel like there’s something human going on in there. And that is a struggle for me, because you want to tell this story, “Here’s what these algorithms do, and it’s unpredictable and crazy.” But you can’t tell a story with just, like, “black box spits out answer.”
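
That gap between the statistical machinery and the “curly fries” story can be made concrete. In the sketch below, a classifier is trained on a purely synthetic user-by-likes matrix (standing in for real like data, not the actual research pipeline): the model as a whole predicts the hidden trait reasonably well, while the single most indicative like, the one you would build a story around, predicts almost nothing on its own.

```python
# Sketch of "likes in, prediction out": train a logistic regression on a
# binary user-by-like matrix. The data is synthetic; no single like matters
# much, but the aggregate pattern is predictive.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_users, n_likes = 5000, 300

# Hidden trait (e.g. "scored high on some test"), never observed directly.
trait = rng.integers(0, 2, n_users)

# Each like is only weakly associated with the trait.
base = rng.uniform(0.05, 0.30, n_likes)       # background like rate
nudge = rng.uniform(0.00, 0.08, n_likes)      # tiny shift when trait == 1
likes = rng.random((n_users, n_likes)) < (base + np.outer(trait, nudge))

model = LogisticRegression(max_iter=1000).fit(likes[:4000], trait[:4000])
print("whole-model accuracy:", model.score(likes[4000:], trait[4000:]))

# The "curly fries"-style story: the single most indicative like...
top = np.argmax(model.coef_[0])
print("top like alone predicts with accuracy:",
      np.mean((likes[4000:, top] == 1) == trait[4000:]))
```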

Bogost: Yeah, but we can reframe that story. I don’t know if this is the best example, but it’s a kind of information derivatives trading that you’re doing, right?

Golbeck: Right.

Bogost: Which, I mean, I don’t know that that’s the the way to talk to the everyper­son about the exam­ple that you— But it doesn’t have to be reframed as com­pu­ta­tion, right. There are oth­er touch­points we have in the world, where like, you know how there’s infra­struc­ture? There are all these high­ways, and you didn’t build them but they were here before you. There are cer­tain com­pu­ta­tion­al sys­tems that were there before us, and we come to them and we actu­al­ly have no idea how they work. We lit­er­al­ly have no idea. So, the work of explain­ing how com­pu­ta­tion­al sys­tems work that doesn’t rely on this appeal to mys­ti­cism, I think is super important.

Finn: I think this question of storytelling is really important. Not only because this is all an elaborate ploy for me to do research on my book project about algorithms, but also because humans are storytelling animals. And storytelling is essentially a process of exclusion, right. It’s selecting the telling example that may or may not represent the broader history, but you have to find the examples in order to tell a story, because humans aren’t going to sit down and read the phone book, right? We’re not going to sit down and read the database.

And so my question is, how do we grapple with storytelling as…is storytelling a fundamentally different way of knowing than what we might think of as computational knowledge? You know, when you’re talking about…the computational approach is the process of inclusion, right. We want to include as much data as possible to make the data set as rich as possible so that the solution will be more complete. Is that a totally alien way of knowing? Are there ways to bridge that divide?

Golbeck: I mean, it’s so hard, right. For the com­put­ers, you absolute­ly want to give it every­thing. And then when you’re talk­ing about what the com­put­ers do, gen­er­al­ly when you’re work­ing with this huge amount of data, which is the excit­ing thing now, you’re end­ing up with not log­i­cal insights but sta­tis­ti­cal insights. And any human can look at the con­nec­tions that are formed and go, That doesn’t make any sense to me except that it tends to work most of the time.” And so we want to tell a sto­ry that says here’s some sta­tis­ti­cal insights, and and let me tell you a few.

But that doesn’t real­ly give a pic­ture, and it’s hard to give a pic­ture, of here’s how sta­tis­tics work,” and lit­tle pat­terns emerge as impor­tant from this big mass of data. It’s a sto­ry that I try to tell all the time. But peo­ple, I have found, latch onto the spe­cif­ic exam­ples and have a hard time grasp­ing the big­ger thing. And I think in terms of com­put­er lit­er­a­cy that that is so much more impor­tant than being able to pro­gram. Programming is great, and you will see what a mess it is. But being able to grasp that this is a sta­tis­ti­cal insight and the indi­vid­ual exam­ple doesn’t mat­ter, that’s the thing that I would like to be able to do better.

Bogost: Yeah. I mean, computers are more like marionettes, or like table saws or something, than they are like stories. They’re these machines that produce things. And you design this machine such that you can then design things for the machine. So you have your table saw, and you make a bunch of jigs so you can get the right cut. And you build this puppet, and then you have to kind of manipulate it in this perverse way that you can’t really even explain, in order that it produces an effect that appears to give life to the creature.

It’s a dif­fer­ent way of think­ing in the sense that whether it’s a sto­ry, whether its an out­come, or a busi­ness result. Whatever it is that the par­tic­u­lar com­pu­ta­tion­al sys­tem is doing, it’s not doing delib­er­ate­ly, and it’s not doing it in a sin­gu­lar way. It’s a sys­tem that’s been designed to pro­duce many sim­i­lar kinds of out­comes. And this is a kind of weird way of think­ing about behav­ing in the world, espe­cial­ly since we ordi­nar­i­ly think in and talk in specifics. In sto­ries, in exam­ples, in indi­vid­u­als. And that’s also still how we write about every­thing, includ­ing computation.

And you see this when you see computational arts, and you see the aesthetics of computation, if you look at Twitter bots or generative text, or any kind of generative art. You know, the results are terrible when compared with hand-crafted storytelling, or humor on Twitter, what have you. What’s remarkable about them is not their individual utterances or individual effects, but that there is some system producing many of them, and when you look at it holistically you can appreciate it in a different way. And kind of getting that aesthetic ability, right?
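
A tiny generative text bot makes that point about individual utterances versus the system. The sketch below is a word-pair Markov chain trained on one invented sentence; any single output is mediocre, but the generator can keep producing variations, and that capacity, rather than any one utterance, is what there is to appreciate:

```python
# Minimal Markov-chain text generator: learn which word tends to follow
# which, then sample. Any single output is unimpressive; the point is the
# system that can produce endless variations.
import random
from collections import defaultdict

corpus = ("the algorithm is a story we tell about the machine "
          "and the machine is a story we tell about ourselves")

follows = defaultdict(list)
words = corpus.split()
for a, b in zip(words, words[1:]):
    follows[a].append(b)

def babble(start="the", length=12, seed=None):
    random.seed(seed)
    out = [start]
    for _ in range(length - 1):
        nxt = follows.get(out[-1])
        if not nxt:          # dead end: no observed successor
            break
        out.append(random.choice(nxt))
    return " ".join(out)

for i in range(3):
    print(babble(seed=i))    # three different mediocre utterances, one system
```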

I mean, we talk about ethics a lot when it comes to industry and to computing. But we don’t talk about aesthetics enough. Like, one other way into this literacy problem is through aesthetics. Understanding how computers produce results on the artistic register, right. Even if we kind of hate those results, or we can’t recognize them as art, and saying, “Actually, something just like that is happening inside of Facebook or inside of Google.”

Finn: Yeah, I think that notion of aesthetics is really important because I think it’s one of the ways that we can confront very inhuman or very alien ideas and systems, methodologies, without necessarily having the language to articulate what it is, right. Aesthetics can be a non-verbal way of engaging with these questions.

So, I think there’s a con­nec­tion between aes­thet­ics and what you referred to as illu­sion before, as well. And so my ques­tion for you both now is, are the illu­sions nec­es­sary? Or we could talk about it as that kind of faith, and you know maybe it’s a bank­rupt faith or a mis­placed faith. But is that some­thing we have to have? Is that the only way that humans are going to inter­act with these systems?

Bogost: No, it’s start­ing point. It’s the thing you do when you don’t have bet­ter options. And then you real­ize oh, this is insuf­fi­cient. And this is a good start­ing point. And then you rec­og­nize also the intrin­sic flaws of the illu­sion. And you seek more knowl­edge and deep­er under­stand­ing. And then you real­ize this [has] sort of know been demys­ti­cized now.

And you can do this historically. Maybe that’s one concrete example of something we can do. Go back and unpack any historical computing system, and see the bizarre reasons why it was constructed in the ways that it was. What it did. How it had an influence on later systems. Then you’re just, “Oh, okay. This is just like anything else.”

Finn: The Atari, for example.

Bogost: The Atari, for example. Yeah, I’ve written a book on the Atari that tries to do exactly this. So, computing history has a role to play here. And as a kind of very quick aside on that matter, computer science as a discipline is one of the most ahistoric that I know of. Just completely uninterested in history. It’s just barreling forward, right, making that last algorithm slightly more efficient so they can do something slightly different.

Golbeck: Yeah. I think you’re mar­i­onette exam­ple that you gave. I’ve nev­er heard that exam­ple before, but I think it’s so spot on, and gets to all of these issues that we’re talk­ing about. Because if you’re watch­ing this mar­i­onette per­form, that’s one thing that you can see, right. And then if we try to explain it, Oh, if I pull this string, this thing hap­pens,” we can have all of these debates about why does that thing hap­pen? And why isn’t this thing? And can’t you do it this oth­er way?

But that’s dif­fer­ent than the thing that is being pro­duced for you to look at. And which of those con­ver­sa­tions do we want to have? Maybe both. But they’re two real­ly dif­fer­ent con­ver­sa­tions. And I think that’s part of the strug­gle, that as a com­put­er sci­en­tist I always want to talk about both. Look at this amaz­ing thing that you can see that it’s doing. And then also here’s all these crazy things that make that work.

But it’s real­ly two dif­fer­ent sto­ries, and I find it’s hard to say, Here, you pull the string and this hap­pens.” And peo­ple say, But how do you get this big com­plex thing at the end?” And it’s just too com­pli­cat­ed [crosstalk] to tell it all the way through.

Bogost: Because it’s a lot of strings

Golbeck: Yeah, there’s a lot of strings.

Finn: Yeah, I think there’s the sort of unanswerable question about whether it’s really a marionette unless you’re seeing that complexity at the end, right? And that’s the thing that you focus on. Which I think is about aesthetics and kind of notions of performance, or when an algorithm or a system becomes a cultural thing.

We just have a couple of minutes left. So, just to sum up, what would be a couple of practical things that you would suggest if somebody wants to actually understand algorithmic systems better?

Golbeck: Oh gosh, that’s so hard. So, com­ing back to a point that you raised before about algo­rithms as poet­ry or algo­rithms as beau­ti­ful things. I’ve absolute­ly had that thought, that I’ve looked at algo­rithms and I’ve gone, who­ev­er wrote this had this new insight to the prob­lem that I didn’t have. You can learn about algo­rithms with­out hav­ing to learn about com­put­er sci­ence. And so I guess if some­one want­ed to do, that some­one like, I don’t real­ly know any­thing about com­put­er sci­ence. I just want to start get­ting in to see what that is,” that you might start with some kind of basic tuto­ri­als on the Turing Machines. 

You mentioned Alan Turing at the beginning, and he kind of put forward this fundamental notion of all computer science that says you can have a piece of paper and basically a little pencil that can write a one or erase a one, and that can represent all computers everywhere. And you spend a lot of time as an undergraduate doing that. It can get very complicated, but it is an accessible concept. And I think if you spend a couple hours playing around with that and seeing how you can do actually sophisticated math and all kinds of interesting things with this really simple machine, it starts to give you an insight into the process that we use to develop these much more sophisticated algorithms.
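
For anyone who wants to try that, a Turing machine really is small enough to play with in an afternoon. Here is a minimal simulator with a made-up transition table that adds one to a binary number, just to show that “a tape, a head, and a table of rules” is the whole mechanism:

```python
# A minimal Turing machine simulator: one tape, a head, a state, and a
# transition table. This toy machine adds 1 to a binary number; the table
# and state names are invented for illustration.

BLANK = " "

# (state, symbol) -> (symbol to write, head move, next state)
RULES = {
    ("seek_end", "0"): ("0", +1, "seek_end"),   # scan right to the end of the number
    ("seek_end", "1"): ("1", +1, "seek_end"),
    ("seek_end", BLANK): (BLANK, -1, "carry"),  # step back onto the last digit
    ("carry", "1"): ("0", -1, "carry"),         # 1 + carry = 0, keep carrying left
    ("carry", "0"): ("1", 0, "done"),           # 0 + carry = 1, halt
    ("carry", BLANK): ("1", 0, "done"),         # ran off the left edge: new leading 1
}

def run(tape_str, state="seek_end"):
    tape = {i: ch for i, ch in enumerate(tape_str)}
    head = 0
    while state != "done":
        symbol = tape.get(head, BLANK)
        write, move, state = RULES[(state, symbol)]
        tape[head] = write
        head += move
    cells = range(min(tape), max(tape) + 1)
    return "".join(tape[i] for i in cells if tape[i] != BLANK)

print(run("1011"))  # 11 + 1 -> "1100"
print(run("111"))   # 7 + 1  -> "1000"
```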

It won’t help you fig­ure out all of the strings and the train­ing that you need to manip­u­late those strings in the right way to get the pic­ture, but it starts to help you see like okay, these algo­rithms, it’s not this myth­i­cal thing, it’s like a bunch of peo­ple who were beat­ing on this real­ly hard prob­lem, who kind of manip­u­lat­ed into doing the thing. So I think as a start­ing place for learn­ing how the algo­rithms work, it won’t get you into all the com­plex algo­rithms, but it gets you in the space of think­ing about them in the right way. 

Bogost: Yeah, I mean, computing history is what I think we’re both pointing at. If we’re living in this deeply computational age where computers are inside of and running so much of our lives, maybe we should know where they came from.

Finn: Thank you both so much. That was great.

Golbeck: Thank you. 

Further Reference

The Tyranny of Algorithms event page.

