Frank Pasquale: So I'll start with the subtitle of my paper, which is about reputation and search. We heard Tarleton, in the last talk, speak about media studies and the study of the algorithm, and the sense that algorithms are increasingly important in terms of how we are known by the world, and how the world knows us. For me, I try to translate this into the ideas of search and reputation. Essentially the study of search, be it by people like David Stark in sociology, or economists, or others, I tend to see in the tradition of a really rich socio-theoretical literature on the sociology of knowledge. And as a lawyer, I tend to complement that by thinking, if there are problems, maybe we can look to the history of communications law. That's been a lot of my work on Google and other sorts of search engines. 

And the other side is the reputation side. This is the sort of existential experience of being searched for. For that experience, I try to look to norms of due process from law: if our reputation is unfairly constructed, that we have some chance to understand what went on, and some chance to contest it. 

The other thing that I wanted to note here is, where does the algorithm play a role, right? Well, there are all these algorithms, which we'll discuss today in the financial context in my talk, that are about interpreting signals. How do we interpret signals about certain individuals? How do, say, traders at large investment banks, or people who're making decisions for pension funds, interpret various signals sent out by the market or by high-frequency traders? 

And another thought, in terms of reputation: how are reputations constructed based on the various signals that are assembled about us, particularly by the credit bureaus, when processed through credit scoring or other forms of scoring? 

Now, one thing that I noticed as I was reading… And you know, I didn't give them as much attention as I should've, but I was trying to read some of the other conference papers. They're very rich papers. And you see a sort of dialectic building in the conference papers in many contexts. One side says, well, there's this literature out there that says "blame the algos and algo-driven firms," this sort of separate, accountable entity. And when you see something like the Flash Crash, the crash of around 2:30 PM on May 6th, 2010, you would see those sorts of flash crashes as bot traders out of control. 

Now another group of theorists, and I tend to think of Neyland's paper and some others in this area, and Tarleton's, come back and say, "Wait a second. You can't blame the algorithm. The algorithm is just one part of a web of causes; it's situated in a very large context." With our Flash Crash example we could say, alright, there could be all these algorithms that drive high-frequency trading or algo trading or other things. But the only reason they had such disastrous effect is because of human error, and the rules, and, say, regulatory lag. Problems at the SEC: its budget is only a billion dollars, and Mary Schapiro said that their computers are two decades out of date. She said that about three years ago, so hopefully they're catching up. 

But I worry that essentially that might be letting the algo off the hook a bit too easily. Or at least it's not focusing us on where we might more fruitfully focus. So the compromise that I'm going to try to get across in each of the different areas in my paper is that algos are often not to blame, but some causes are more important than others. 

And the metaphor I like to use is something from Nancy Krieger, an epidemiologist. She says there are many situations where there are problems, and other epidemiologists say, well, there's a web of causes. But Krieger always asks, "Where's the spider in the web of causes?" And I'm not trying to say that there's some entity that's actually orchestrating the problems, or the excessive profit-taking, or the instability that we're seeing, or the unfairness that we're seeing in the areas that I'm discussing in my paper. What I am saying is, let's look very carefully at who benefits the most from these systems, and how they keep them in action. 

Another group of metaphors that I think is very rich for this conference is… You know, we're all aware of Don MacKenzie's "an engine, not a camera" sort of metaphor, right, for various aspects of financial modeling and other things. He gets us to say, wait a second: these models that were used in the financial crisis, which were ultimately derived into algorithms, or used and formalized in algorithmic systems, did not merely reflect financial reality but helped create it. 

What I'm going to try to say in this talk is that the camera metaphor was bad, and MacKenzie was right to critique it, especially in articles like [The Credit Crisis as a Problem in the Sociology of Knowledge]. But the engine metaphor is not quite right either, because there is a fundamentally communicative dimension to these algorithms and to the things they're communicating about, say, people's reputations and the value of different entities out there in the financial sector. So I'm going to compare them a little bit more to Photoshop. A version of Photoshop whose picture could be true, or could be false. And particularly a quote from Iris Murdoch, which I just love, where she said, "Man is the creature which paints pictures of itself and then becomes the picture."

So let's start with our first example in the paper, credit scoring and the type of picture that it's creating about people. And so I'll escape for a bit. I just wanted to try to show you all this one commercial. And I don't know if the A/V is hooked up, but let me see if it's actually on. It's not? That's fine. Well, I actually have the audio on my phone, so I can sort of try to sync this. We will see if it works. Are we ready? 

Isn't that heartwarming? Isn't that wonderful, in the age of the algorithm we can still have human concern from these companies? 

Unfortunately for poor Stan, what we find out from consumerreports.org is that creditreport.com is owned by one of the credit bureaus, Experian. The score they sell you is not the FICO score that, as Martha has shown in her terrific, rigorous work, is so important. Rather it's something else that, really…maybe it's relevant, maybe it's not. And they charge you a dollar…that's very charitable. But then the fine print says that after a seven-day trial you'll be charged twenty dollars a month. So, poor Stan.

So coming back to our metaphor of creating reputations, and the web, what we see is essentially a reputation-creation mechanism where at least some very disadvantaged or marginal people are drawn into this discourse, this way of trying to create their own reputation, or trying to take control over it. But in fact, if you went into this, and if you are as financially troubled as Stan, that $204 a year you might end up getting charged might really hurt you. That might be a real problem there. And I'll get a little bit more into this idea of algos. 

Now, let me talk a bit more broadly, because I know the paper's a little lopsided toward credit scores. What I want to do with this slide is just talk about the problems that cut across all these different areas. The troubling aspects of the financial firms' use of algorithms are, I think, at least six-fold; there might be more. 

One is this nature of secrecy, okay. Because the algorithm is secret in, say, credit scoring, people might ask: should I pay off this bill with a payday loan, and pay, I don't know, 80% interest on that payday loan or whatever it is? Or should I not take the payday loan, and take the hit on this bill? Well, a lot of the time you can't really tell which would be better, and that's very troubling. I've talked to people in the Consumer Financial Protection Bureau who I think are troubled by that as well. It's very hard to know. 

Second is that there's a self-fulfilling prophecy aspect to it. If you get a 580 score, that score is essentially communicating non-creditworthiness. But also, when you have a 580 score you may have to pay a 15% interest rate, which in turn feeds into non-creditworthiness, because it's hard to keep up with payments at a 15% interest rate, right. That's really hard. So there's a self-fulfilling prophecy aspect to it.
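To make the loop concrete, here is a minimal sketch of the dynamic described above. Every number in it (the 600 cutoff, the 15% and 6% rates, the affordability rule) is a hypothetical illustration; no bureau's actual formula is public, which is part of the point.

```python
# Toy model of the score -> rate -> score feedback loop.
# All thresholds, rates, and update rules below are invented
# for illustration only.

def offered_rate(score: int) -> float:
    """Lenders quote a higher rate to a lower score (hypothetical tiers)."""
    return 0.15 if score < 600 else 0.06

def update_score(score: int, rate: float, income: float, debt: float) -> int:
    """If interest eats too much income, payments get missed and the
    score falls, which closes the loop described in the talk."""
    if debt * rate > 0.3 * income:  # interest burden is unaffordable
        return score - 40           # missed payments drag the score down
    return score + 10               # on-time payments nudge it up

score = 580
for _ in range(5):
    score = update_score(score, offered_rate(score), income=30_000, debt=80_000)

print(score)  # the 580 borrower only sinks further
```

Run forward, the hypothetical borrower's score ratchets downward: the rate assigned because of the score becomes the reason the score keeps falling.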

Another aspect is high-volume processing. The credit score, again pointing to Martha's work, came up in the context of a lot of these securitizations. It comes up in the desire among a lot of lenders and others to make a lot of loans all at once. And we have to wonder who exactly is benefiting from that.

A fourth aspect is this government/firm symbiosis, okay. It's not as if this is a creature of the market that is then regulated from the outside. The regulation of the credit scoring system is so pervasive, under things like the Equal Credit Opportunity Act and other aspects of regulation here, that there's something like what we see in the financial sector generally, in a lot of the "too big to fail" discourse: a real intertwining between government and the private sector. This is something I find very usefully described by Charles Lindblom, in his work on political economy, as the concept of circularity. It's more colorfully evoked by Hanna Pitkin's book on Arendt as "the Blob." There's this blobbish aspect of revolving-door regulators, and financial firms, and rules that are meant to create a patina of regularity and fairness but that ultimately do very little, I find, in each of these areas. 

Another example is overconfidence, because the data underlying all of this is often very bad. I cite in the paper a report by Steve Kroft at 60 Minutes, where he talks to a number of people who worked at the credit firms handling calls from people who wanted to challenge aspects of the data that was put into their score. Many of them simply said the creditor was always right. That was the nature of the decisionmaking process. It was a Potemkin process, you know. And this is very troubling. I've never seen any convincing retort by FICO or by the credit bureaus to reports like that. Mike DeWine at the Ohio Attorney General's office is also working on this. I cite lots of things like the report Discrediting America in the paper. 

And I worry, ultimately… You know, this question of self-reference comes back to self-fulfilling prophecy; it's a more distinctive concept in the other areas. But I really worry about that as well. 

Now, what about these six aspects in another context, say mortgage-backed securities and securitization? You might've seen the NPR reporter who got fired for holding a sign which said, "It's wrong to create a mortgage-backed security filled with loans you know are going to fail so that you can sell it to a client who isn't aware that you sabotaged it by picking the misleadingly-rated loans." And the question, I think, for our purposes is: do the people, the quants in these firms, have any independent power to, say, stand against it if they see a situation like this materializing? Are they too isolated to even recognize what they're doing, what it's being used for? I know that in Jonathan Zittrain's work, when he looks at Amazon's Mechanical Turk, he worries about situations where you could have all these Turkers, the Mechanical Turk workers distributed under Amazon's labor distribution system, where each particular task they do, they might think, "Oh, this is great." But then it turns out that as the tasks are aggregated, they do something really bad. And that's the question I think we often have to ask about people who are involved, say, as quants or as developers of algorithms at these firms. Do they really understand what's going on? Can they? 

When we talk about those six problematic aspects of algorithms here, again there's a lot of secrecy. Now, in terms of these mortgage-backed securities, it wasn't necessarily the nature of the securitizations themselves; that was more about complexity. What people didn't understand was that in many cases they only made sense because of what are called credit default swaps. And these credit default swaps were an utterly opaque market. Nobody really understood what was going on there, I feel. Or at least you can say that the regulators, the people in charge, did not really understand what was going on. There's also an account of AIG, which wrote many of the credit default swaps, in a book called Fatal Risk, which talks about the push and pull between AIG and its accountants and other people in terms of actually understanding what was going on. But that secrecy was very problematic, and that is what let a lot of self-dealing flourish in the years running up to the financial crisis.

It was also about high-volume processing. It was essentially about trying to make as many loans as possible so that there could be as big a cut of fees as possible. There is a second-hand rationalization, which would be "we've got to get as many people into houses as possible." But really? I mean, I don't know, it doesn't sound terribly convincing to me. We can get into that a bit later on. 

And also there's overconfidence in the face of bad data. I think the best example of that might be the credit rating agencies. We had a lawsuit against S&P where essentially there was a complaint about the S&P rating practices. And by the way, S&P always said about their rating practices, essentially, "We are objective. We are independent. You know, don't worry that we're paid by the people who are selling the securities. We are objective and independent." 

But then, as the lawsuit comes out, and as we learn more and more about internal communications, we get quotes like, "It could be structured by cows and we'll rate it." We have people within the firm who sing "Bringing Down the House" to the tune of "Burning Down the House," making fun of what was going on. I mean, it just seems as though this sort of thing is really problematic. And then, when they're later called on this, and I think this is a point that Mordecai made in his comments or question earlier, when they're called on this and sued over liability, they've actually put out an answer that says, "When we said it was objective, that was puffery." 

And puffery is a legal concept where, essentially, let's say I want to sell you a car and I say, "Oh, it's the most awesome car ever." And then it breaks down after three weeks. You probably can't sue me for saying it's the most awesome car ever, because it's very hard to really prove that that opinion was wrong. So that's the worry I have. It also gets back to Tarleton's point about the Google DNA, when they say "we are objective and pure" and all the other assurances about what they believe in. "This I believe," you know, that sort of stuff. We worry about whether that can be cheap talk, right. And a lot of this is a problem of cheap talk. The signals sent out, are they cheap? Because essentially any liability for them could be deflected by saying, oh, they were merely an opinion. That's what S&P is trying to do now. They've got some of the best First Amendment lawyers in the world pushing their case. We'll see how it works. They may well be able to do that. 

There are other levels of concern here about models, and the nature of models, and modern securitization, and modern accounting, and finance. One sort of three-fold set of problems here is, first, that often the modelers may not know the history. You may bring in a whole bunch of people who are, say, brilliant physicists, brilliant mathematicians. And then you say, "Create some models of the housing market based on this 1995-to-2005 data on housing prices." They might well tell you, "Wow, prices only seem to go up," you know. But the question is, is that really a good thing to base your model on? A lot of the modelers may well have gone into, say, physics; they may not know about the Great Depression, and they may not have particular interest in where housing prices have gone in the past. 

Another thing is that the managers don't understand the modelers' methods. So you can have a situation where… I'm sure anyone who's an attorney in the room has had the experience of a partner telling you, "Hey, we really need this certain result in this memo." And maybe you find a lot of cases on one side and a few on yours. But you know, it's a hard dilemma, right? 

And this could be the same thing for a lot of the modelers involved. They could face the same dilemma, where essentially it's understood what they have to come up with. And so if you have plausible deniability among the managers, where they can say, "Oh, I had no idea that there was something problematic with the models they had," that's really a problem. I think you can understand a lot of corporate organization and corporate structure as ways of creating corporate veils, plausible deniability, and other things like that. And the role of particularly complex algorithms is to create this patina of respectability, of mathematical rigor, over what may well be a process that's overdetermined to simply line the pockets of insiders and people at the top of the firm.

Another example is that there can be these daisy chains of value, okay. Ideally, when value is created, we'd like to believe that it reflects something real about the world. But in fact, as I show in the parts of the paper about securitizations, what might end up happening is that certain captive entities are orchestrated to buy parts of different securitizations. And once those captive entities buy them, it seems, oh wow, people are buying. That creates, again, a type of self-fulfilling prophecy, where everyone says, oh, that's great, let's all jump on this bandwagon. But only a few people on the inside know what's really happening. And again, the models can be used as ways of rationalizing that. 

And I feel this reaches a bit of a reductio ad absurdum in algorithmic trading, particularly certain types of high-frequency trading that I focus on in the paper. The metaphor I try to use is this concept of war and finance. I have a quote in the paper from Milton Friedman, where Friedman says he was a ballistics expert, and he felt that often, in the realm of ballistics in World War II, you had to decide: did you want to have one big bomb on one target where you knew people were? Or did you want to have, say, 300 little bombs, dividing your power over all these different areas and having little impacts? 

And this idea of the flexibility and malleability of force comes in, in, say, the algorithmic high-frequency trading area. You can go for, say, one big bet on something. Or you can have these methods where, to give one example, someone could place an order for ten thousand shares of a stock and then immediately cancel 9,999 of them. Then jump back in. And the question is what they're trying to do. Part of the strategy there, to put it very simply, is that you're trying to create an illusion of a certain amount of value there. And when people are fooled…maybe they followed after you, but then you can come in behind and take advantage of arbitrage, this temporary fluctuation of values created by your attack strategy.
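The place-then-cancel pattern described above can be sketched with a toy order book. This is only an illustration of the displayed-liquidity illusion, not a model of any real exchange's matching engine; the class, prices, and sizes are all invented.

```python
# Toy order book: tracks only displayed bid size, which is all a
# naive observer sees. Prices and sizes are invented for illustration.

class ToyBook:
    def __init__(self):
        self.bids = {}  # price -> displayed share count

    def place(self, price: float, size: int) -> None:
        self.bids[price] = self.bids.get(price, 0) + size

    def cancel(self, price: float, size: int) -> None:
        self.bids[price] -= size
        if self.bids[price] <= 0:
            del self.bids[price]

    def displayed_depth(self) -> int:
        """Total shares shown as buying interest."""
        return sum(self.bids.values())

book = ToyBook()
book.place(10.00, 10_000)        # flash a big bid: looks like real demand
seen = book.displayed_depth()    # what followers react to
book.cancel(10.00, 9_999)        # withdraw almost all of it moments later
left = book.displayed_depth()    # what was ever actually at risk

print(seen, left)
```

Observers reacting to `seen` are responding to 10,000 shares of apparent interest, while the trader only ever risked the one share remaining in `left`; the gap between the two is the "illusion of value" the talk describes.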

So, this is a real problem. And the question might become, well, what's the big deal, they're playing this game. It's Ender's Game in the finance markets or something, you know. BattleBots, as [Nanak] sometimes analogizes them to. But the problem is that the game… This gets back to the Blob idea of government regulator and regulated entity circularity. The game itself creates tons of money for lots of the folks in it. For example, [Thomas] Peterffy, who was one of the leading people behind these sorts of systems. He was a pioneer to the point where he was told by the SEC that human beings had to enter the orders, that the orders had to be typed. So he developed mechanical thumbs and fingers to type in the orders. He was this sort of real pioneer. 

And you know, people like him, who have billions of dollars, can put up ads where they say, "I grew up in a socialist country, and that's why I'm voting Republican," and put this out on television. 

Now, if you look at the results of a lot of what the finance sector does, and the algorithms in it, it enriches a lot of folks who essentially want to say to you, "Hey. If you're looking for value and security, don't look for it in government. Look for it in our algorithms and in our sophisticated methods." That may be puffery. 

But really, that is the problem that I see in a lot of these areas. And so, my agenda for reform, which comes toward the end of the paper: I get into this idea that essentially you've got a group of people who are trying to socially construct an expertise around what they do. And the algorithm plays a very important role in that social construction of expertise. 

And I would say that my agenda for reform, to go to the reputation side because I know I'm running out of time, is… This is a very mild reform. I just want to see experimentalism for open and contestable reputation creation, okay. Given that there's this Blob issue of circularity, where the government is so involved in the credit markets, why doesn't the government say, "Hey, let's have an open source credit scoring system that we're going to use in some situations"? I mean, the conservative Jim Manzi talks about the importance of experimentalism. Let's experiment. 

And people are going to say, "Oh, they'll game the model." But first of all, I kind of like Cathy O'Neil's point. She blogs at Mathbabe, and she says, let them game the model, you know. Let them game it, because it's essentially that important to us. And she gives a lot of rationale there. Just mandate it in 1% of the contexts.
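What would an open, contestable score even look like? A minimal sketch, under entirely invented weights and features (nothing here resembles any bureau's actual, proprietary formula): publish the coefficients, and give every borrower a per-feature breakdown so a disputed input maps directly to a score change.

```python
# Sketch of an "open source" score: the formula is published, so anyone
# can recompute their score and contest a specific input. The features
# and weights below are hypothetical, chosen only for illustration.

WEIGHTS = {
    "on_time_payment_rate": 300,   # fraction of payments made on time
    "credit_utilization":  -150,   # fraction of available credit used
    "years_of_history":      10,   # length of credit history
}
BASE = 400

def open_score(record: dict) -> int:
    """Base plus a published linear combination of the features."""
    return round(BASE + sum(w * record.get(k, 0.0) for k, w in WEIGHTS.items()))

def explain(record: dict) -> dict:
    """Per-feature contribution, so a dispute over one input has a
    transparent, recomputable effect on the total."""
    return {k: round(w * record.get(k, 0.0)) for k, w in WEIGHTS.items()}

borrower = {"on_time_payment_rate": 0.9, "credit_utilization": 0.6, "years_of_history": 8}
print(open_score(borrower))
print(explain(borrower))
```

If the borrower successfully contests, say, the utilization figure, the breakdown from `explain` shows exactly how many points come back, which is the kind of due-process contestability the talk argues for.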

The other point, and here's a much more controversial one, in terms of value in finance and all these securitizations and other things, and the HFT that I talk about in the paper: we've got to go back to the choices FDR was facing in the 1930s, when we had the last massive crash. The path he took was disclosure, okay. He looked to Brandeis and he said, "We've got to disclose. We've got to essentially make sure everyone can understand what's going on in the finance sector."

The problem is, the more I look at this problem…and you know, Dodd-Frank was based on this. The more you look at this, the more you feel like it can be defeated so easily. It's so easy to create complexity. We see, even in the past couple of weeks in the House Financial Services Committee, a coalition of all the Republicans and a number of Wall Street Democrats banding together to get rid of certain aspects of Dodd-Frank and to blow holes in it, you know. And it's just so hard to understand. Henry Hu's article "Too Complex to Depict?" is particularly good on this topic. 

There were other folks, like Moley, Berle, and Tugwell, who said we've got to actually correct the misallocation of capital. 

And if I had to offer one positive vision for the role of algorithms and mathematics in finance, I would look at Richard Sandor's book Good Derivatives. He's a very esteemed person in this field. He's talked about the use of derivatives to combat climate change, etc., in the context of climate exchanges. He's somebody who shows that, essentially…let's stop blaming the algos. Let's create some public purpose behind them, so that they can be utilized for better ends than enhancing bonuses for top managers. Thank you.

Further Reference

The Governing Algorithms conference site with full schedule and downloadable discussion papers.