Kevin Bankston: I am not going to introduce these fine folk. They’re going to introduce themselves, and what I’m asking them to do is introduce themselves in the usual way while also answering the question of what are the key questions around AI that’re coming up in your field as you characterize your field, or put another way, what do you find yourself usually talking about when you are talking about AI. So, let’s start with you, Rumman.
Rumman Chowdhury: My name is Dr. Rumman Chowdhury. I lead Responsible AI, which is our ethics in AI arm of Accenture, a massive global consulting firm. So, I actually talk about a lot of things when I talk about artificial intelligence, but specifically how it impacts communities, things like bias, discrimination, fairness. But also how do we get the right kinds of narratives to build our tools and products as companies start to actually implement and enact artificial intelligence. When I say “companies” the interesting thing is I’m not actually talking about the Elon Musks and Jeff Bezoses of the world. I’m talking about like, Nestlé, and Unilever, and Coca-Cola. So as the companies that are already in our daily lives adopt artificial intelligence, what does that mean and how do we do it responsibly?
Miranda Bogen: My name’s Miranda Bogen. I’m a Senior Policy Analyst at Upturn, which is a nonprofit based here in DC that promotes equity and justice in the design, use, and governance of digital technology. And what that means is that we’re looking at two main areas: economic opportunity and criminal justice. And so when we think about AI, what we’re often thinking about is scoring people; you know, how are people finding jobs, credit, housing; how are people being rated on their risk of going on to commit a crime, things like that. And while it often is talked about in terms of AI, very rarely is what we’re actually seeing AI. We’re still at very early stages of, like, statistics. But at the same time, using the frame of AI has gotten a whole range of new people interested in these sort of legacy issue areas of civil rights who weren’t interested before, both because it’s kind of a sexy new thing, but also because there’s a new opportunity to make change, so that maybe we can break out of some of the policy patterns that we’ve fallen into in the past.
Elana Zeide: I’m Elana Zeide. I’m a PULSE Fellow in AI, Law, and Policy at UCLA’s School of Law. And there I also study automated decision-making, looking at it mostly in the realm of education technology and lifelong learning. So, I’m looking at scoring systems that are supposed to be scoring systems, and how they structure human value and human capital, and affect human development. And also in that vein, things like efficiency, productivity, access to opportunity.
Lindsey Sheppard: Hi, and I’m Lindsey Sheppard. I’m an Associate Fellow at the Center for Strategic & International Studies. We’re a defense and security think tank here in DC, focused primarily on emerging technology, national security, and defense issues. So in my work on artificial intelligence, we’re primarily thinking about how do we dispel the myths? How do we set expectations? What does reasonable use, and what does use, actually look like? And then how do you actually go about the process of bringing these technologies into our defense and intelligence structures? So I would say the big macro question that we focus on is not the algorithms themselves; we look at the underdeveloped ecosystem surrounding the algorithms. How do you bring in the right workforce? How do you train your workforce? How do you get the computing infrastructure and networking infrastructure? And how do you have that top-level policy guidance to actually bring this technology in to support US values and interests?
Bankston: Great. So, we’re talking a lot about algorithmic decision-making. Or we could also characterize that as narrow, or artificial narrow intelligence, as opposed to, say, artificial general intelligence like Skynet or most things you see in science fiction. We’re talking about algorithms that’re trained on sets of big data—remember when we used to say “big data” all the time? We don’t say that anymore, we say AI. But often that data can reflect biases in our real society, or can be biased data sets, which leads to issues of algorithmic fairness. Which is at the center of your work, Elana, in many ways, so I was wondering if you could start by talking about the top issues around algorithmic bias as they apply to human potential generally?
Zeide: Sure. So, there are many ways bias can creep into algorithms. It can come in from the data itself—historical data that reflects patterns of inequity. It can trickle into the models that are then used to judge people. And it can trickle in, in terms of what I talk about on a day-to-day basis, into the technologies that are then used to determine where people should be in life. What level they should be at in school. We’re looking increasingly at the idea of completely automated personalized teaching systems, so what you should learn, what level you should be at. And recommendation systems. Where should you go to college? What should your major be? What should your professional development be?
And then it moves into the hiring realm. So, in this way you get…because you’re using predictive analytics, you’re really replicating existing patterns. And the question is do we want to do that in human development and in places where opportunity at least is the rhetoric that we use?
Bankston: Miranda, you often address these issues in the context…well, on a variety of issues, but especially in the context of criminal justice. Can you talk a bit about that?
Bogen: Yeah. I think that’s another place where some of the sci-fi tropes honestly I think have inspired what we’re seeing in criminal justice. You know, RoboCop type of things.
Bankston: Minority Report.
Bogen: Yeah, Minority Report certainly.
Zeide: Yeah, I forgot to do that.
Bankston: We’ll talk more about sci-fi. [crosstalk] You don’t have to introduce it to every point.
Bogen: Yeah. But just to plant the seeds. But I do think that that has motivated things, because we see body-worn cameras. We see the vendors who are building those cameras thinking about how to incorporate facial recognition, which I’d say is one of the closest things to AI that I actually see on a day-to-day basis. It’s enormous amounts of data that, you know, a human mind maybe can’t draw connections to, but with enough data you can. Theoretically, if it’s accurate.
The other thing we’re seeing in the criminal justice system is deciding who can be released on bail or not. Where police should be deployed. You know, and sometimes this is justified as making the system more “fair,” with the idea that if we’re relying on data we’re kicking out human biases. In other cases it’s that there’s a limitation in resources, and so by using data we can more efficiently deploy resources. But I think it’s the exact same problem as in the opportunity space, which I kind of straddle. All of that data, especially in the criminal justice system and especially in the US, is so tainted with our own history. You know, if you’re looking at where police ought to go based on where they’ve gone in the past, where did they go in the past? Where they thought crime was going to be, which was based on their stereotypes of which neighborhoods were gonna be bad neighborhoods. So—and I think a lot of the technologists building these tools either pretend or truly believe that there’s a ground truth out there that they can just vacuum up and turn into a predictive model—if we rely on that data as if it’s reality, we’re going to again be not only replicating the past but entrenching it. Because if it ends up in these systems, and then the systems get more complicated and we get closer to what we think of when we say AI, it’ll be harder and harder for us to actually change that in the future. And I think that’s one of the big risks when we’re talking about bias kind of creeping in.
Bankston: Yeah, so like over-policing of communities of color, for example, that data then feeds into these processes that results in more over-policing of communities of color, [crosstalk] and on and on and on.
Bogen: Exactly. And it just augments because it says you know, “Go back to this neighborhood. Oh. There was more police activity in this neighborhood last week, clearly that means there was more crime.” It doesn’t, but that’s what the system can kind of interpret. That’s the only data it has, and so then more police will go back the following week. And they never collect data on those neighborhoods where they didn’t go and so there could be crime happening in other neighborhoods. There could be reason to be dealing with the community out there, but if they’re relying on data that’s steering them in a certain direction you get into a feedback loop that prevents the system from ever learning that there are other examples.
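A minimal sketch of the feedback loop Bogen is describing, in Python. The neighborhood names, crime rates, and dispatch rule below are invented for illustration, not taken from any real predictive policing product: patrols are sent wherever the data shows the most recorded incidents, incidents are only recorded where patrols go, and the gap between neighborhoods grows on its own even though the underlying crime rates are nearly identical.

```python
import random

# Hypothetical, nearly equal underlying crime rates for two neighborhoods.
TRUE_CRIME_RATE = {"Neighborhood A": 0.30, "Neighborhood B": 0.28}

# Recorded incidents, seeded with slightly more history for A
# (say, because it was patrolled more heavily in the past).
recorded = {"Neighborhood A": 12, "Neighborhood B": 8}

random.seed(0)

for _ in range(20):  # twenty weeks of "data-driven" dispatch
    # Dispatch rule: patrol wherever the data shows the most incidents.
    patrolled = max(recorded, key=recorded.get)

    # Crime only enters the dataset where officers are present to record it.
    if random.random() < TRUE_CRIME_RATE[patrolled]:
        recorded[patrolled] += 1

    # Crime in the un-patrolled neighborhood goes unrecorded entirely,
    # so the model never gets the data that could correct it.

print(recorded)
```

Run repeatedly, the recorded gap between A and B only widens: the system keeps "confirming" its own past dispatch decisions.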
Bankston: I’m just glad to hear from you, because I know you have opinions in this area.
Chowdhury: I have many thoughts. I always have thoughts. So just to frame what my two colleagues just talked about from a bias perspective… As a data scien—so I’m a data scientist by background; I’m also a social scientist by background. So when I give this talk in, let’s say, Silicon Valley, I highlight the fact that when we talk about bias there’s actually a lost-in-translation moment that happens.
When data scientists talk about bias, we talk about quantifiable bias that is a result of, let’s say, incomplete or incorrect data. This could be a measurement bias. This could be maybe a design bias, or a collection bias—so if you’ve ever, like, taken a survey: if you ask people whether or not they voted in the last election, there’s some incorrectness to it. And data scientists love living in that world—it’s very comfortable. Why? Because once it’s quantified, if you can point out the error, you just fix the error. You put more black faces in your facial recognition technology. What this does not ask is: should you have built the facial recognition technology in the first place?
So when non-data scientists talk about bias we talk about isms: racism, sexism, etc. So interestingly, we’ll have this moment where data scientists will say, “You can’t get rid of bias,” and what they actually mean is when we build models, it is literally like an airplane model. It is a representation of the real world. It will never be perfec—and actually it should not be perfect. That’s what a data scientist means.
What a lay person hears is, “I am not going to bother to get rid of the isms.” So that is a conversation that my group tries to bridge. So when we build things like Accenture’s fairness tool, etc., to the point of my colleagues, there’s a context to it that’s absolutely critical and important. And it is bridging that lexicon between what we mean in society and what we mean quantitatively that’s absolutely critical.
Bankston: So, y’all have mentioned facial recognition, which is a type of artificial intelligence or applied machine learning-based technology. That has been a very hot policy topic, not only for privacy reasons but because of…ism reasons. Anyone want to talk about what the state of the debate is there and what people are talking about when they’re talking about bias in facial recognition?
Chowdhury: Sure. I mean, I can kick it off, yeah. Well it’s been an evolving narrative. I think the initial narrative was about well…and this is Joy Buolamwini and Timnit Gebru’s work about there’s not enough diversity in these data sets, so what Gender Shades showed was that face recognition is about 98% accurate for white men, only about 60-something percent accurate for darker-skinned African American women. Clearly showing this gap which was a function of like, lack of diversity in the data set.
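A rough sketch of the kind of disaggregated evaluation behind numbers like those, in Python. The audit records and group labels below are made up for the example, not drawn from Gender Shades itself: the point is simply that a single overall accuracy number hides the gap that per-group accuracy exposes.

```python
from collections import defaultdict

# Hypothetical audit records: (demographic_group, predicted_label, true_label)
results = [
    ("lighter-skinned men",   "male",   "male"),
    ("lighter-skinned men",   "male",   "male"),
    ("lighter-skinned men",   "male",   "male"),
    ("darker-skinned women",  "female", "female"),
    ("darker-skinned women",  "male",   "female"),  # misclassification
    ("darker-skinned women",  "male",   "female"),  # misclassification
]

correct = defaultdict(int)
total = defaultdict(int)

for group, predicted, actual in results:
    total[group] += 1
    if predicted == actual:
        correct[group] += 1

# One aggregate number looks acceptable; the per-group breakdown does not.
overall = sum(correct.values()) / sum(total.values())
print(f"overall accuracy: {overall:.0%}")
for group in total:
    print(f"{group}: {correct[group] / total[group]:.0%}")
```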
The narrative now is more about application. And again, creating a more diverse data set so that police can then go harass minority children is not necessarily where the AI ethics space wants to be going. So there’s actually a number of bills about banning facial recognition. I think one of the most prominent debates was in the state of Washington. There’s actually a bill in Oakland, San Francisco, and other…you know, I’m not going to be able to laundry-list all of them.
So to your point, Kevin, it has been the issue that from a legislative perspective but also from a human psyche perspective we’ve latched onto the most. And I think because it’s related to these sci-fi narratives that we’re so famil—like, we all know the story of Minority Report, so it’s much easier as a person who works in the AI ethics space to be able to talk that talk. I don’t have to explain what facial recognition is. People may not necessarily know how it works, and there’s a lot of gaps to fill about actually how inaccurate it is in general. But, people will understand the general narrative enough to know where the problems may come from. And this is where you know, having this common watched science fiction lexicon is quite helpful.
Bogen: But I think, you know, facial recognition is not just a problem in the criminal justice context. That’s the most frequent one we hear about. But facial recognition and facial analysis are both popping up in so many other contexts. There are tools out there being used to help interview people that use facial analysis to try to determine whether people are qualified for a position. And the people building those tools are doing interesting things to test for fairness…but does that justify the collection of your face to try and map it onto this thing that shouldn’t necessarily have to do with how your face moves?
Chowdhury: And to your point it’s like building on— It’s not just face—it’s also the field of affective computing, which essentially puts all of human emotion into about six buckets? So everything about who we are and what we feel falls into like…six buckets.
Bogen: Which I think the most recent research was showing that black men are more likely to be read as angry with a neutral face—
Chowdhury: Right.
Bogen: —than white faces, so. [Bankston sighs loudly] We’re really…pretty far behind any really good use at this time.
Chowdhury: But to your point, like, that’s being used to make hiring decisions. So while we can latch onto this narrative of, like, we understand the Minority Report catching-criminals thing—oh, that might be bad…there are all these ways it’s creeping into our daily lives. And the thing is, from a business perspective, it’s always sold as this efficiency gain. It is a product you sell to help people do their job. And the reason why it often goes under the radar is that it’s sold as a tech deployment. So it is not something that has to go under…like, has to be reviewed by a city council, or, you know, these different groups. If you were to try to sell a team of people to monitor and predict policing, that may actually have to undergo, like, city council review, etc. If I sell you a tech deployment, I am actually under vendor licenses. I may not actually have to go through the same channels, and this is where things are sort of being deployed and we find out later and are like, what the heck, how come nobody knew?
Bankston: So we are entering a phase where we don’t have a bunch of crazy robots or megaintelligences wandering around, but we do have this mesh of algorithms in the background of our lives, doing things. Often shaping what we see online, which, Miranda, was the subject of some research you did. Could you briefly talk about that? And then we’ll move on to some other issues.
Bogen: Sure. So, a lot of people have heard potentially of the controversy around employers maliciously or in a discriminatory manner targeting ads online for housing, for jobs, for credit—saying “don’t show this job, or housing, to black people.” That’s a big problem. There’ve been lots of collaborations, lots of meetings, lots of lawsuits about dealing with that.
What we were looking at was what’s going on in the background? So let’s say I was running an ad for a job. And I really wanted to reach everyone. I wanted anyone to have the opportunity to work for my organization. So I post my ad online. On Facebook was where we had tested it. And I said you know, “send it out.”
And what we found was, when we did that, we said anyone in the United States could see this, or anyone in Georgia—North Carolina, I’m sorry. But what we found was that the algorithm that was deciding who sees what ad was making its own determinations about who should see which job, who should see which housing opportunities. I think we found that lumberjack jobs were being shown to 90% white men. Taxi jobs, on the other hand, were being shown to about 70% African American users. And this was without us telling the system who we wanted to see it. We were trying not to discriminate.
But the system was learning from past behavior of users what they were most likely to engage in. What they were most likely to click on. What people like them were most likely to engage in or click on. And it was using that to show those people what it thought they wanted to see, what was going to be most interesting to them, or what they were most likely to click on.
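A heavily simplified sketch of that delivery dynamic, in Python. The ads, user groups, and click-through rates below are invented for illustration, and this is not Facebook's actual delivery code: the point is that even when both ads target everyone, ranking by engagement learned from past behavior routes different ads to different groups.

```python
# Hypothetical click-through rates the delivery system has learned from past
# behavior, keyed by (ad, user_group). These encode history, not advertiser intent.
learned_ctr = {
    ("lumberjack job", "group_1"): 0.040,
    ("lumberjack job", "group_2"): 0.005,
    ("taxi job",       "group_1"): 0.006,
    ("taxi job",       "group_2"): 0.030,
}

def deliver(ad_candidates, user_group):
    """Show the ad with the highest predicted engagement for this user's group."""
    return max(ad_candidates, key=lambda ad: learned_ctr[(ad, user_group)])

ads = ["lumberjack job", "taxi job"]

# Both ads were "targeted" at everyone, yet delivery diverges by group.
for group in ("group_1", "group_2"):
    print(group, "->", deliver(ads, group))
```

Nothing in the advertiser's targeting asks for this split; the optimization objective alone produces it.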
So we were looking at it in terms of ads, in terms of jobs and housing, but you know, this has come up in the past as well with like filter bubbles. Are we only seeing news that we want to read because algorithms are deciding that that’s what we’re most interested in and so we should see more of that? And I think that is similar to facial recognition. When we’re talking about “AI,” that’s a use case where we’re talking about hundreds of thousands of pieces of data that are going into deciding what should be shown to you when on Facebook or on Google.
And that’s the closest to AI that I get, compared to, say, criminal justice contexts like pre-trial risk assessment or who could be released on bail. When people say “AI in the courtroom is going to decide who’s released on bail,” often what they’re talking about is, like, a numerical model that’s scoring people on a scale of one to six. Which is not really super highly complex math. But these other sort of online systems that are learning from people as they interact with information are closer to that. And it’s really shaping what opportunities people have access to—exactly what you were talking about.
Zeide: Yeah. And following that, I often think of my job as scaring people. And then hopefully making them act on the basis of that fear? And what you were saying in terms of these scoring systems, they’re in the background. They’re not often visible in the way it would be in like a criminal justice system, an explicit decision-making mode. And so I often use sci-fi as my references, to sort of help people understand. “Nosedive,” from Black Mirror is the one that seems to chime with people the most. But Minority Report… Gattaca, even, sort of in previews. Brave New World.
I say these things and people grasp the weight of what I’m talking about in a way that is different than if you just talk about what seems like an administrative tool. And it is often acquired, you know, as an administrative tool.
Bogen: And I think anytime you hear the word “personalized…” This is a personalized job board. It’s a personalized news service… what I hear is “stereotype.” It doesn’t know you, it knows what type of person you look like.
Bankston: In the realm of the content we see, there’s also emerging AI that is going to be used to deceive us in a variety of ways. We’ve now seen deepfakes, which is basically using AI to create a video of someone saying something they never [said]. There was also this amazing thing, if you didn’t see it. OpenAI, which is the AI group that Elon Musk amongst others founded, came up with an algorithm called [GPT‑2] that was trained on 40 gigabytes of Internet text to predict the next word if you gave it some text.
And so then they started feeding headlines into this thing to see if it could write a news story. And my favorite one was they wrote a headline about scientists discovering a tribe of unicorns in the Andes that spoke English. And it wrote…something that read like a human wrote it. And so just imagine armies of these things just spewing out propaganda BS.
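For readers who want to see what that looks like in practice, a minimal sketch of prompted text generation with the publicly released GPT-2 weights, assuming the open-source Hugging Face transformers library is installed; this is not the code or exact prompt OpenAI used, just one common way to run the released model.

```python
# pip install transformers torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# A headline-like prompt, paraphrasing the unicorn example mentioned above.
prompt = ("In a shocking finding, scientists discovered a herd of unicorns "
          "living in a remote valley in the Andes Mountains.")
inputs = tokenizer(prompt, return_tensors="pt")

# The model predicts one next token at a time; sampling repeatedly
# continues the prompt into paragraphs of plausible-sounding prose.
output_ids = model.generate(
    **inputs,
    max_new_tokens=80,
    do_sample=True,                        # sample rather than always take the top word
    top_k=50,                              # restrict sampling to the 50 likeliest tokens
    pad_token_id=tokenizer.eos_token_id,   # silence the padding warning for GPT-2
)

print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

All the model ever does is predict the next token given the text so far; run in a loop, that is enough to turn a headline into a "news story."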
Which gets us closer to the realm of geopolitical conflict, which is Lindsey’s bag. And so I’m wondering if you could talk a bit about the role that AI is starting to play in the realm of international conflict and international sort of geopolitics.
Sheppard: Absolutely. So, this is a great example that illustrates the broader trend that artificial intelligence is living in. We are at a time where we have the democratization of software and the commoditization of key priority technologies. This means that more people, more countries, more non-state actors now have access to highly capable, diverse, robust technology portfolios than they ever did before. And we, the US, are quite used to being that capability provider, and increasingly other countries, other actors, don’t have to work with us, because of this global trend of easily accessed, highly capable, low-cost capability.
And so that really brings us back to this question of: is there an AI arms race? And it’s often framed in the context of are we winning versus China? How are we doing? Are we falling behind? What is going on? And you have to understand the way in which entities apply artificial intelligence or data analytics. You apply them to achieve your goals and accomplish your needs and support your values. So the way in which, for example, China applies AI and facial recognition—with the abhorrent human rights abuses—should not and will not look like the way that the US applies AI. Because those fundamental value structures are different.
So when we think about—
Bankston: Knock on wood.
Sheppard: Yeah. Well I mean it—it has been a little depressing but I tell myself those fundamental value structures are different.
So when we think about who is going to win the race, the race is going to be won by the countries that figure out how do we make AI work for us? How do we use AI and data-driven techniques, and this new portfolio of highly-capable, easily-accessed technology work for us? And that’s going to be the country or those entities that win the race.
If we want to really pick apart how we’re doing versus China: we’re still leading the way in research and development and innovation within the United States. And I think there is a certain emulation of our model that permeates across the globe. But we’re really falling behind on the deployment. And that’s where a lot of the narrative comes from—we’re falling behind China, we’re falling behind these authoritarian regimes that’re figuring out how to make AI work for them. We’re not thinking well about how do we actually take the technology, lead in research, development, and innovation, and deploy it in ways that support our ethical and normative values. And so I think conversations like this are about thinking of this as a highly capable system: how do we make it work for us?
Bankston: So I’m glad you brought up ethics. We’re gonna spend the next few minutes talking about—now that we’ve set out some of the issues—what’re the sort of policy interventions we’re seeing? And I’d say we’re seeing sort of self-regulation to some extent, usually under the frame of AI ethics or AI fairness, and then some interesting legislative and regulatory moves.
But Rumman, you do AI ethics… What the hell are we talking about, when we’re talking about AI ethics?
Chowdhury: Yeah, so I actually have a lot of thoughts on the statement you made. So first, I actually have serious problems with framing it as the AI arms race. Number one, if we’re going to talk about the inclusion of diverse narratives, framing everything in terms of a war-like, patriarchal structure of a zero-sum game is literally the worst way, and the least inclusive way, to talk about the use of a technology. By naming it that way, we set it up to be, A, combative; and B, about some “winner” and “leader,” which sets up the hero narrative that we were just talking about as problematic. So even in that name, we have set this up to be patriarchal and war-like. So I actually don’t like to refer to it as an arms race, and interestingly I have actually been talking to some folks who want to frame the discussion more like the Space Race—about creating, like, the International Space Station, etc. Something more collaborative. Because it’s not as if we’re all just gonna be fighting each other over values. That is a framed narrative.
The other thing I may actually take issue with you on is, you know, to the average citizen in China the deployment of artificial intelligence has been fabulous. We like to harp on their treatment of the Uighurs. That is a small minority group.
Now, if we were to take that same narrative and flip it on the US, some of our deployments have been no different. We should point the finger at ourselves, at other counterparts. If you want to look at India’s Aadhaar system and the exclusion of lower caste groups… And it’s by design. It is to fulfill an internal political design, right.
So I don’t think we should sit on a high horse and act as if our values are better, or that we’re going to do it better. Because when we take the AI arms race narrative and we talk about it in Silicon Valley? The concern is not so much oh, how do we do it in a way that’s better or more ethical. It’s actually, “Shit, China’s beating us, how do we get there faster?” So no one’s even thinking about— Because the arms race narrative pushes this imperative of running faster, we don’t—much like we did with the nuclear arms race—actually bother to stop and think about what we should be doing, because we’re so busy looking at the other guy “beating us.” And the problem with the “beating us” part is that the other…the opponent…(our imaginary opponent)…has shaped the narrative and the metrics for us. So it’s harder to—actually, if we are going to have a values-aligned system, it’s harder for us to adhere to our values if someone else is defining what the race is all about, right. Because we’re gonna have to adhere to their metrics to get there. So that was my spiel.
But when we talk about ethics, when we talk about—
Sheppard: Just to say, I agree with you more [crosstalk] than you may think I do.
Chowdhury: Yeah yeah yeah. Okay. Good. I’m glad.
So, to talk about— And this…it’s such a complex issue. Because this is actually you know, a global issue. I mean really just reminding us that borders and states and boundaries are artificial constructs of politics, right. Like that is the number one thing working in AI reminds you. So if you think about a law like GDPR, General Data Protection Regulation, it transcends borders and boundaries and that’s why it’s actually impactful. If it were just focused on the EU it would not actually have the level of impact that it does on tech companies.
So when we think about fairness, ethics, etc., it needs to actually transcend borders and think more about communities, and groups, and narratives that can filter upwards. And the difficulty has been—and this is sort of why I take issue with this top-down framing. So much of what we talk about is about governance. Governing whether it’s systems, or how do we create sets of values. And that needs to by design be inclusive, and what we have not actually figured out is, how do we understand what ethics means to all the different impacted groups? Because you know, who does Gmail impact? I don’t—like everyone? Great? Now let’s get “the diverse perspectives” to figure out what the ethical framework is for that… Well, good luck. So it’s a tough nut to crack, but it has to do with the fact that all these technologies and these companies transcend borders and boundaries, and they impact literally every community out there.
Bankston: So, that all sounds very straightforward and it’ll get solved by ethics boards, right?
Chowdhury: Super easy. Yeah yeah, no absolutely.
Bankston: Miranda. Ethics boards. [panelists all laugh]
Bogen: Oh boy. Well, so I think the problem with the framing of ethics—how I hear it around ethics boards, but also just in general, “we need to make our AI ethical, and how are we going to do that?”—all of that presumes that at some point we’re going to come to an agreement or consensus on what ethics are… And have we ever done that in our society? No. We’ve been struggling over that for the history of not only our country but the entire world and, you know, the history of humanity. All of humanity. And that’s what societies have been structured around: struggling over those values and structures of governance and ethics.
And so I think what’s really important here is to set up structures such that whatever we build in today is malleable, so that if our values change in society, we can ensure that the tools that we’ve implemented to fulfill those values are also changed. Like if we had the technical capability to build AI systems 100 years ago, what would our society look like today? It’s super frightening. And so I think boards and things like that, they’re not so useful in the sense that they’re gonna come up with a solution, but we do need to come up with mechanisms so that people are thinking about these systems in an ongoing way over time. But not only you know, the privileged sort of high-level people who are in those boardrooms. How are they talking to the people who are not only using the technology but affected by it, as Rumman said.
Chowdhury: So what we’re sort of all laughing about and referring to, if you’re not familiar, is the Google ATEAC board issue that happened in April. What had happened is there was a lot of pushback from the academic and activist community that led to the board being disbanded.
Interestingly, in the AI ethics space, we have these unique roles of industry ethicists. People like myself and my counterparts in these other companies. That’s kind of a…a new thing. And for those of us in these jobs, what I pulled together was a Medium article where we talked about how essentially what Silicon Valley is now “disrupting” is democracy. That’s actually what they’re trying to do. They’re trying to create these democratic systems, but they’re doing it in the way only Silicon Valley knows how. Which is very problematic. So what that Medium article was about was actually fielding contributions from the industry ethicists who were able to contribute, and some thoughts on how we believe we can govern the use of these AI systems in an ethical way.
Bankston: Some have suggested that these various boards are attempts at ethics-washing—giving the appearance of some sort of self-regulation, but really as a way of forestalling actual regulation. That said, there are some ideas around actual legislation on the table, particularly coming up in the context of the debate over new privacy legislation? I was wondering if anyone could or would speak to…[crosstalk]
Zeide: Yeah. So I’ve been in—
Bankston: …how that is shaping up.
Zeide: I’ve been in the privacy space, which is how I got into the data space, which is how I got into the AI space, for a little while now. And I’m amazed at the legislation we’re seeing, and the conversation around it. Last week there was a horde of privacy professionals in town. And for the first time I heard people talking realistically about the idea of legislation that would take into account intangible privacy harm. So not just an economic harm, which is what you usually need for a law like that to work. And talking about it as imminent, in some way, shape, or form. I think that’s remarkable. And it shows we’ve come a long way. And that there seems to be an agreement that privacy is no longer the really classical idea of notice and consent. That people do not read terms of service. And I think, increasingly—which is something I’ve argued—they don’t have a lot of choice or alternatives in terms of many mainstream tools, so expecting people to opt out of those is a poor way to ensure privacy.
Bogen: I mean I think the reason people are paying more attention to privacy now is we’re realizing what can be done with our data. It’s not just a theoretical “your data’s being collected and maybe it will leak, and someone will steal it, and then they’ll steal your credit card.” It’s being used to make decisions. It’s being used to shape your information environment. And I think that’s what’s instigating a lot more attention from the Hill at the moment and why people are focusing on privacy as the remedy.
There’s also another intervention that was introduced recently called the Algorithmic Accountability Act, which is intended to compel companies or entities that are building predictive systems to check those systems beforehand for their impact. To check them for bias, or discrimination, or other types of harms. And I think that’s interesting because what it’s trying to do is get people to slow down. You know, don’t go full speed ahead; try and think before you act. There’s still a lot of questions in that proposal, like who gets to—you know, I think they envision the Federal Trade Commission enforcing that and creating rules around it. But who gets to see those impact assessments? Do companies really have to do anything if they find some kind of harm? Who’s defining, like, how much harm would make them have to change their model?
But what I think is interesting there is, again, the incentive to move to artificial intelligence or machine learning is often “remove the friction,” you know. Make everything more efficient and easy. And I think the reason we have laws, and especially the reason we have civil rights laws—which is what I mostly focus on—is because pure efficiency led to an awful lot of bad outcomes. And so there’s a reason to slow down. There’s a reason to not be efficient. There’s a reason to not be hyper-personalized. Because if we do that, we’re catering to only a certain part of society that can take advantage of that ease, whereas other people can’t. And so I think those types of proposals forcing us to not be as efficient…well, businesses don’t like them and we still don’t know what they’ll look like, but there’s a purpose for that type of intervention.
Bankston: Moving on to the question of this event, what can sci-fi teach us or not about AI policy? I’m curious for y’all’s takes on how AI in sci-fi has been helpful or hurtful to the discourse around AI in policy, or helpful or hurtful to your attempts to engage in that discourse personally. You know, I already flagged what my pet peeve is, which is that sci-fi has conditioned us to worry more about Skynet and less about housing discrimination. And I often think that Kafka is actually our best representation of AI, in the sense that his books are all about faceless bureaucratic systems that don’t make sense and control your life. But I’m curious what y’all think.
Sheppard: So I think in engaging with policymakers, particularly in the national security space, the equation of consciousness or sentience with intelligence or replicating intelligent function prevents us from having an honest conversation about when and where and how do you best use these systems. To think about it as a conscious being versus an algorithm and data and all of the problems that we’re talking about, that really masks the ability to come in to your problem area and to have an honest conversation about what are the true pitfalls, what’re the true benefits, and how do we actually bring artificial intelligence or machine learning or computer vision into a workflow.
Zeide: So for me…I gave you some of my touchpoints a little earlier. But for me the anthropomorphizing of technology is a real issue. So when I talk about education technology, people often think about replacing teachers, and the idea of robot teachers. And they picture the Jetsons, for those of you who may be old enough to know that. You know, a robot at the front of the classroom talking. And there are things that can automate instruction right now that don’t look like that, that are simply a platform. And yet they have the same sort of impact that putting a teacher at the front of the room would have in terms of what students learn and how they advance.
I also think that the all-or-nothing aspect of a lot of science fiction is…it impedes some conversation. So, for reasons that make sense, most science fiction starts once this technology has been developed and deployed; they don’t show it developing, they don’t show it being adopted ad hoc, they don’t show it messing up. And every single technology that I have ever used has messed up at some point. And I don’t think that our narratives account for that, in the way that even accountability is… You know, forget something as sophisticated as bias. Like, what about typos?
Bogen: For me it’s two sides of a coin. One is that I think sci-fi has helped journalists frame old questions in new ways. Like back to the criminal justice context, if we’re talking about robots in the courtroom or Minority Report, that gives people an immediate frame of reference that something they thought they knew was happening is changing and it’s changing because of technology, and it’s worth paying attention to. So I think that has as I mentioned earlier kind of broadened the community of people that are interested in these issues.
You know, just last week or two weeks ago, The Partnership on AI, which is one of the self-governing entities that’s been created in recent years to try and think about some of these issues, released a report about pre-trial risk assessment, about using AI in the courtroom, coming out and saying this technology is not ready yet, and we should consider whether…I believe they said whether it ever ought to be. But that there’s many open questions and some severe limitations to using this technology. That’s a totally different stakeholder group than has been involved in the criminal justice context for quite some time, and it lends some credibility to have the technologists saying, you know, “We know what’s going on here and we can’t build this yet and you don’t want us to build this.” So that’s interesting.
On the other hand, when the media frames some of these kinds of news stories using a sci-fi trope, people can presume that they understand what’s happening when in fact it’s a completely overblown perspective of what’s happening. So for instance, if we’re talking about social credit scoring in China, I think of the “Nosedive” episode of Black Mirror, the episode where everyone is scoring every interaction that they have and you have, like, a score that you walk around with and that determines what you have access to. People have that vision when they think of what China’s doing, and that’s just not the case. It’s much more rudimentary—still working on sort of patchworks of blacklists that are based in their value system—and so it’s not as jarring to mainstream Chinese society as I think we imagine it would be, because a lot of us have this frame of a pop culture example of what a social credit scoring system looks like. And so it kind of redirects energy, where maybe that energy could be used in coming up with different solutions, or thinking about how to prevent what’s actually going on here in this country that we ought to care about, because we’re distracted by this frame that we think we’re familiar with.
Bankston: And Rumman, and then we’ll go to Q&A.
Chowdhury: Sure. I love the points that everyone has made. I wholeheartedly agree, especially with the anthropomorphizing one. It’s extremely problematic.
I guess the one that I would raise is a problem I see in Silicon Valley a lot. A fundamental belief—in maybe the tech industry as a whole, but definitely in Silicon Valley—which is driven by some of this literature, is that the human condition is flawed, and that technology will save us. And this is the obsession behind having microchips in our brains so that we have perfect memories. Guess what, we don’t want perfect memories. Because there are people who—
Sheppard: There was a Black Mirror episode [inaudible]—
Chowdhury: Yeah. But there are people who actually are alive who have a condition where they vividly remember everything that ever happened, and they live in constant trauma. Imagine being able to relive your parent dying with the same level of intensity you did when they actually died. We are meant to forget things. So I think there is not— Because of this notion that technology will perfect us or fix us in a way in which you know, humanity is weak and flawed, is problematic because when we try to create artificial intelligence, we don’t create it around human beings, we retrofit human beings to the technology. And especially living in a world of limited technology, technology that is not quite where it should be in the stories but as Elana very accurately said is maybe 30% of the way there? We actually try to force ourselves to fit the limitations of the technology rather than appreciating that maybe we are the paradigm to which technology should fit.
Bogen: I had one more thing, Kevin. I think the other thing is, even when we’re reading sci-fi that’s intended to be dystopian, and we’re intended to read or watch it as being dystopian, it’s acculturating us to the idea of this constant surveillance. That in order for the technology in that story to work, the data’s needed, and that that’s just inevitable. And so even when we see it going wrong, I think we’re getting used to that idea, and that’s what we’re seeing today in the pushback to facial recognition. There are just not enough people pushing back against facial recognition, because we see it as something that’s inevitably coming. Maybe it would be bad, but it’s going to come. And I think that’s something to think about as well, even if it’s clear the story is going south because of that surveillance.
Bankston: So we don’t have a whole lot of time for Q&A ’cause we’re jamming a lot of content in today. But we do have time for a few. Ground rules: Questions in the form of a question. Keep them brief. Answers responsive to question, keep them brief. Hands raised. Yes ma’am, please wait for the mike to come to you.
Audience 1: You know, the conversation has two sides in a way. One is government kind of issues, and we have, you know, civil rights kinds of protections against that. The other is private sector intrusions against privacy, and surveillance, and things of that nature. Putting aside the government side for the moment, where on the commercial side do we have options to push back from a legal action perspective? Are there causes of action? And you know, just a final footnote: and yet we all go out and buy Alexa and install it all over our house and leave it on, voluntarily. But yeah, I’m interested in the private side—as, you know, the new phrase goes, “surveillance capitalism.”
Chowdhury: I can maybe start that one. So what we’re seeing—and actually Miranda mentioned this—the HUD… So what government is trying to do now is ask, how can we take existing law and existing protections and apply them in these new settings? It’s a bit of an uncharted space, because as Miranda said, you can put an ad out in good faith and then the algorithm is making decisions based on how it was trained. You may not even realize that it’s been deployed in a biased manner. So we had to come to that realization that that happened, and then be able to figure out what is the “angle.” And what we’re seeing in a lot of the— And I know you want to sort of separate government, but you kind of can’t in this. With the UK group the ICO (Information Commissioner’s Office) and the FCC and some of the language of the bills, we see latching on to the notion of protected classes. So what are the groups that are already protected, and then how can existing law sort of be leveraged to further that, and that’s a starting point for then starting to build further protections.
And to your point about Alexa, you raise an issue in the AI ethics space, which is what do we have to offer? Technology companies have nice shiny gadgets, the ability to look cooler than the Joneses or whatever. They offer you incremental ease. What do we offer? We offer scare narratives. We offer… So in our space, we actually have to figure out— And yes, the notion of liberties, freedoms, and protection is less tangible than a shiny new watch. Unfortunately. So what can we as the AI ethics space offer people that can combat this narrative that tech companies have honed so well?
[another panelist begins speaking; indistinct]
Bankston: I’d like to fit it in; let’s keep moving. Questions. That gentleman near the back.
Damien Williams: Hi. Thank you all very very much for your conversation today, it was really great. My question is for Rumman. You mentioned the translation problem between different communities about bias. But I wanted to kind of dig down a little bit on that and maybe challenge it a bit, and ask you, is there not a space in which some of us might mean we can’t remove bias because we’re not talking about isms but we’re talking about the foundations of isms?
Chowdhury: Oh, I like that.
Williams: Perspectives.
Chowdhury: Yes, absolutely. Well, and actually there’s an entire narrative now that we really shouldn’t be thinking about fairness, we should be thinking about justice. We shouldn’t talk about bias, we should talk about pain and harm. So absolutely. And I think Miranda raised this really well, that this is not going to be a solved space. And I think we just all have to get comfortable with that. And it’s funny, because in industry we’ve been saying, like, change is the new norm, and everything’s going to be— It’s just been like boilerplate narrative for years when talking about technology. And I think we will actually have to grapple with the fact that we will just be living in a space of constant change, growth, and evolution. So, absolutely, you’re totally right.
Bankston: One more quick question—
Chowdhury: Which I will not answer.
Bankston: Hands? Hands. This gentleman right there.
Audience 3: To what extent is what’s imaginable in AI ethics a function of the imperative for scalability that venture capital funding of AI development demands? I’m thinking of the scalability of returns on investment. Scale-free versus concentration of capital.
Zeide: So, I’ve thought a lot about that in terms of the practicality of being able to implement accountability, explainability, transparency, ethical models…algorithmic impact models. When you have a profit impera— And people inside these companies, some of whom are not…evil, actually. [Chowdhury laughs] Anyway. But, there’s an imperative. There’s a commercial imperative—especially for the publicly owned companies, they need to produce profit for their shareholders. And when that is the ultimate bar, and when those results are scrutinized incredibly carefully, I think it leaves companies in a very difficult position to be able to slow down, and increase friction, and be thoughtful about implementation. Because they all seem to be racing against each other.
Bogen: But I think there’re some really interesting examples. And to your question earlier of how do we push back against corporations that are doing this—we’re doing that. The advocacy community is learning how to advocate to the tech companies using shareholder action, using sort of public campaigns, using directed research to say “here’s your problem and here’s how you can fix it.” And I think, especially as the companies—the people building this technology—are creating these systems that are making really important decisions in people’s lives, that may fit with the law or may not, we can come to expect those actors to also be playing a role of governance, and we have the responsibility to pay attention to them in that way, and to tell them what we expect as the public. What we expect them to do, what we expect them not to do. And to get other people to appreciate that fact as well.
Bankston: Well, that’s a nice closing note of agency and hope. So please thank the panelists. Thank you.