Nathan Matias: So, I’ll ask just a couple of questions and then we’ll open it up for a couple of minutes. One of the things I found interesting about both of your conversations is that as we start to see code becoming a powerful force in society, we’re no longer just trying to change laws. Just as citizens stand before the government or their congresspeople to encourage them to change laws, we now find ourselves standing outside of companies saying, well, there’s code that affects our lives. These may be the very systems we heard about earlier that companies might want to keep secret for complicated reasons. How do we think about creating change when it comes to code that we don’t control?
Ethan Zuckerman: I published a very angry piece this morning in The Atlantic, really asking the question of why Facebook doesn’t give us more control over the newsfeed algorithm. And my core argument was that there’s this change going on right now: it’s going to reprioritize our friends and family, it’s gonna deprioritize content we’d paid attention to.
You could imagine giving users a checkbox, or a slider. You know, “Yes, I want this change. No, I don’t want this change. Here, let me try settings in between.” And we built a proof of concept, Gobo, which basically shows you that it’s possible to give people half a dozen sliders and get very different feeds out of it.
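[A minimal sketch of the slider idea, assuming hypothetical per-post feature scores and user-set weights; this is an illustration of the concept, not Gobo’s actual code:]

```python
# Hypothetical illustration: re-rank a feed from user-facing sliders.
# Feature names, scores, and weights are invented for this sketch.

def rank_feed(posts, sliders):
    """posts: list of dicts with per-feature scores in [0, 1].
    sliders: dict mapping a feature name to a user-set weight in [0, 1]."""
    def score(post):
        return sum(weight * post.get(feature, 0.0)
                   for feature, weight in sliders.items())
    return sorted(posts, key=score, reverse=True)

posts = [
    {"id": 1, "friends_and_family": 0.9, "news": 0.1, "virality": 0.2},
    {"id": 2, "friends_and_family": 0.2, "news": 0.8, "virality": 0.9},
]

# Moving a slider changes which posts rise to the top.
print(rank_feed(posts, {"friends_and_family": 1.0, "news": 0.1, "virality": 0.1}))
print(rank_feed(posts, {"friends_and_family": 0.1, "news": 1.0, "virality": 0.5}))
```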
It’s interesting that the only path I can think of to try to get Facebook to make this change is either buttonholing Facebook execs and saying, “I really think you should try this,” or sort of naming and shaming, going out in the press and doing this.
It’s pretty clear that you’re not going to get there through norms, through laws. You’re not going to go to this Congress and sort of say let’s pass laws about transparency for social networks. It’s also unfortunately pretty clear that you’re probably not going to get there through code. I wish I saw a lot of alternative projects sort of coming up to challenge the dominance of these existing monopoly platforms. But these monopolies are very powerful once they get a certain user base.
So you do end up with this relationship like my dear friend Rebecca MacKinnon has written about, where it feels like you’re petitioning a monarch. You’re going to a sovereign and basically saying, “I would like you to be a benign despot rather than just a despot.” It’s really worrisome that we’re this far down this line.
Matias: And that’s MacKinnon’s book Consent of the Networked?
Zuckerman: That’s right. So one of the things that I wanted to ask Karrie in all of this is… So, it’s amazing that we have Julia Angwin, God bless Julia Angwin. It’s amazing that we have you and Christian, God bless you and Christian. Who should be auditing? Is this something that we want governments to do? Is this something that NGOs should do? And I don’t just mean online, I mean in general. If we take everything from the housing discrimination cases that you were talking about, all the way through these algorithmic discrimination cases, how should we do this, and what would we need to make this possible in the case of these sort of monopoly platforms that Nathan’s referring to?
Karrie Karahalios: So, I think that’s an excellent question, and I think it depends on the specific thing that you’re auditing. So for example, there was a really nice website a while back that looked at travel. And it wasn’t organized, but a lot of people would go on this travel site and they would talk to each other about when to find the best flights…in a sense they were auditing it together.
Zuckerman: Yeah.
Karahalios: And in this case I think that, you know, every citizen has a right to be able to audit this and look at this data and put it together. Let’s say for example you want to do a collective audit for credit scoring. That is a really hard collective audit to do. That requires lots of different citizens to come together to pool their data, to give it to a trusted source, and have this trusted source actually aggregate it and find something with it. So in that case you need somebody who’s trusted, and going on your trust scale maybe it’s not the government here. You know, maybe it’s a third party that is supported, and that the government will listen to, that will do something for you there. And so it really is very context-dependent.
But a lot of the things that we’re looking at, specifically in social media, we want every citizen to be able to do.
Zuckerman: Is that a reasonable task, right? So we’re united here with a group of really smart, really committed citizens who are doing the work of…in some cases becoming behavioral scientists to be able to do the work. While I would love for everyone in the world to become as smart and engaged and thoughtful as the people in this room, my experience is that those people are actually fairly rare. Is there a way in which we need some sort of institutional layer to ensure that we get there? I’m not sure that without a Julia we ever get an audit of these flight risk systems, for the simple reason that many of the people affected by these systems are not necessarily in a great position to take the time and do the incredible amount of work that it takes to audit them.
Karahalios: Yeah. And so what we found in our work, for example, is that in auditing an algorithm… Let’s say we audited an algorithm that had over a hundred features. It turns out that the people who actually cared about those hundred individual features…you could count them on one hand. People didn’t really care so much.
It turned out most people wanted a high-level overview that everything was going okay. And that’s what every single person should have. And that’s why I love a lot of these plugins that ProPublica makes where you can actually interrogate the ads that you get on your pages, and you can look at lots of different things. And so the level that you get from the audit I think is very different from what people want. And so a computer scientist or developer might want to get thousands of features coming out of it to understand it. But in terms of the world of algorithm literacy, most people do not want that level of detail.
But the term that you said earlier, “control,” I think that’s critical. And could I go back for a second to address—
Zuckerman: Yeah.
Karahalios: —something you said earlier? So, I think control really is something we don’t think that much about. And I love the piece that you built with the sliders. I think it’d be wonderful to have the power to see something like that. And I also see that Facebook has controls in there. They have controls; sometimes they’re hard to find because they’re in like five, six different places. But they’re there if you want to use them. But this issue of control, I worry, can sometimes be a placebo effect.
And the reason I say that is because we recently conducted a study where we had controls similar to the ones you had, but not exactly the same controls. Social media controls are different from controls in Photoshop, for example. I can click a control that says “make this black and white” and I can see instantly that it happened. A social media control is subjective. A social media control might say that this is more popular. Or that this has more virality. Or that this person is closer to you in some way.
And what we did is we conducted the study, and we had people view a control panel that actually worked to the best of our ability. And we tested this, and people said it worked. And then we showed them the same feed, with the same control panel, but everything was random.
Zuckerman: Huh.
Karahalios: And it turned out that in assessing levels of satisfaction and preference, people preferred having a control panel to the case where they only had the feed with no control panel. But when we compared the random control panel and the fully-functional control panel, they were indistinguishable.
Zuckerman: Yeah, this is the button on the street lights, right. You have to have it; it just doesn’t have to actually be attached to anything.
Karahalios: Yeah. And so, I would love to keep talking to you more about this because I’m not really sure where to go from here!
Zuckerman: Yeah, no. We’re doing the same analysis. We didn’t think to be quite as evil as giving people completely disconnected sliders, but. The one thing we did do is we added a tab on each post that ends up in there that says “explain to me why this ended up in my feed,” or “explain to me why this is filtered out.” And we’re sort of hoping that that’s going to help with it.
You know, I have multiple questions in this. I question whether people want that much control. I think everyone thinks they want that much control. I think they will back away from using it over time.
Karahalios: Yeah. I agree.
Zuckerman: My guess is that Facebook’s settings are pretty good for a lot of people a lot of the time. I guess what I’m really interested in in all of this is…we are starting to find these different point cases where these algorithmic biases come into play. And we’re having fun sort of celebrating the worst of them, right? So the beauty contest is…amazing. Like you know, we do an automated beauty contest and we forget about people of color. Like, that’s a bit of an oversight. Joy Buolamwini, down in our lab, has been interrogating in general how badly facial recognition software does with people of color, and Joy is a very dark-skinned woman. Family’s from Ghana.
And what she started finding were these just completely quotidian examples. She would try to get Snapchat to put a filter on her face and it wouldn’t recognize her, and I’d show up in the frame and the filter would go on me, immediately.
We shouldn’t have to find the most absurd versions of these and then spend enormous amounts of time exposing them. First and foremost, companies should be doing this with their own technology so that they are not getting humiliated when we pull it out, and because it’s the right thing to do in the long run.
But if we still feel the way that Nathan sort of posits that we feel, that we don’t have any ability except to go to the feet of the sovereign and say, “Oh dear please, sovereign, please allow black people to be seen by your algorithm as well,” then for me this raises the question of whether we need an NGO which is the algorithmic auditing force, or as Joy puts it the Algorithmic Justice League…or do we need government auditors that sort of look at these—although you make the point that the government may be the least-trusted entity that we can deal with around this. How do we go after this on a systemic basis?
Karahalios: I think there are two points here. One thing that we find in our studies, first and foremost, that we need to address and need more research on is why people trust these algorithms. And so one of the things that we found is that— So I love that you put that tab on there, like “why do you think this is happening?” When we asked people why this was happening, they were like, “Oh…it’s the algorithm.” And they would almost always come up with an excuse—they would convince themselves that the algorithm was right and they would come up with a story to describe why the algorithm was right.
Zuckerman: Yeah.
Karahalios: And so I think that’s one issue we need to address, in the sense that first people need to… If we want people to stand up and do something, people have to start understanding that maybe it’s not an omniscient god-like algorithm. And so one step, which takes a really really really really long time, is some form of algorithm literacy. And that’s going to take…forever. I’m not gonna say that’s gonna happen overnight. But it doesn’t mean we should stop trying to make that happen.
Zuckerman: One of the great examples I find with this is… So we’ve seen this in the news field: people trust algorithmic news judgment, Google telling you this is the right news story for your query, almost two to one over human editors. But then when you show people ads targeted to their demographic characteristics, most of us have had the experience of going, “Actually, I’m not a 60-year-old Ecuadorian llama farmer.” And when you can sort of remind people that it’s the same company that’s trying to sell you organic llama feed for your flock that’s also giving you the targeted news, they start eventually making the connection that “Oh. Well actually, these things maybe aren’t all good, but that’s—
Karahalios: [crosstalk] Interesting—
Matias: What’s your second point, Karrie?
Karahalios: Oh, I’ll finish on that and then I’ll get to the second point.
So we just did a study on ads this summer. And it turns out that by having these interventions and providing them for the public, we’re getting people to go from believing in algorithmic authority to algorithm disillusionment.
Zuckerman: Mmm.
Karahalios: And so for example, it turns out, like, I thought that because I hated ads everyone hated ads. I really thought that. It turns out I was wrong. It turns out there’s a huge population of people that love ads. I just didn’t talk to them. Until this summer. And one of the things that people really like about ads is if it’s something that pertains to them, or if the explanation for the ad actually caters to something that they’re proud of themselves for. So for example one person said, you know, “It’s as [if] I’m seeing this ad because I’m a hip millennial. I love that. I love this ad.” And so this explanation made them like the ad more.
And so one thing that I’ve had to do is change my model of…just because I don’t like them doesn’t mean other people don’t like them. And maybe think about how people might be able to have a discourse with some of these algorithmic systems around them, instead of just thinking of this algorithm as a black box that just gives you an answer and you deal with it. What if it was more of a conversation, for those who actually cared to participate?
And what was the second?
Matias: You said there were two points that you really want to get across.
Karahalios: Oh, yeah yeah yeah. And so—sorry. So, like in the Fair Housing Act, where they actually stipulate that they support— Like, I would love for there to be some type of organization that also provided funding, because I can’t tell you how expensive these audits are, where you could get this funding to conduct them. And maybe you have to prove yourself in some way. Like, I actually found the data scientists at Facebook open to discussion. Like, the Facebook 1.0 API that we used for a really long time. We’ve done audits with it. They were supportive of us doing it. They claim that they adapted some of our interface internally into Facebook. They tried to keep giving us access to it, but it was not possible.
And I don’t know the exact mechanism for this but if there was some way that people could propose an audit, get funding for it, and do it… And I’m not saying— It’s actually unfair to say that it’s just academics. Because it feels like it’s a privileged position and the audits that ProPublica has done have been incredible. And the investigative journalism that’s been done has been incredible. But I do think we need to start thinking about some channel that also provides funding for these to allow people to do these audits.
Zuckerman: One thing I really love about that idea (I swear it’ll be the last thing I say, Nathan, and then I’ll shut up) is that we’ve experienced the phenomenon for the last five or six years that people with very strong tech skills want to find a way not only to make the world better but to do so using that particular skill set. And so finding some way to sort of harness the energy not just of people in this room, but people around the world who are convinced that their technical skills, their analysis skills, could make the world better, this is a lovely direction to start pushing all the people who are starting to get very thoughtful about doing analysis of data in a direction that actually helps us deal with civil rights on a very broad level.
Matias: Excellent. So, I’ll take two questions one right after the other. Then we’ll try to batch them and answer them briefly so we can all have a break as well. So one here from the front. And then…someone on the side.
Audience 1: Hi, I’m Turrell. [sp?] And primarily on Reddit. But…and this doesn’t address your topic so much but addresses creating change. And it also addresses the idea of how we deal with these giant corporations’ sort of black box. The mantra we hear from so many moderators and platforms is if you don’t like it, go make your own! But we don’t really support that. And I think that’s something that we can really focus on, is actually supporting—whether it’s a nonprofit or individuals or something—actually being able to start their own communities and giving them sort of startup information. The data, as well as the support systems? Just in general.
Matias: Thank you. And one question in the very back, on the aisle.
Audience 2: Hi, I’m Biman. [sp?] I’m curious how investigative journalism brings forward things that Facebook or Instagram might be doing which affect the mental model of the user. There was news about how they ran an experiment on a group of users where they might show a positive or negative newsfeed, and that might affect their behavior. Or how Instagram might withhold likes from a user to make them keep checking the app. My question is that these things drive their profit model, so it’s not easy to make them stop doing that. So as a general public user, is just raising awareness, so that people using these apps can take things with a grain of salt and not be affected psychologically by what they see, a good idea? Or how can a sense of control be applied to the mechanisms these companies are using, given that they aren’t thinking about the general mental effect it might be causing on the public?
Matias: Thank you. So, both questions that touch on the kind of business models and the funding and the sustainability of the general benefits that we gain from the Internet and the challenges created by this current business model ecosystem.
Zuckerman: Do you want to try first, or?
Karahalios: Sure. I guess I’ll start with a comment about your question first in that I think one of the many ways to address a little bit about what you’re saying is through the design of the interface. And one approach that we’ve been looking at is this idea of seamful experiences, whereby you reveal something in the interface to actually help a user understand a little bit better. Now, whether or not Facebook would actually do this is debatable. But there are things in the seams that they are starting to do which look promising. And the idea of using the interface to help educate people is one that we need to keep looking at.
In terms of the psychology of it, I mean, that’s a long story that could go on for hours. But as a bridge from one question to the other, one of the things that I’m really excited about is that now more and more, like if I’m in a Lyft, or if I’m on the street, or if I’m talking to students, I think it’s incredible that the conversations we’re having are about algorithms in the sites that we use every day.
And something that would encourage more and more people to talk about this, and then how we could make sense of what they say, would be fascinating. Like, I would imagine that if people looked online at what people were saying and there were some aggregate visualization of people’s…Like you showed in your talk, Ethan, that people might listen if they see more and more people discussing this certain topic.
I was going to say something else but I forgot so I’m gonna…
Zuckerman: So, answering the question from our Redditor friend about how we would encourage people to build their own platforms and move there. I had the privilege of spending about a year writing a study with Neha Narula and Chelsea Barabas, two really brilliant scholars here, on decentralized publishing systems. So we were really looking at this question of how do you build a decentralized Reddit? How do you build a decentralized Twitter?
And my initial instinct was, oh my god that’s really hard. How do you deal with all the database issues, namespace issues…? It turns out to be hard but not that hard. Like, technically there’s a bunch of good systems out there that get a long way toward solving them. So what that does is raise the question of why aren’t these platforms thriving?
And the big answer for why these platforms aren’t thriving is that people feel like they can’t exit these existing platforms. People have so much invested in these platforms, so many relationships, so many contacts, that shutting it all off and going cold turkey just isn’t the answer for most people.
And so the three of us ended up sort of recommending two things. And they’re pretty simple but I actually think they’re pretty big. One is that you need the right to export from a platform. Not only all of your content but also your social graph. You need to be able to take the relationships you have and potentially move them elsewhere.
And the second one is sort of even more subtle. You need the right to aggregate. You need the right to be able to use a tool that would be able to follow you…not only on Twitter but on Mastodon; not only on Facebook but on Diaspora; not only on Reddit but on Steemit; and be able to sort of integrate those things so that you can still be involved with a community while you’re involved with a new one.
This probably requires a policy change. I don’t see Facebook adding the Export From Facebook button. And I’ll tell you, Facebook has made it impossible for us to use Gobo, which is a prototype aggregator, to fully engage with Facebook. But if I were going to push for legislation the way that you and Christian and others are so wonderfully pushing for judicial judgment, it would be for those two things. Those are the factors that I think would let us have an environment that makes it much more likely that we could have competitive platforms.
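[A minimal sketch of what a “right to aggregate” tool might look like, assuming each platform exposed some way to fetch a user’s timeline; the client interface here is a hypothetical placeholder, not any platform’s real API:]

```python
# Hypothetical sketch of a cross-platform feed aggregator.
# fetch_timeline is a placeholder for whatever access a platform actually allows.
from dataclasses import dataclass
from typing import List

@dataclass
class Post:
    platform: str     # e.g. "Twitter", "Mastodon", "Reddit"
    author: str
    text: str
    timestamp: float  # Unix time

class PlatformClient:
    """Assumed interface; real platforms may not offer anything equivalent."""
    def fetch_timeline(self) -> List[Post]:
        raise NotImplementedError

def aggregate(clients: List[PlatformClient]) -> List[Post]:
    """Merge timelines from several platforms into one reverse-chronological feed."""
    merged: List[Post] = []
    for client in clients:
        merged.extend(client.fetch_timeline())
    return sorted(merged, key=lambda post: post.timestamp, reverse=True)
```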
Karahalios: And the one thing I would add to that—I completely agree, is this…you know, in the early days of the Internet, I remember in 1993 my adviser told me to make an HTML page. I had no clue what he was talking about. So I went to the NCSA website and figured out what it was. We built a camera in three days to get a picture of us, because we didn’t have a digital camera. We put it up there.
Today, everyone can have presence online using Facebook. So you know, myself and my colleagues are trying to build tools to make it easier for people to put up some of these networking sites without having to be an expert programmer. Like if you can use a basic toolkit to start your own little Facebook group. But again, what Ethan says is critical. Like, if you’re there alone it’s kinda useless. You need to bring your friends with you, and that’s hard. That’s hard.
Matias: Well, thank you so much for sharing with us. I think some things that I’m drawing from this… First, many of the people in this room are in a situation where we are governed by algorithms, but we’re also putting algorithms in place. Today and over this weekend, people will be discussing how communities can deploy their own AI systems to do moderation. And there might be lessons about transparency, or audits, or how people can question those systems that communities can learn.
Another thing is this reminder of the value of building public interest nonprofit organizations, which we are partly here to think about, that serve these goals.
And finally, I’m just going to point out that I’m on stage with someone who just nonchalantly said, “We didn’t have a digital camera so we built one in three days so that we could participate on the Web.” So even as we look at insurmountable challenges, or at least challenges that seem insurmountable, we need to remember that today many of us have a digital camera in our pocket, and many things are possible that we can’t even yet imagine. So let’s thank these two wonderful speakers for their participation.
Further Reference
Gathering the Custodians of the Internet: Lessons from the First CivilServant Summit at CivilServant