I think the question I’m trying to formulate is this: in a world of increasing optimization, the algorithms will be increasingly accurate, but their application could still lead to discrimination. How do we stop that?
All they have to do is write to journalists and ask questions, like, “What’s going on with this thing?” And journalists, under pressure to find stories to report, go looking around. They immediately search for it on Google. And that becomes the tool of exploitation.
One of the things I think is really important is that we pay attention to how we might recover from these kinds of practices. So rather than thinking of this as just a temporary glitch, I’m going to show you several of these glitches, and maybe we’ll see a pattern.
The question is: what is the industry, or the machine learning research community, doing to combat instances of algorithmic bias? I think there is a certain amount of good news, and that good news is what I want to focus on in my talk today.
As we dug into this topic, we realized research gets forbidden for all sorts of reasons. We’re going to talk about topics today that are forbidden in some sense because they’re so big, so consequential, that it’s extremely difficult to work out who should actually have the right to make these decisions. We’re going to talk about other topics that end up off the table, that end up forbidden, because they’re kind of icky. They’re really uncomfortable. And frankly, if you make it through this day without something making you uncomfortable, we did something wrong in planning this event.
I often try to tell people that Google is not providing information retrieval algorithms; it’s providing advertising algorithms. And that is a very important distinction when we think about what kind of information is available in these corporate‐controlled spaces.