Samim Winiger: Welcome to Ethical Machines. We are your hosts…
Roelof Pieters: Roelof.
Winiger: And Samim.
Pieters: Ethical Machines is a series of conversations about humans, machines, and ethics. It aims to start a deeper, better-informed debate about the implications of intelligent systems for society and individuals.
Winiger: For the second episode, we invited Jack Clark, the world’s first neural network journalist, reporting for Bloomberg, to talk with us. Let’s dive into the interview.
So how long have you been in the US for now?
Jack Clark: I’ve been here for about three years now. I moved here to join The Register, where I wrote about AI and databases. And then I got hired by Bloomberg, so now I’m helping to cover AI as well as the more traditional enterprise companies.
The way I think of it is, neural networks are going to be fundamental to a very large amount of the AI that we’ll see and experience for the next few years. So I figure if I report on any company or individual messing around with this technology, then I can get a view of a good section of the AI world as it expands. And from a story point of view it’s very fruitful, because there’s both lots of research happening and it’s creating some fascinating experiments and products that we can write about.
Pieters: So what do you think is the best approach to explaining these complicated topics?
Clark: I think you have to read everything. I spend about one or two hours a day reading the preprints as they come on arXiv, read the papers and study that. And then try to turn it into an analogy. Because no one knows what a neural network is outside of academia, but everyone knows that when you’re an extremely young child you’ll like, pick up a flower and stare at it for several hours. And it’s this early form of learning that children do that is analogous to what we’re trying to do with some of these systems.
Winiger: You’ve heard major figures in deep learning and beyond blaming journalists lately for overhyping the field and for mischaracterizing it. You’ve heard a lot of ugly words being thrown around. How would you respond to these highly regarded figures criticizing journalism in such a broad way?
Clark: Ninety-five percent of the people making these criticisms have spent years studying artificial intelligence. They have a technical background. They probably have a PhD. They have years of experience of looking at very complex technology and coming away with objective, sort of applied assessments of it.
Most journalists don’t have PhDs in machine learning. From the journalist’s perspective, you know, I’ve had to spend several years reading a lot of literature and doing a lot of mathematics and trying to teach myself this stuff, and I’ve made that investment. It’s difficult to find the time. So for people like Demis Hassabis or Yann LeCun or Geoffrey Hinton, one of the main responsibilities for them and their public relations departments should be to spend a lot of time with journalists and make sure that they just educate the journalists about how this stuff works. That yields a media that actually understands it. They have to be generous with their time, just as we have to be generous in our writing about their subject.
Pieters: So what are the things that have happened that were unexpected or just interesting to you? What are the most exciting developments?
Clark: There are two things that are very exciting to me. One is memory systems. You know, in recent years everyone has started to look at long short-term memory again. The appreciation for the role the hippocampus plays in consciousness, that same observation is being applied to AI to give us systems that can do long-term reasoning and multi-part pattern recognition.
The second one is reinforcement learning being combined with deep learning in robotics. I mean, you saw just this week Fanuc took a stake in Preferred Networks. Preferred Networks have been doing reinforcement learning and deep learning applied to robotics platforms. They’ve read the Neural Turing Machine paper, they’ve read the Q-learning paper. They’ve also read work out of Berkeley’s lab from Pieter Abbeel and Sergey Levine on end-to-end visuomotor policy training.
So that has already created a situation where Fanuc has invested money, ABB has put money into Vicarious, and there are some startups I’ve just heard about who are all doing this. So this is exciting, you know. Especially after we saw the DARPA Grand Challenge, where those robots looked kind of drunk. They were falling all over the place. They were stupid, very, very slow. And I spoke to Dr. Gill Pratt, who ran that, and he’s also of the opinion that we’re going to get a huge increase in robotics capability from applying neural network sensing systems.
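The Q-learning idea referenced above is worth a quick illustration. Here is a minimal tabular sketch of Q-learning; deep Q-networks of the kind presumably meant by the "Q-learning paper" replace this table with a neural network. The environment interface (env.reset, env.step, env.actions) and the hyperparameters are illustrative assumptions, not anything from the conversation.

```python
# Minimal tabular Q-learning sketch: learn action values from rewards.
import random
from collections import defaultdict

def q_learning(env, episodes=500, alpha=0.1, gamma=0.99, epsilon=0.1):
    Q = defaultdict(float)  # maps (state, action) -> estimated value
    for _ in range(episodes):
        state = env.reset()
        done = False
        while not done:
            # Epsilon-greedy action selection: mostly exploit, sometimes explore.
            if random.random() < epsilon:
                action = random.choice(env.actions)
            else:
                action = max(env.actions, key=lambda a: Q[(state, a)])
            next_state, reward, done = env.step(action)
            # Update toward the reward plus the discounted best next value.
            best_next = max(Q[(next_state, a)] for a in env.actions)
            Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
            state = next_state
    return Q
```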
Pieters: But connected to that, what would you recommend to a graduate student in machine learning? Stay in academia? Join one of the established corporate research labs? Or start your own startup?
Clark: I think we’re going to get deep learning artists at some point. Now, that is the worst idea in the world if you would like to have money or a career. But it is something that will happen. We’re going to get some artists who are using this generative stuff. I’ve seen work that you’ve done—I’ve seen a lot of it. I think the field of new aesthetics we’re seeing from this could become very, very interesting once we have a better understanding of how to use the models.
To answer your actual question, though, there are two areas of possibility. One is custom accelerators. If you can figure out how to get FPGAs to do learning well, then you can start to put classifiers on low-cost drones and things like that. That would be the area I’d recommend. But FPGAs are incredibly difficult. So, you know, good luck.
Winiger: So, recently we’ve seen lots of friends of ours who are working on deep learning libraries or key pieces of the puzzle getting hired immediately by Facebook, Google, etc. And I guess as a student especially, you’re confronted with this reality right now. You can either get immediately hired by one of the big players, or go and do a PhD at an institution that is sponsored by one of the big players, or start a startup. To reformulate the question: which of these is the least evil poison, in a sense?
Clark: Well, here’s the problem. This is a poison question, right. Because from a society view, everyone should stay in academia, because it begets the largest quantity of open research and the best teaching for the next generation. I don’t know about Europe so much, but in America, with the way funding is, and competitive tenure situations, and just the misery of being a post-doc if you’re not at a top-tier institution? That’s such a hard life that the rational decision for many is going to be to go and work at Google or Facebook. Because you will get excellent training, you will get some of the best data you can access in the world, and you will be paid giant amounts of money. So, go for the company. But keep in mind that everyone in this community has a responsibility to do the best research they can in the most open way.
Pieters: Yeah. Connected to this is the question of biopolitics, you know, in a post-structuralist sense, where it’s about control of the landscape. Where you have all these people creating libraries in their free time, but then getting hired by one of the big companies. It’s a land grab for talent, that’s clear. But at the same time it’s also a land grab for control of the resources at the software level. That’s a trend, at least. Do you see this trend?
Clark: Yes, definitely. Partly it’s that AI is a relatively small community. You know, Yann LeCun and his friends at NYU all work on Torch, whereas DeepMind has done work on other libraries. Even the languages. Some people really like Lua, others are doing more with Python. Some people who are very intelligent are just writing things in C. But I am afraid of those people, because I don’t know how you can do that. You know, there’s a diversity happening here.
Now, I don’t know how it gets fixed. Someone needs to grow up and commit the community to one or two of them. If we look at the history of software, that’s not gonna happen for a few years.
Winiger: Maybe pivoting from the corporate discussion: in the previous episode, where we had Mark Riedl, we had a long chat about generative tech in story generation, that kind of thing. And we touched briefly on generative journalism. He mentioned that was one of the few areas where generative text creation is really being deployed in industry in a large way, which was eye-opening to me. I knew this was happening at the margins, but to hear it firsthand was very interesting. I mean, what do you think? Is it going to be a radically new journalism quite soon, or what do you think about that?
Clark: …Yes. [laughs]
Winiger: Right.
Clark: So number one, I work at Bloomberg. We obviously do very competitive stories. When we do an earnings story, we try to have a first version out within three or four minutes. We do that by writing incredibly detailed templates. We talk to people in the days running up. We have a whole team of people standing around with the numbers, checking every one when we push it live.
Obviously this is something that is going to be increasingly automated. Because this is a job where I am trying to be like a computer. And whenever you’re doing that kind of job, you realize, “At some point a computer will do this.” The Associated Press already uses technology from Narrative Science to generate earnings reports for companies that they don’t cover.
The only problem these tools have is that they can’t do context. The system will look at all of the indicators in the earnings release. It will look at the analysts’ recommendations. And it will make a sentiment decision based on whether the company beat or didn’t beat. The problem is that the market isn’t rational, so sometimes a company can beat all of the analyst estimates but go way down, because buried somewhere in the release is a reference to how they’re changing their accounting, or writing something down, or whatever.
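To make the beat-or-miss logic concrete, here is a minimal sketch of templated earnings coverage of the kind Clark describes. The field names, thresholds, and wording are invented for illustration and are not Bloomberg’s or the Associated Press’s actual system; the final comment hints at the context problem mentioned above.

```python
# Minimal sketch of a templated earnings story with a simple sentiment call.
def earnings_story(company, eps_actual, eps_estimate, revenue_actual, revenue_estimate):
    beat_eps = eps_actual > eps_estimate
    beat_rev = revenue_actual > revenue_estimate
    if beat_eps and beat_rev:
        verdict = "topped analysts' estimates on both earnings and revenue"
    elif beat_eps or beat_rev:
        verdict = "posted mixed results against analysts' estimates"
    else:
        verdict = "fell short of analysts' estimates"
    return (f"{company} reported earnings of ${eps_actual:.2f} a share "
            f"(estimate: ${eps_estimate:.2f}) on revenue of ${revenue_actual:,.0f} "
            f"(estimate: ${revenue_estimate:,.0f}), and {verdict}.")
    # Note: a template like this cannot see context buried in the release,
    # such as accounting changes or write-downs.

print(earnings_story("ExampleCorp", 1.32, 1.25, 4_100_000_000, 4_200_000_000))
```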
Generative journalism will be a reality. We’re not quite there…yet, but it’s very clear to me that it’s going to foster a sort of winner-take-all situation where if we at Bloomberg develop some small tools, I will be able to do stories much more efficiently and that leaves more time for investigation.
The New York Times, in a prototype version of their new CMS, is using recurrent neural networks to suggest tags based on the story. So they’re already kind of augmenting articles with some of this machine intelligence. Which is a great idea.
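As a rough illustration of what recurrent tag suggestion could look like, here is a minimal PyTorch sketch: an LSTM reads the story text and a sigmoid layer scores a fixed set of tags. The vocabulary, tag set, and sizes are illustrative assumptions and have nothing to do with the Times’ actual system.

```python
# Minimal sketch: an LSTM-based multi-label tag suggester.
import torch
import torch.nn as nn

class TagSuggester(nn.Module):
    def __init__(self, vocab_size=20000, embed_dim=64, hidden_dim=128, num_tags=50):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.tags = nn.Linear(hidden_dim, num_tags)

    def forward(self, tokens):
        x = self.embed(tokens)                    # (batch, seq_len, embed_dim)
        _, (h_n, _) = self.lstm(x)                # final hidden state summarises the story
        return torch.sigmoid(self.tags(h_n[-1]))  # independent probability per tag

model = TagSuggester()
story = torch.randint(0, 20000, (1, 300))          # one story of 300 token ids
scores = model(story)
suggested = (scores > 0.5).nonzero()               # indices of tags above threshold
print(scores.shape, suggested.shape)
```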
Pieters: Well, and the other way around, there’s this recent work on question answering systems, where you take newspapers that are very, very specific in their style and use both the summary and the actual article to train a long short-term memory question answering system. Which basically is to say that if it can go one way, it should also be able to go the other way.
Clark: It should be able to. You may be aware DeepMind did some of the work there, and took in all of the Daily Mail, which is a large tabloid website. And what they found is that if you put a phrase into the learned system, you know, “Does coffee cause…” or “Does eating lobster cause…”, every time the answer is cancer, because the Daily Mail loves writing articles about how everything’s going to give you cancer. So that shows you how, even with a very large data set, there could be some problems that are very unpredictable.
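For context, the DeepMind reading-comprehension work built cloze-style queries by blanking entities out of article summaries. Here is a minimal sketch of that data construction; the tokenisation, entity handling, and example text are simplified stand-ins, not the actual dataset pipeline.

```python
# Minimal sketch of building cloze-style QA examples from article/summary pairs.
def make_cloze_examples(article, summary_sentences, entities):
    """Blank out one known entity per summary sentence to form a query."""
    examples = []
    for sentence in summary_sentences:
        for entity in entities:
            if entity in sentence:
                query = sentence.replace(entity, "@placeholder")
                examples.append({"context": article, "query": query, "answer": entity})
    return examples

article = "Acme Corp said coffee consumption rose sharply last year ..."
summaries = ["Coffee consumption rose sharply, Acme Corp said."]
entities = ["Acme Corp", "Coffee"]
for ex in make_cloze_examples(article, summaries, entities):
    print(ex["query"], "->", ex["answer"])
```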
Winiger: Yeah, I mean this is in my experience as well working with generative systems. You really have to rethink the design process as one of choosing inputs and outputs. Choosing the Daily Mail seems like an exercise in comedy more than anything else.
Clark: It’s funny, but one of the things that Google has been talking about a lot is that… And you may know more about this. The European Commission publishes huge documents, and it publishes them concurrently in twenty-seven different languages. So there’s an idea that not only can we use that as a very good store of text to learn concepts, but we can learn concepts as they cross from one language to another, because we have that mapping. And because it’s not just a French-German dictionary, it’s French-German-Italian-English all in one thing, you can learn a very complex, rich representation across the different cultures. So that seems fruitful to me.
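One simple way to exploit that kind of parallel text is to learn a linear mapping between word embeddings in two languages. A minimal sketch follows; the random vectors stand in for real embeddings, and the row-aligned word pairs are assumed to have been extracted from sentence-aligned parallel documents.

```python
# Minimal sketch: align two languages' word embeddings with a linear map.
import numpy as np

source_vectors = np.random.randn(1000, 50)   # e.g. French word vectors (stand-in)
target_vectors = np.random.randn(1000, 50)   # e.g. German word vectors (stand-in)
# Rows are assumed aligned: source word i translates to target word i.

# Solve for W minimising ||source @ W - target||^2 (ordinary least squares).
W, *_ = np.linalg.lstsq(source_vectors, target_vectors, rcond=None)

def translate(vec, candidates):
    """Map a source-language vector into target space, return nearest word index."""
    mapped = vec @ W
    sims = candidates @ mapped / (np.linalg.norm(candidates, axis=1) * np.linalg.norm(mapped))
    return int(np.argmax(sims))

print(translate(source_vectors[0], target_vectors))
```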
Winiger: It just brings back a discussion I had a few days ago with [Guy Acosta?], one of the developers in deep learning. And he brought up this interesting idea that the trained weights of these nets can be seen as taking the place of what previously was the database. So the next oracle, in a sense, will be the one wielding the most pre-trained weights, and there will be a whole set of law cases unfolding soon where people get sued over specialized training on top of pre-trained nets and things like that. Which I found interesting. It just came to mind when you were talking there.
Clark: Well, if you think about it, what we’re doing is turning very high-dimensional mathematical representations of a sort of large knowledge space into intellectual property. Which should be the most frightening idea in the world to anyone. This is the most abstract thing you could possibly try to turn into a capitalist object, and we’re heading in that direction. I don’t think that can work. I think that if you look at the way you encode information from a trained net, the legal cases will be hugely complex. But Google has been acquiring many patents, and so has IBM, and so has Microsoft. So we might get a Cold War scenario where there are no lawsuits, because all of them have enough patents to threaten each other with, you know, nuclear-bomb lawsuits. Who knows?
Winiger: I mean, it’s a horrible scenario. Nobody wants to see that because I suppose it would stifle innovation, really.
Clark: How do we avoid it? You know, what are things that you guys think could be done to stop it happening?
Winiger: I think step one is to support the current openness. Because we see this marketing money rushing in and obviously they’re smart marketing people. They try to set a cultural sentiment. And I think that’s one dial that as a society we can start to turn in the other direction. Upgrading the now somewhat old-sounding notion of the public domain. Especially in the US it sounds completely out of date but it could quite easily actually be dialed back into fashion. So that’s one approach, I suppose. It’s a really hard problem, isn’t it?
Pieters: There are Creative Commons licenses, right. I mean, why does Google, for instance, take out a patent for all these different algorithms they develop? Why not release them under a Creative Commons license, where there may be an attribution clause? Okay, you have to attribute it to Google, so Google is still protected on the attribution principle, but it would still actually be open.
Clark: Well, it really is unfortunately this horrible sort of mutually assured destruction game theory scenario, where Google may not have wanted to patent this, but what it may have done (which companies do regularly) is looked at all of the patents IBM has on AI and said, “Holy moly. If we don’t have some AI patents, we could be in a legally weak position with respect to IBM should there be a lawsuit.” So it creates this scenario where even if it’s not a good idea, you’re going to amass these patents, because as a corporation you have to do rational things for your investors. And any investor would kind of rightly say, “Hey, Google. By not patenting any of this stuff, you’re putting yourself at a disadvantage to your competitors in the marketplace who are amassing the tools necessary to mount a legal attack.” It’s a bit depressing, but I think that is the rational corporate response.
Pieters: One of the big stories is also the more future-oriented concerns about the development of AI leading to mass unemployment, or to Terminators, etc. But let’s stick to the mass unemployment scenario for now. It’s being argued that if there is mass unemployment, there needs to be a solution for the people who are unemployed, which might be something like a minimum income. So where do you stand on this issue?
Clark: It’s a huge problem. I’ve read a lot of research by David Autor, who is a great economist, I believe at MIT. He has written a lot about this. His analysis is that we don’t have the data to be able to project that AI could lead to large-scale unemployment. But we also don’t have the data to say that that won’t happen. And then if you look at people like Andrew Ng, who does AI at Baidu, you know, he said to me, “When the US went from 50% of people working in farming to 2% over fifty years, that was fine, because the farmer knew that their son or daughter should go to college because farming would be mechanized and there wouldn’t be a job.”
The speed of today’s economy means that this same transition is happening within a single generation. And that’s where the problems come in: we have no system in society for retraining people midway through their lives to take on a new type of employment or job. And that will be the issue that AI brings to the table. Because if you’re a truck driver, if you’re a lawyer working in e‑discovery or data stuff, if you’re a journalist doing a lot of journalism that just requires a sort of analysis of numbers that are out there, there is huge evidence that AI is coming for you and is moving very rapidly.
And just from another slightly more basic economic point of view, the thing that AI does is take your existing capital expenditures—you know, your warehouses, your factories—and increase their efficiency and lower their depreciation. So as a business operator like Amazon, you have the hugest incentive in the world to roll out Kiva Systems robots to as many of your warehouses as fast as you can. Because whenever you look at the numbers, the efficiency is so much greater than with a staffed model. This is going to be a defining issue, maybe. I expect within the next five to ten years we see the big effects. If self-driving cars come on schedule and get the kind of uptake that people at J.P. Morgan, people at the big banks, are projecting, it’s coming, you know.
Winiger: It’s super interesting. So when it gets to the kind of hard question of how society should frame automation, etc.: in the West especially, self-worth and these more philosophical constructs are really based on full employment and so forth, right. I mean, the whole psyche in the West is built on these notions. And so in a sense we are saying it’s all going to collapse sooner or later. It’s already an election cycle topic now.
Clark: It’s going to be challenging. I have a friend who’s actually also English. They work in New York in finance. So they’re aware of technology and what happens. I speak to them about this issue and they say to me, “Well, but Jack, what will people do if they don’t have to work? People have to work. It’s natural.” And I talked to a lot of people who have that view. So as you say, we have such a deep-set psychological association between work and self-worth that watching that change is going to be difficult. Maybe this is an area where the Europeans can take leadership because we’ve always had an appreciation for holiday and not working. Maybe that will help us, you know.
Pieters: Maybe you could argue that for Google and Facebook and [inaudible], it would be in their interest to push this new philosophy, that there will be people unemployed and that’s fine, rather than have [?] people working against this trend. Taking political leadership of this kind of trend would be a good thing. If it can be framed in the narrative of “this is good for business, good for the world,” then they are, at least at the corporate level, the ones with the most to gain from this, right?
Clark: Yeah. And from a public relations standpoint, as a company you never want to be associated with the destruction of jobs and increasing inequality. And unfortunately for these AI companies like Facebook and Google, they’re already being tagged with that. Because they have a very competitive market and they give engineers free food and massages and buses. And we’re in San Francisco, where you have a huge homelessness problem and huge inequality as well. But this is an issue they’re going to need to take a leadership role on, because otherwise they risk discontent from society becoming directed at them, because they’ve become a symbol.
Winiger: Right. That actually ties really beautifully back to the beginning of the discussion and the greater need to explain this really complex set of issues to the public much better. Because otherwise that whole debate is going to break down. I mean, that might end in a really nasty scenario, I guess.
Clark: Yeah. The other issue this is bound up with is: why can’t I pay for Facebook? Why can’t I pay for Twitter? Why can’t I have some situation where either I pay them and they don’t get my data, or they pay me a very small amount of money and I give them my data? Because if you taught people that their data has some value, or that they have the option of capitalizing on that value by buying a service instead of getting it for free, they would understand what AI means.
Because all the AI that affects a lot of society is the outcome of us transferring loads of very well-annotated, clean data to corporations. And that will have to become an issue, because if you’re Google and you say, “Well, you don’t pay for Gmail because we subsidize it with adverts and you get a lot of value from it even though we take your data,” that is a very reasonable argument, but at some point people are going to ask, well, why can’t I have the other option?
And then the companies have to say, “Well actually, your data is so valuable when we combine it with everyone else’s that we have no incentive to do this, from an AI development standpoint.” I believe that would start a conversation among people about this.
Winiger: And the generative work I’ve been doing, we copyrighted it. I mean, what happens if you train a net with copyrighted images and you generate outputs with that? You get into a really interesting situation with copyright law very quickly. And that’s just the beginning.
Clark: But I can think of really puzzling scenarios. Like, if I train a generative music system on a CD I buy of the New York Philharmonic playing Bach, at what point am I still using copyrighted performances from the New York Philharmonic, and at what point is it just Bach played by a generative system? It’s very, very hard to discern that borderline. Because we know that what happens as you do this generative stuff is you distort the underlying material to the point that maybe it is fair use, that maybe it isn’t the original IP anymore. There’s no way to answer this stuff simply. It’s going to be a very, horribly complicated time, I think.
You can imagine you train a movie system on every single action movie in history. And then you create a generative system which will go from frame to frame or scene to scene and sort of interpolate a new movie out of this. At what point are you infringing copyright? How do you even judge that anymore? It’s crazy. And as I said earlier, we’re going to get artists who do this. We’re going to get people who want to contribute to the cultural discussion and the aesthetic discussion, who do this stuff, and the legal system and rightsholders will have no clear path for how to react. It’s new territory.
Pieters: Well, even more problematic: once you start charging money for the generated material, or you want to copyright it, then it becomes interesting for all those copyright holders of the original inputs. Because they might want to cash in on that as well.
Clark: You know, do you end up licensing the object itself? Say the value of a photograph of my customized vehicle is quite low. Would it be greater for me to share a data set of several hundred photos of the car from every single angle, so you can train a system to have a representation of it? And should I price my data on the richness of the AI representation you can derive from it? Again, I don’t know, but these feel like conversations that creative people are going to have to start having.
And then you get to the really outlandish scenarios when you start to combine everything we’re talking about with a distributed, trust-based blockchain system for the running and validation of code. You start to get autonomous programs that will mine the Internet for content and then sell generative art for bitcoin, anonymously. Well, what do we do when that has happened in the world?
Pieters: So there’s a precedent in Holland of someone generating text with natural language processing (I think it was not a neural network, but machine learning in any case), where it was argued that the output amounted to hate speech or threatening tweets. And he got his door kicked in by the police. And in the end the question was: who was responsible for this content on Twitter? Was it the machine learning algorithm or the person behind it, who argued it was—
Clark: Put the server in prison.
Pieters: Yeah. At least according to the Dutch legislation, in the end it was the person who created the algorithm who was held responsible for this.
Clark: You know, Google, when they launched Google Photos, had a huge problem, which was that the system was identifying people of color as gorillas. Which is literally about the most offensive thing your system could do. Again, is the Google person responsible for not testing all of the corner cases? Is Google the corporation responsible for not doing QA? These are valid and complicated questions, because it certainly offended and hurt some people. But then, they weren’t hurt by a person; they were hurt by the generative decisions of an algorithm that emerged out of a data set whose provenance we as the public aren’t told about, because it comes from a private company. Where is accountability in this universe?
Winiger: It’s tricky. I mean, I suppose on the one hand, yeah, sure, generative systems do manifest a lot of autonomy, in a sense. On the other hand, it’s the perfect black box to hide nefarious human action behind. And you know, you just stand there and raise your hand and say, “Well, you know, it was the black box. Excuse my—or its—behavior. Don’t sue me.” I mean, it’s a bit of both, really, isn’t it?
Clark: I had a conversation with a hedge fund recently, who I can’t name, about what they thought of deep learning systems and how they apply to trading. These people are very concerned by deep learning, because we can’t inspect the models very easily. We can’t get very good telemetry. And the whole concept of taking an emergent generative system and plugging it into a trading environment gives these people nightmares. It is like the worst thing they could imagine.
And yet, there is going to be a huge incentive to use deep learning to pick up on signals that are not yet being processed by technical trading firms. So we’re going to see a very interesting arms race there. You know, we already see it with satellite imagery being run through deep learning systems to look at the height of oil and gas towers to infer supply. As we get the rollout of low-cost drones, making it possible to surveil far-off commodities, we’re going to get a whole range of learning systems plugging into the market. Again, it’ll be a good thing for efficiency, but it will also open us up to horrible problems that we cannot even imagine. Which is exciting and kind of unnerving as well.
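As a rough illustration of the satellite-imagery idea, here is a minimal sketch of a small convolutional network that regresses a fill estimate from an image crop of a storage tank. The architecture, image size, and the framing as a fill-fraction regression are all illustrative assumptions, not any firm’s actual pipeline.

```python
# Minimal sketch: CNN regressing a fill fraction from a satellite image crop.
import torch
import torch.nn as nn

class TankFillRegressor(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 64), nn.ReLU(),
            nn.Linear(64, 1), nn.Sigmoid(),   # fill fraction in [0, 1]
        )

    def forward(self, x):
        return self.head(self.features(x))

model = TankFillRegressor()
crops = torch.randn(8, 3, 64, 64)        # batch of 64x64 RGB crops
print(model(crops).shape)                 # torch.Size([8, 1])
```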
Winiger: If you made it this far, thanks for listening.
Pieters: And also we would really love to hear your comments and any kind of feedback. So drop us a line at info@ethicalmachines.com
Winiger: See you next time.
Pieters: Adios.
Winiger: Bye bye.