Roelof Pieters: Roelof.
Winiger: And Samim.
Pieters: Ethical Machines is a series of conversations about humans, machines, and ethics. It aims to start a deeper, better-informed debate about the implications of intelligent systems for society and individuals.
Winiger: For this episode, we invited David J. Klein to talk to us about machine learning, conservation, and climate change. Let’s dive in.
Thanks for making the time. Welcome to Ethical Machines. It’s a pleasure to have you on. Maybe we can start with the obvious question: could you tell us who you are, your background, and what brings you here, basically?
David J. Klein: Sure. Well, I grew up on a ranch in Florida, and I spent many years sort of marveling at nature. At the same time I was a huge science fiction fan, so I’d come back in and take apart all of my motorized toys and put them back together, and watch Doctor Who and read Asimov and Bradbury.
I eventually decided to go in a technology direction in my career, although I could’ve easily gone in a different direction. And I went to Georgia Tech. But after a while I became increasingly uninspired by the work I was learning about in electrical engineering. So I started looking for a way to keep myself interested. So I was taking courses in psychology and genetics, and a couple of really important things happened as I was searching.
First of all, I happened upon a course called Sensory Ecology. It was taught by a professor at Georgia Tech; the professor’s name was David Dusenbery. Sensory Ecology is really about information transmission in biological systems, and how the behavior and morphology of organisms coevolves with their information transmission and reception systems. And I was hugely inspired by that course.
And so I started looking at ways of combining Double E with studies of the brain. So the second important event was I asked around and found a young professor at Georgia Tech named Steve, who had recently come there out of the lab of Carver Mead at Caltech. Carver Mead and his students were the pioneers of this field of neuromorphic engineering. And so I was able to land an undergraduate research assistantship in Steve’s lab, doing research and development on neuromorphic vision chips. We were designing vision chips that mimicked the processing done in the mammalian retina.
And really since then, everything I’ve done has been in that intersection area between neuroscience and engineering. My graduate work was in a Double E lab, Shihab Shamma’s at the University of Maryland, but we were doing experimental neuroscience there, studying the processing of sound in the auditory cortex. From there I was a postdoctoral researcher at the Institute of Neuroinformatics in Zurich, working on auditory AI projects and auditory representation learning.
And then I came out to Silicon Valley about a decade ago and I joined a company called Audience, where we had the vision of reverse engineering the human auditory system in order to do a better job of speech enhancement, auditory source separation, and robust speech recognition. And we developed a chip that went into the iPhone and the Samsung Galaxy, and it was a great success. We were the first company to put multiple microphones into a smartphone. And as these signals come in, they’re first transformed by computational models of the mammalian cochlea. So not using a Fourier transform but actually using a filter bank inspired by the mammalian cochlea.
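As a rough illustration of the cochlea-inspired front end Klein describes, here is a minimal gammatone filter bank sketch. This is not Audience’s implementation; the channel count, kernel length, and ERB constants are standard textbook values chosen for illustration.

```python
import numpy as np

def erb_space(low_hz, high_hz, n):
    """Center frequencies spaced on the ERB (cochlear) frequency scale."""
    ear_q, min_bw = 9.26449, 24.7
    pts = np.linspace(np.log(high_hz / ear_q + min_bw),
                      np.log(low_hz / ear_q + min_bw), n)
    return (np.exp(pts) - min_bw) * ear_q

def gammatone_filterbank(signal, sr, n_channels=16, low_hz=100.0, kernel_len=512):
    """Filter a signal through a bank of 4th-order gammatone filters,
    a standard approximation of the frequency analysis done by the cochlea."""
    t = np.arange(kernel_len) / sr
    freqs = erb_space(low_hz, 0.4 * sr, n_channels)
    out = np.empty((n_channels, len(signal)))
    for i, fc in enumerate(freqs):
        bw = 1.019 * (24.7 + fc / 9.26449)   # ERB bandwidth at this center freq
        g = t ** 3 * np.exp(-2 * np.pi * bw * t) * np.cos(2 * np.pi * fc * t)
        out[i] = np.convolve(signal, g / np.abs(g).sum())[: len(signal)]
    return freqs, out

# A 1 kHz tone excites the channel tuned closest to 1 kHz.
sr = 8000
tone = np.sin(2 * np.pi * 1000 * np.arange(sr // 10) / sr)
freqs, resp = gammatone_filterbank(tone, sr)
peak_channel = int(np.argmax((resp ** 2).sum(axis=1)))
```

Unlike a Fourier transform’s uniform bins, the channels here get wider with frequency, mirroring the cochlea’s roughly logarithmic frequency resolution.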
So from there on, I’ve been working on various projects in various startups, including my own. I was using autoencoders starting in 2008 to beat state-of-the-art standards in video compression. And I’ve been working on a bunch of different projects as a consultant, mostly adding deep learning-fueled intelligence to various products, ranging from face recognition to snoring recognition.
So on the conservation side, I was lucky to get connected to these researchers at the University of California, Santa Cruz. They were starting a company several years ago called Conservation Metrics. The company was based on their work applying passive acoustic monitoring technology to monitor and help save endangered seabirds, mostly. Over time, I developed technology for them, and now they have an analysis pipeline for all the acoustic data they get in. So the biologists who are analysts in the company have the ability to build deep learning models to detect endangered species of interest and get a more detailed understanding of how these populations are doing and how they’re responding to conservation interventions.
And so that’s been exciting in that it’s had a very large impact on their work. Their analysis throughput has increased by ten times compared to the methods they were using before deep learning models. And I think we’re just scratching the surface. We’ve expanded from audio processing to image processing, mostly the land-based camera networks that are used by conservationists today. And there’s a lot of potential. I mean, the vision beyond that is integrating all kinds of sensor sources, from environmental DNA sampling all the way up to satellite-based imagery. All of these sensor types have a bearing on the wildlife conservation problem, and more broadly on the environmental conservation problem.
Winiger: So following up from there, the research paper that you published a while back called “Deep Learning for Large Scale Biodiversity Monitoring”. How does this play into the work you just mentioned?
Klein: It really has to do with the vision. So at Conservation Metrics, we’ve been solving very specific problems in the conservation sector using deep learning. And it’s great to be able to work with conservation scientists on their existing projects today, to see what problems they have and how the process can be streamlined using machine intelligence. And that’s why Conservation Metrics was labeled a “laser” in this recent TechCrunch article by Shivon Zilis. We’re very much focused on specific problems that exist in projects today.
But there’s this broader vision, the idea that we can leverage these sensor networks that we’re putting out in these remote areas. So we have these things across the world. I mean, we have them in Australia, we have them in Hawaiʻi, we have them in coastal areas of the United States. And there’s at least an order of magnitude greater need for monitoring. Right now we’re using these microphone and camera networks on the ground, but the conservation sector believes there’s a lot of value that can be gleaned from, for example, satellite imagery or environmental DNA sampling, called eDNA.
If we really want to have a detailed enough understanding of these ecosystems so that we can really engineer solutions on less than a ten-year runway… I mean, right now we don’t really have that understanding. It’s actually one of the most important insights I’ve gotten from working with biologists and ecologists: today it’s not really known, on a scientific basis, how well different conservation interventions will work. And it’s because we just don’t have a lot of data. I mean, these conservation projects: think about trying to save populations of endangered seabirds that might feed near islands close to Japan but breed on islands close to Hawaiʻi. It’s a huge area; there’s no way you can send people out there to get enough data to develop a scientific understanding of the problems, of how these species are being impacted by human actions.
So we need technology, we need to deploy sensors and many different types of sensors to monitor these populations and monitor these ecosystems. And we need algorithms like deep learning‐based algorithms that we can use to distill insights from this data. Because it’s way too much data. I mean, step one is getting the data but it’s way way too much data for people to look at directly. We have a project in Kauaʻi where we’re detecting the sound of an endangered bird colliding with power lines there. And we’ve discovered that it’s a much bigger problem than it was previously thought because we were able to extend the temporal scale of the monitoring using these microphone networks.
When we get data back at the lab, it’s hundreds of thousands of hours of audio. It would take a single person ten years just to listen to it, let alone find things of interest. So that’s where we’re applying deep learning: we’re enabling these biologists, through various interesting means, to build models and then distill these hundreds of thousands of hours, or many millions of images, down to a small subset that they can use in their backend analysis of how population densities are changing.
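The distillation workflow Klein describes, scoring short windows of audio and keeping only the top-scoring clips for expert review, can be sketched like this. The detector here is a deliberately simple stand-in (fraction of spectral energy in a hypothetical call band); a real pipeline would use a trained deep network, and the window sizes and keep fraction are illustrative assumptions.

```python
import numpy as np

def windows(audio, sr, win_s=2.0, hop_s=1.0):
    """Yield (start_time_seconds, clip) pairs over a long recording."""
    win, hop = int(win_s * sr), int(hop_s * sr)
    for start in range(0, len(audio) - win + 1, hop):
        yield start / sr, audio[start:start + win]

def detector_score(clip):
    """Stand-in for a trained deep-learning detector: scores a clip by
    the fraction of spectral energy in a hypothetical call band."""
    spectrum = np.abs(np.fft.rfft(clip))
    return spectrum[50:150].sum() / (spectrum.sum() + 1e-9)

def distill(audio, sr, keep_fraction=0.01):
    """Reduce a long recording to the small top-scoring subset of
    window start times that an analyst would actually review."""
    scored = sorted(((detector_score(c), t) for t, c in windows(audio, sr)),
                    reverse=True)
    n_keep = max(1, int(len(scored) * keep_fraction))
    return [t for _, t in scored[:n_keep]]
```

At a keep fraction of one percent, a hundred thousand hours of recordings shrinks to a thousand hours of candidate detections, which is the difference between an impossible task and a reviewable one.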
Pieters: One more question: how does this relate not just to the extinction of global animal populations but also to things like the state of biodiversity, or climate change? What would you generally say is the impact machine intelligence, in a broad sense, can have in this area, and is it already having enough impact?
Klein: The situation appears to be very dire. I mean, if you look at global biodiversity, we’ve lost about half of the world’s animal populations since 1970, and species extinctions are orders of magnitude above the natural background rate. A lot of scientists are calling this the sixth extinction, and there’s no debating that it’s due to human causes. And it’s a bunch of different human causes. The number one cause today is not climate change, it’s direct exploitation. It’s farming, it’s fishing. Then we have pollution. Amphibians and birds are being just wiped out. Insect populations as well.
And you know, global spending on this has increased in recent decades. So the UN has recognized that this is going downhill at an alarming rate, and right now we’re spending about $20 billion a year globally. But all the metrics are showing that it’s not really helping. And when you ask, “Well, why isn’t it helping? What could we be doing better?”, there aren’t really great answers out there right now. Funding is being driven by emotion, and logic, and some models. But it’s not really being based on data. You know, we can argue for certain species because they’re cute, or we can make impassioned arguments.
And that’s really the kind of thing that’s driving money flow today. But there’s not a lot of data showing us how well we’re doing with one kind of intervention versus another. Let’s talk about birds again. You know, should we remove an invasive snake, or should we build artificial nests? Which one of those is more effective? Usually we just argue for one, we do it, and then many years later we determine whether it worked or not.
So your question is about how technology could help. We do expect that climate change will become the number one problem pretty quickly, because it’s shifting habitats at a rate that natural ecosystems cannot keep up with. We’re eroding the value of nature. So we can get into how we measure the value of nature. Actually, that’s a really interesting topic.
The two things we can do obviously on the technology side, one is trying to slow climate change. And we can do that by various means. We can innovate on technology for energy; clean technology that is much less destructive to our atmosphere. We can innovate on solutions for transportation that uses less energy. So that’s one side of things, trying to slow the degradation.
And the other side of things, where I’ve been more focused, is developing systems that enable scientists to understand these systems that we’re modifying so that as we make conservation interventions we can say on a more fine‐grained basis how well they’re working. Ultimately, we might need to understand these systems so that we can restore them.
Winiger: I mean, the question you just raised: how do you value nature? I want to dive into that since you brought it up. What is the cost function for valuing nature? It seems like a really hard problem to crack. Is there any thinking around this?
Klein: There’s been quite a bit of work in this area called ecosystem services. Economists, biologists, and ecologists are getting together and building an increasingly detailed account of how nature serves us. You know, what tangible value do we derive from ecosystems? And there are a bunch of different ways. Look at how we use bee populations to pollinate crops, and all the value we get from those crops. The fact that ecosystems form natural buffers that protect our populations from storms. The fact that trees take a lot of carbon out of the atmosphere and therefore regulate our planetary temperature. And many, many other ways. I mean, we even derive pesticides and medicines from nature. So if you add all that up, we’re currently at an estimate of around $125 trillion a year of value that we’re extracting from nature. That’s roughly double global GDP.
So that work is going on. But of course there’s a great debate about ecosystem services. You know, can you actually quantify it? Because a lot of people will say, “A future with no nature is not a future I want to be in.” How do you quantify life? That’s a really great question. I’m not aware of work beyond brainstorming. When you start looking at the uses of reinforcement learning for monitoring and maybe maintaining natural systems, it raises the question: okay, what’s the reinforcement signal? What are the objective functions being optimized here? And I don’t have a great answer for that. But it’s a great area of debate and discussion, because that may be one of the only solutions we have.
The approach that I’ve been taking right now is: okay, we’re getting in these petabytes of data from sensors and we’re getting that down to a very, very small subset. But that may not end up working out. The scale of the problem may be too large. I mean, how many hundreds of trillions of dollars are we going to have to spend to restore these systems? I think the much better approach would be to let these systems take care of themselves.
But what is the objective function? It’s not just the presence of activity, like life activity. One of the great examples is the concept of the ecological trap. So, we have areas like Central Park in New York, which was thought to be an ecological trap. It attracts animals; there’s a lot of life there. But there’s not a lot of renewal of life; it’s kind of a dead end. So if we just designed a reinforcement learning system to say, okay, let’s find automatic actions that will increase the diversity and plentifulness of life in a certain location, that in itself is not enough. We need to have a much more detailed understanding of what a healthy ecosystem is. There’s always a balance there. Today we don’t have a detailed scientific understanding; we’re just scratching the surface. So that’s why I’m excited about developing technology that can help us see what’s going on.
Pieters: What would you say, because one of the arguments being made is that the Singularity will take care of it, that Moore’s Law will automatically solve climate change, animal extinction, and related problems. Or almost like the invisible hand of the market will fix it. You know—
Pieters: So what would you say to these kinds of…
Klein: Yeah… I think that’s… I think it’s pretty dangerous thinking. I mean, you know, can anybody point to any kind of technology projection of more than thirty years that’s ended up being accurate in any significant way, any actionable way? So why do we think this is different now? Basing our future on a wait-and-see attitude is just dangerous. I think problems like this are going to be solved with a lot of work, and work on all three of these things: technology, science, and politics, you know, policy. We need to tackle all these problems in a very methodical and coordinated way. And the thing is, even as we fail, and there’ll be a lot of failures, we’ll be learning a lot, and we’ll be creating an understanding that will be much more broadly useful for humankind. So the idea that technological innovation is handed down to humans from the mountain and is just going to solve everything doesn’t ring true to me.
Winiger: You hinted at politics and policymaking, and I’m going to frame that as culture. There’s this notion that for large-scale change to happen, technological change and cultural change must go hand in hand. So do you actually see machine learning helping to change our culture? Our beliefs, our causes, our priorities.
Klein: That’s such an interesting question. I think that as we get a more detailed understanding of natural ecosystems, in part by attacking this problem, we’ll start to have the ability to create these cyborg ecosystems. I’d recommend looking at the work of Brad Cantrell. He’s an architect. And there are others like him who envision this future where we have this kind of confluence of intelligence in monitoring the environment, and also robotics, and if you look at advances in materials science, we can start to create cities that are much more tightly integrated with nature. Cities that nature flows through, where we understand how to interface with nature in a much more fine-grained way.
The aesthetic that drives me in that area has been science fiction depictions of future Earth. You know, future Earths that are very green, where we have nature integrated with cities, where even our energy innovations are inspired by nature, where our architecture is inspired by nature.
There’s another part to it that’s a little bit more weird, but I think also worth discussing. We now have a much more detailed understanding of genes, of how to interpret them and how to modify them. That’s actually one of the big impact areas of deep learning right now. And if you look at the work being done in image processing, the generative and creative art coming out of that, I believe there’s a future in which the interface with nature could become a lot more intimate at the genetic level. So we’ll be able to start envisioning hybrid structures between humans and the natural world that we create with these generative models.
Pieters: So this would give a new meaning to personalization. A very different kind of generative recommender system.
Klein: Yeah. I bring this up as kind of a vision, as aesthetic fuel. I go about my day-to-day being the laser, you know, looking at problems and solving specific ones. But as you go along you need something that’s driving you in a direction.
Pieters: Humans are driving a lot of these problems we’ve been discussing. So I think a logical way of approaching a solution is social engineering, in a sense. I suppose using machine learning to influence populations; maybe we’ll see some of that, or we’re already seeing some of it.
Klein: That’s a very interesting point. I would love to be able to use tools, and they’re starting to mature, you know, where we can start to understand the whole chain: how energy is derived, how it’s used, and ultimately what you’re using that energy for. Andrej Karpathy had an interesting tweet a few months ago that got me thinking. He made some calculations showing how much equivalent wood he was burning by powering a GPU to solve a particular machine learning problem. I think it’d be fascinating to have a more detailed understanding of that whole chain: where the energy’s coming from, how the energy’s formed, and how we’re using it.
And so with that kind of visualization, I think as a society we’ll start to see more optimal ways of living. The one that we discuss a lot, obviously, is transportation. I mean, the fact that cities, in the United States in particular and in China, are just ludicrous in how much energy we spend getting around to buy milk and to work at our desk jobs, with no thought about the consequences on a day-to-day basis. It’s interesting to discuss how there’ll be a cultural change around everyday, smaller-scale mundane tasks once we start to be able to visualize this kind of energy flow.
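The Karpathy-style equivalence Klein mentions can be sketched as a back-of-the-envelope calculation. Every figure below is an illustrative assumption, not Karpathy’s actual numbers: a plausible sustained GPU power draw, a week of training, and a typical energy density for dry wood.

```python
# Back-of-the-envelope: how much wood-equivalent energy a GPU run burns.
# Every figure below is an illustrative assumption.
gpu_power_w = 250              # assumed sustained GPU power draw, in watts
hours = 24 * 7                 # assumed one week of training
energy_kwh = gpu_power_w * hours / 1000   # total energy in kilowatt-hours

wood_mj_per_kg = 16            # assumed energy density of dry wood, MJ/kg
kwh_to_mj = 3.6                # 1 kWh = 3.6 MJ, by definition
wood_kg = energy_kwh * kwh_to_mj / wood_mj_per_kg

print(f"{energy_kwh:.0f} kWh, roughly {wood_kg:.1f} kg of wood")
```

The point of such a visualization is less the exact number than making the full energy chain, from source to workload, tangible on a day-to-day basis.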
Pieters: Maybe shifting gears a bit. Earlier you were talking about consultancy for social good, in a sense, or for meaningful problems to solve. You tend to hear more and more of this kind of “X for social good.” So what is your experience of running a profitable social enterprise?
Klein: Yeah, it’s a really interesting time right now. There are so many for-good, for-profit companies starting up, and so there’s a very healthy debate going on about whether or not that’s a good thing. The reason it’s happening is partially because folks trying to do the right thing in nonprofit organizations have been frustrated. I mean, they’ve been really frustrated by the slow pace of progress, the fact that they don’t have access to top-quality talent, and the amount of time they spend trying to raise money. And so they see things in the tech world innovating much more rapidly through this ethos of competition and disruption. I think we have benefited from that, but there are still a lot of problems to solve there. I mean, I think it’s a great thing to try to flesh out.
The thing is, we want something that’s more flexible. We’re trying to find an interesting way to structure things so that the act of solving the problem generates enough profit to support a vibrant technology innovation scene, more like a traditional tech startup. One of the things you have to guard against is mission drift. Time will tell what the best strategy is for doing that. I do think it’s a dangerous thing to have money driving decisions, because the people who care the least about the mission and the most about money will tend to get the most power within organizations, unless we have ways of diligently protecting against that kind of thing.
Winiger: Could you envision let’s say the year 2025 where an entity like Google is a major player in renewables or conservation? Could you envision such a future?
Klein: I could envision such a future, yes. Newsflash: energy is big business. And right now it’s dominated by the fossil fuel industry. But that’s going to change. It’s going to be something much cleaner, much more efficient. And that transition will create a lot of wealth, a whole new global leadership that has the planet’s health much more in mind. I hope I can see in seventy years or so what that looks like. I think we’re going to be beyond what we currently see in solar power and wind power. We’re going to have much more interesting cyborg interfaces with the natural world.
Winiger: In Germany they have this amazing transformation unfolding in real time, with 30% of peak-time energy production now coming from renewables. But the thing that really gives me hope is that half of that renewable energy is in citizen-controlled hands—
Klein: Good point.
Winiger: —cooperative hands. And there’s this kind of silent revolution of distributed power unfolding, and of distributed structures that control the tech, basically. Which is very uplifting, from my perspective.
Klein: That’s something I’ve only recently become interested in and aware of, these kinds of distributed value chain systems. There’s a lot of talk about Bitcoin and blockchain. And that’s something whose potential I didn’t fully understand until recently, these kinds of markets. So I’m really looking forward to digging more into that and understanding how they can be leveraged. You can imagine systems where this planetary network of sensors is put into a global, distributed CDN, and solving the most critical problems with that data will set the price of the data, the value of collecting data, and the value of solving problems with that data.
I do think that machine intelligence is going to be a big part of this. I think it’s debatable how much of that will add to a human understanding of these systems; I tend to be on the side that we as humans will ultimately use machine intelligence to derive insight. These large-scale machine intelligence systems will add an incredible amount of understanding of how the natural world works.
Pieters: For instance here in Stockholm, we have a project where we’re actually looking at windmills. Windmills are highly inefficient, in the sense that they produce energy but you don’t really know how much energy they’ll produce, where, or when, because you need a good weather model, and that’s super difficult to do. So there’s a lot to win there by using things like deep learning, making the models much better, and thereby also making things much more efficient.
Klein: Yes. My understanding is that that’s one of the primary drivers right now of the increase… The growth in the percentage of total energy in the United States coming from wind is being largely driven by more accurate weather prediction models. That didn’t happen by accident; it happened through policy. And that policy was multi-dimensional. There were policies to put funding into large-scale computing systems that facilitate this kind of work, and to fund algorithmic research that can lead to improvements in weather prediction. And the hope was that those would lead to increased uptake of wind power, and that’s happening. So that’s a great example of the multi-pronged technology, science, and politics approach that can be successful. But there’s a lot more that can be done. We can’t claim any kind of victory right now.
Pieters: You mentioned a few things that kind of give hope for the future. One is cultural change, that people will become more aware of these things, but then also developing more technology to address these issues. Better metrics. More accurate interventions. And things like restoration.
Klein: Yes. Geoengineering is a scary but potentially inevitable outcome.
Pieters: So which other things kind of excite you? Let’s say at NIPS, what are you interested in?
Klein: I think the area I’m personally most excited about is actually one of the furthest from my domains of experience, which is genetics. The technology evolution in genetic sequencing is on a double exponential. You know, Moore’s Law is an exponential relationship; genetic technology is on a double exponential. And it’s now affordable. Ten years ago it was impossible, and now it’s affordable, to fully sequence the human genome and anything else we get our hands on. And that’s going to continue.
So this technology is going to be everywhere. And deep learning is going to be a big part of that. There have been a couple of startups recently, including Deep Genomics and Atomwise, that have started up to tackle this problem. Existing players such as Illumina are very excited about the potential. And you know, these startups are looking at everything from drug discovery to cancer diagnosis based on very small blood samples.
It’s also being used in precision agriculture, where we can take environmental samples, like very small air and soil samples, detect disease and problems, and have information we can use to optimize agriculture.
And of course, it’s not just analysis now; we have the generative part of it with CRISPR and related technologies. We’re just starting. I mean, if you look at the technology being used today for genetic analysis with, say, deep learning models, it’s much, much more simplistic than what we see in these large-scale image recognition systems. They’re borrowing from image processing and speech recognition, and they’re showing that, as with so many other things, right off the bat we’re seeing large gains in recognition accuracy, for example in detecting how certain drugs bind to different sites on the sequence.
You know, it’s been projected that the genomics industry is going to increase ten-fold in the coming few years. I think that’s true. It’s going to become a huge, huge industry, and machine learning and specialized deep learning architectures for genetic analysis and genetic editing are going to become a thing in the next few years.
Winiger: I mean, the cost decrease is obviously the most prominent sign of this unfolding revolution, really. But it also opens up some broader, terrifying scenarios, where you can start a gene drive from the comfort of your bedroom and probably try to measure the impact on a large-scale biosystem with your deep learning model at home rather than somewhere on AWS. But you can only get so far. On the dark side, these models will probably become a reality as well, then.
Klein: Yes, yeah. There’s a dark side to all of these things. These powerful technologies have such destructive power if used in the wrong way, intentionally or unintentionally. So how do we address that? I mean, we address it by understanding, by a hands-on approach, by discussing and deciding together as a species where we should be applying our energies and how we should be using these things. And by having as much transparency as possible across the board.
Winiger: So are you a realist or are you an optimist, or… What would you call yourself?
Klein: I’m definitely an optimist. Yeah, I am an optimist. I’ve changed over time. When I was younger, when I would go into kind of a meditative state, I would envision things completely falling apart, and potentially quickly. But as I’ve advanced in my career, and I now have the ability to talk to policymakers, to people in technology, to people on the ground doing the work, I’m a lot more optimistic. I see that at least in the people I’ve come in contact with, and admittedly, compared to global power structures, that’s a very small slice. But I’ve become optimistic. I can see that developers in the future will have so much power to implement change. You know, we tend to be a lot that strives for scientific truth and optimization, and I think we will collectively decide on optimizing for good. And the thing is, you know, good is not an objective thing. It’s something that we all have to continually revisit as a species.
Winiger: If you made it this far, thanks for listening.
Pieters: And we would also really love to hear your comments and any kind of feedback. So drop us a line at firstname.lastname@example.org.
Winiger: See you next time.