Andrew Maynard: So let me start by asking you a question: who does not like thinking about risk? So, please raise your hand if the thought of doing something that might embarrass you or harm you makes you feel uneasy. Yeah, already hands going up. Already. So it could be driving in bad weather, or wondering whether you should eat something or not. Or, that really nagging feeling that one day, you’re gonna forget to put your pants on. But you won’t remember until you’re half-way to the office.
So risk is a funny thing. It affects pretty much everything we do. And yet, most of the time we treat it like a dirty little secret. Something that’s there, but we’d rather not talk about it, a little bit like an embarrassing relative. This probably isn’t such a good idea, though. Because if we’re not smart about how we live with risk and how we think about it, we’re a little bit like an ostrich that sticks its head in the sand and just hopes everything will go away. Spoiler alert: it probably won’t.
So, I think about risk all the time these days. It probably helps that this is my job; it’s what I’m paid to do. But it wasn’t always that way. So wind the clock back just a few years, to when I was a teenager. Like most teenagers, I was rather idealistic. I wanted to make the world a better place. But I had a problem. My problem was that I was hopeless at most things. In fact, the only thing that I could do was physics, strange as it might sound. And I just couldn’t work out how I could use physics to make the world a better place. I was young at the time, remember, and I was rather naïve.
So I put all my effort into becoming a physicist. I became a research scientist. I did research that I thought was interesting; probably nobody else did. I published papers that I thought were great; I’m sure nobody else read them. In other words, I was a model scientist.
But I didn’t forget about that nagging desire to make the world a better place. And partly because of this, I got involved in workplace health and safety research. What I did was I studied airborne particles. I studied where they were generated in workplaces, how they got to people who were working there, and how they potentially entered their lungs and caused damage. I did this for over a decade, first of all working for the British government and then later for the US government, as a research scientist and as a research science leader. And over this time, something rather serendipitous happened.
So towards the end of the 1990s, people began to get very excited about a new technology. It was called nanotechnology. So this is the technology of taking matter and designing and engineering it at an incredibly fine scale—down to the level of atoms and molecules. And part of this technology involved creating exquisitely small particles that had a range of really unusual properties. They were called nanoparticles.
But as people began to do this, they started asking questions like: what happens if these particles get out into the environment when they’re not supposed to be there? Or get into the human body when they’re not supposed to be there? What are the risks?
Now this was really exciting to me. And to understand why it was exciting you have to go back to my PhD, which was at the University of Cambridge in the UK. This was back in the early 1990s. My PhD was in the analysis of what we then called ultrafine particles, really nanoparticles, using what were then high-end electron microscopes. And when I finished my PhD I was told, “This is great work, studying nanoparticles, but totally and utterly irrelevant. Nobody’s interested in nanoparticles.” So you can imagine how excited I was to discover that finally, my expertise was of some use.
So, I began to get more and more involved with nanotechnology. This is when I was working for the National Institute for Occupational Safety and Health in the United States. I helped develop their research program around nanotechnology safety. And I got involved in a group of federal agencies that were looking at how we could develop this technology safely. This began to pull me out of my laboratory, and it got me more involved in thinking about risk, and about science and technology more broadly. But I was still based in the laboratory. That’s where my heart was.
And then in the mid-2000s, I was thrown completely and utterly out of my comfort zone. I was asked to join a Washington DC-based think tank, the Woodrow Wilson International Center for Scholars. And I was asked to join them as the science advisor on a new project, the Project on Emerging Nanotechnologies. And this was a project where we were trying to work with all stakeholders, the groups and people who were potentially impacted by this technology, to understand how we could develop it responsibly and safely.
So talk about risk. I was thrown completely and utterly out of my comfort zone. One day I was a lab scientist, the next day, almost literally the next day, I was expected to talk with journalists. Not something they teach you to do as a scientist. I was giving Congressional testimony. Definitely not something they teach you to do as a scientist. And I was working with policymakers and advocacy groups and even interacting with members of the public. And I must confess, for the first two years I was absolutely terrified.
But the experience opened my eyes. And perhaps for the first time in my life, I began to see how my teenage aspirations to make the world a better place actually fit together with my scientific expertise, and increasingly my experience and expertise as a science communicator and a science policy expert. And at the heart of everything I was doing was risk.
So you go back to nanotechnology. Nanotechnology, I discovered, was just the tip of a whole new world of technology innovation, with a really urgent challenge of ensuring that new technologies are developed safely, responsibly, and effectively, so they do more good than harm. So here was a challenge I could really get my teeth into, and it was a doozy.
So just think about emerging technologies for a moment. Get your head around this. It’s easy to see how some new technologies are changing the world we live in. It wasn’t that long ago that we didn’t have the Internet. Just a few years ago, smartphones weren’t ubiquitous. When my kids were born, we didn’t have things like Snapchat, and Facebook, and Twitter. So these have all had a profound impact on the ways we live our lives, but they’re really just the tip of a much larger technology iceberg.
So you take nanotechnology for instance, this ability to design and engineer matter down at this very very fine scale. This is changing everything around us, from super lightweight materials, to how we create solar cells, to even how we develop the new generation of cancer-treating drugs. It’s a really powerful technology platform, but it’s not the only technology platform. You look at things such as artificial intelligence. Autonomous vehicles. The Internet of Things. Even gene editing. These and other technologies are emerging faster than we can keep track of them. And they’re fundamentally challenging and changing the ways we live our lives and even how we think about ourselves as human beings.
So to be completely honest, and this is the physicist in me, this is a fantastic time to be alive. We have never been surrounded by so much technological ability. But it’s also a scary one. Because each of these technologies can potentially be as dangerous as it is beneficial.
Some of these potential dangers are remarkably similar to things that we’ve dealt with in the past, so let me take one example, again from nanotechnology: carbon nanotubes. These are incredibly fine, long strands made up of carbon atoms, which are really exciting to materials scientists. They’re incredibly light, incredibly strong, and they conduct heat and electricity very, very well indeed. But if you take the wrong type of carbon nanotube—they don’t all behave this way, but the wrong type—and you get it into your lungs, it can do a lot of very serious damage. So this sounds somewhat similar to some of the challenges we’ve faced with industrial chemicals for many years now, but it is a new risk because it’s a new material.
On the other hand, we’re facing some completely new challenges, such as possible dangers of self-driving cars, for instance. Or the security risks of living in a world where everything, it seems, is connected to the Internet, whether it’s our garage door, whether it’s our clothes, or whether it’s our toaster, even. Or even the risks of artificial intelligence beginning to threaten our existence and getting to the point where artificial intelligence and computers are so smart that they decide the one thing they really cannot deal with is people. Crazy as it seems, it’s a risk that people spend a lot of time thinking about.
And then some of these emerging technologies fundamentally challenge what it means to be human. For instance, a group of scientists recently announced that they’re starting a project to create the first fully artificial human genome. They’re hoping to do this within the next ten years. And they see this as the first step to creating fully artificial people in the lab, from laboratory chemicals, with no biological parents. So just think about that for a second. Within the next couple of decades, we could be designing people on computers and growing them in the lab. I mean, just let that sink in.
So you remember that line from the movie Jurassic Park (you can see where this is going), where the character played by Jeff Goldblum says, “Your scientists were so preoccupied with whether or not they could, they didn’t stop to think if they should.” Sometimes with emerging technologies, it feels very much like this.
So looked at in this way, these new technologies seem pretty risky. But there is a subtler risk, and that’s the risk of not developing them in the first place. Because let’s face it, for most people in the world life isn’t perfect. We still have disease, and discrimination, and poverty, and pollution. And in some cases, if we get it right, new technologies can make a difference to these challenges, if they are developed responsibly.
So how do we make sure that this happens? How do we make sure that we develop new technologies that help build a better world and don’t cause more problems than we’re trying to solve with them? To address this, we’re developing a new center at Arizona State University. We’re calling it the Risk Innovation Laboratory. It’s a virtual lab where we’re experimenting with ideas and different ways of doing things. We’re effectively doing what technology innovators do. We’re getting really creative with how we think about risk, and we’re using this to discover new ways to survive and to thrive in a risky world.
So to help with this, we’re actually approaching risk in very different ways and drawing on people with very different experiences, all the way through from scientists and engineers to artists and social scientists. And one fundamental way in which we’re thinking differently is to treat risk as a threat to something that’s important.
So I have another question for you. You don’t need to raise your hands this time but just keep it in your head. Think for a moment about what is incredibly important to you. It might be your family. It might be your job. It might be your health, or happiness, or security, or a sense of wellbeing. Or it could be that freedom to learn new things and invent stuff or build stuff, or even—let’s be honest here—make a ton of money. It might even be the freedom to take risks and to be adventurous.
Okay, so you’ve got that thing in your head. Now imagine how you would feel if somebody threatened to take it away, that thing that’s really important to you. As you think about that, you get a new sense of how to think about risk: as a threat to something that’s incredibly important to you, or as not achieving something that is important to you. Not achieving something is just as important as losing something you already have.
So this is a new way of thinking about risk that can transform our approach to the safe and responsible development of new technologies. Thinking about risk as a threat to something that’s important helps you work out what the best way forward might be.
And this brings us back to risk not just being a four-letter word. Risk is inevitable. It’s what actually makes being alive what it is. Perversely, it’s sometimes what makes life worth living. But if we don’t learn to handle it, if we don’t learn to navigate it, it will get the better of us. Because make no mistake (and in case it seems like I’ve been trivializing things here, that’s not my intention), risk is really serious business. Ignore it or pretend that it doesn’t exist, and you have a big problem. Instead, we need to understand it better. We shouldn’t be shy about talking about it. And we need more imagination and more innovation in how we think about it and how we respond to it. If we do this, we’ll be better able to make this world a better place in spite of the risk, rather than failing to do so because of it. Thank you.