Andrew Maynard: So let me start by asking you a question: who does not like thinking about risk? So, please raise your hand if the thought of doing something that might embarrass you or harm you makes you feel uneasy. Yeah, already hands going up. Already. So it could be driving in bad weather, or wondering whether you should eat something or not. Or, that really nagging feeling that one day, you’re gonna forget to put your pants on. But you won’t remember until you’re halfway to the office.

So risk is a funny thing. It affects pretty much everything we do. And yet, most of the time we treat it like a dirty little secret. Something that’s there, but we’d rather not talk about it, a little bit like an embarrassing relative. This probably isn’t such a good idea, though. Because if we’re not smart about how we live with risk and how we think about it, it’s a little bit like being an ostrich that sticks his head in the sand and just hopes everything will go away. Spoiler alert: it probably won’t.

So, I think about risk all the time these days. It probably helps that this is my job, it’s what I’m paid to do. But it wasn’t always that way. So wind the clock back just a few years, to when I was a teenager. Like most teenagers, I was rather idealistic. I wanted to make the world a better place. But I had a problem. My problem was that I was hopeless at most things. In fact the only thing that I could do was physics, strange as it might sound. And I just couldn’t work out how I could use physics to make the world a better place. I was young at the time, remember, and I was rather naïve.

So I put all my effort into becoming a physicist. I became a research scientist. I did research that I thought was interesting; probably nobody else did. I published papers that I thought were great; I’m sure nobody else read them. In other words, I was a model scientist.

But I didn’t forget about that nagging desire to make the world a better place. And partly because of this, I got involved in workplace health and safety research. What I did was I studied airborne particles. I studied where they were generated in workplaces, how they got to people who were working there, and how they potentially entered their lungs and caused damage. I did this for over a decade, first of all working for the British government and then later for the US government, as a research scientist and as a research science leader. And over this time, something rather serendipitous happened.

So towards the end of the 1990s, people began to get very excited about a new technology. It was called nanotechnology. So this is the technology of taking matter and designing and engineering it at an incredibly fine scale, down to the level of atoms and molecules. And part of this technology involved creating exquisitely small particles that had a range of really unusual properties. They were called nanoparticles.

But as people began to do this, they started asking questions like, what happens if these particles get out into the environment when they’re not supposed to be there? Or get into the human body when they’re not supposed to be there? What are the risks?

Now this was really exciting to me. And to understand why it was exciting you have to go back to my PhD, which was at the University of Cambridge in the UK. So this is toward the end of the 1990s. My PhD was in the analysis of what we then called ultrafine particles, really nanoparticles, using what were then high-end electron microscopes. And when I finished my PhD I was told, “This is great work, studying nanoparticles, but totally and utterly irrelevant. Nobody’s interested in nanoparticles.” So you can imagine how excited I was to discover that finally, my expertise was of some use.

So, I began to get more and more involved with nanotechnology. This is when I was working for the National Institute for Occupational Safety and Health in the United States. I helped develop their research program around nanotechnology safety. And I got involved in a group of federal agencies that were looking at how we could develop this technology safely. This began to pull me out of my laboratory, and it got me more involved in thinking about risk, and about science and technology more broadly. But I was still based in the laboratory. That’s where my heart was.

And then in the mid-2000s, I was thrown completely and utterly out of my comfort zone. I was asked to join a Washington DC-based think tank, the Woodrow Wilson International Center for Scholars. And I was asked to join them as the science advisor on a new project, the Project on Emerging Nanotechnologies. And this was a project where we were trying to work with all stakeholders, groups and people who were potentially impacted by this technology, to understand how we could develop it responsibly and safely.

So talk about risk. I was thrown completely and utterly out of my comfort zone. One day I was a lab scientist; the next day, almost literally the next day, I was expected to talk with journalists. Not something they teach you to do as a scientist. I was giving Congressional testimony. Definitely not something they teach you to do as a scientist. And I was working with policymakers and advocacy groups and even interacting with members of the public. And I must confess, for the first two years I was absolutely terrified.

But the experience opened my eyes. And perhaps for the first time in my life, I began to see how my teenage aspirations to make the world a better place actually fit together with my scientific expertise, and increasingly my experience and expertise as a science communicator and a science policy expert. And at the heart of everything I was doing was risk.

So go back to nanotechnology. Nanotechnology, I discovered, was just the tip of a whole new world of technology innovation, and of the really urgent challenge of ensuring that new technologies develop safely, responsibly, and effectively, so they do more good than harm. So here was a challenge I could really get my teeth into, and it was a doozy.

So just think about emerging technologies for a moment. Get your head around this. It’s easy to see how some new technologies are changing the world we live in. It wasn’t that long ago that we didn’t have the Internet. Just a few years ago, smartphones weren’t ubiquitous. When my kids were born, we didn’t have things like Snapchat, and Facebook, and Twitter. So these have all had a profound impact on the ways we live our lives, but they’re really just the tip of a much larger technology iceberg.

So you take nanotechnology for instance, this ability to design and engineer matter down at this very, very fine scale. This is changing everything around us, from superlightweight materials, to how we create solar cells, to even how we develop the new generation of cancer-treating drugs. It’s a really powerful technology platform, but it’s not the only technology platform. You look at things such as artificial intelligence. Autonomous vehicles. The Internet of Things. Even gene editing. These and other technologies are emerging faster than we can keep track of them. And they’re fundamentally challenging and changing the ways we live our lives and even how we think about ourselves as human beings.

So to be completely honest, and this is the physicist in me, this is a fantastic time to be alive. We have never been surrounded by so much technological ability. But it’s also a scary one. Because each of these technologies can potentially be as dangerous as it is beneficial.

Some of these potential dangers are remarkably similar to things that we’ve dealt with in the past, so let me take one example again from nanotechnology: carbon nanotubes. So these are incredibly fine, long strands made up of carbon atoms, which are really exciting to materials scientists. They’re incredibly light, incredibly strong, and they conduct heat and electricity very, very well indeed. But if you take the wrong type of carbon nanotube (they don’t all behave this way, but the wrong type) and you get it into your lungs, it can do a lot of very serious damage. So this sounds somewhat similar to some of the challenges we’ve faced with industrial chemicals for many years now, but it is a new risk because it’s a new material.

On the other hand, we’re facing some completely new challenges, such as the possible dangers of self-driving cars, for instance. Or the security risks of living in a world where everything, it seems, is connected to the Internet, whether it’s our garage door, whether it’s our clothes, or whether it’s our toaster, even. Or even the risks of artificial intelligence beginning to threaten our existence, getting to the point where artificial intelligence and computers are so smart that they decide the one thing they really cannot deal with is people. Crazy as it seems, it’s a risk that people spend a lot of time thinking about.

And then some of these emerging technologies fundamentally challenge what it means to be human. For instance, a group of scientists recently announced that they’re starting a project to form the first fully artificial human genome. Within the next ten years they’re hoping to do this. And they’re seeing this as the first step to creating fully artificial people in the lab, from laboratory chemicals, with no biological parents. So just think about that for a second. Within the next couple of decades, we could be designing people on computers and growing them in the lab. I mean, just let that sink in.

So you remember that line from the movie Jurassic Park. You can see where this is going, where the character played by Jeff Goldblum says, “Your scientists were so preoccupied with whether or not they could, they didn’t stop to think and ask if they should.” Sometimes with emerging technologies it feels very much like this.

So looked at in this way, these new technologies seem pretty risky. But there is a subtler risk, and that’s the risk of not developing them in the first place. Because let’s face it, for most people in the world life isn’t perfect. We still have disease, and discrimination, and poverty, and pollution. And in some cases, if we get it right, new technologies can make a difference to these challenges, if they are developed responsibly.

So how do we make sure that this happens? How do we make sure that we develop new technologies that help build a better world and don’t cause more problems than we’re trying to solve with them? To address this, we’re developing a new center at Arizona State University. We’re calling it the Risk Innovation Laboratory. It’s a virtual lab where we’re experimenting with ideas and different ways of doing things. We’re effectively doing what technology innovators do. We’re getting really creative with how we think about risk, and we’re using this to discover new ways to survive and to thrive in a risky world.

So to help with this, we’re actually approaching risk in very different ways and drawing on people with very different experiences, all the way through from scientists and engineers, to artists and social scientists. And one fundamental way in which we’re thinking differently about risk is thinking about risk as a threat to something that’s important.

So I have another question for you. You don’t need to raise your hands this time, but just keep it in your head. Think for a moment about what is incredibly important to you. It might be your family. It might be your job. It might be your health, or happiness, or security, or a sense of wellbeing. Or it could be that freedom to learn new things and invent stuff or build stuff, or even, let’s be honest here, make a ton of money. It might even be the freedom to take risks and to be adventurous.

Okay, so you’ve got that thing in your head. Now imagine how you would feel if somebody threatened to take that away, that thing that’s really important to you. As you think about that, you get a new sense of how to think about risk: as a threat to something that’s incredibly important to you. Or as not achieving something that is important to you. The not achieving is just as important as losing something you already have.

So this is a new way of thinking about risk that can transform our approach to the safe and responsible development of new technologies. Thinking about risk as a threat to something that’s important helps you work out what is maybe the best way forward.

And this brings us back to risk not just being a four-letter word. Risk is inevitable. It’s what actually makes being alive what it is. Perversely, it’s actually sometimes something that makes life worth living. But if we don’t learn to handle it, if we don’t learn to navigate it, it will get the better of us. Because make no mistake (and just in case it seems like I’ve been trivializing things here, I don’t intend to do that): risk is really serious business. Ignore it or pretend that it doesn’t exist, and you have a big problem. Instead, we need to understand it better. We shouldn’t be shy about talking about it. And we need more imagination and more innovation in how we think about it and how we respond to it. If we do this, we’ll be better able to make this world a better place in spite of the risk, rather than failing to do so because of it. Thank you.