David Cox: Quick audience participation. How many of you—raise your hands if you have one of these.

Keep your hands up if you’re quite attached to it. Good. I’m attached to mine as well. So, usually when we think about brains, we think about the naturally-occurring kind. But what we’re gonna talk about today is the prospect of building artificial brains. And the only reason we can really have this conversation today is because there are two fields that are exploding right now and are on a collision course with one another. On one hand we have neuroscience, the study of the brain. On the other side, we have technology, and in particular computer science.

Now, it might seem weird to sort of connect technology and neuroscience. But it’s actually something we’ve been doing for a very long time. If we look through history, we’ve always looked at the mind and the brain through the lens of the current technology of our day.

So in the era of Descartes, hydraulics was high technology. And Descartes imagined that there was an animating fluid that flowed through the body to move it. So we were using technology as a metaphor for the mind.

Move to the era of Freud, and steam is the technology of the day. We start thinking about the mind building up pressure and letting off pressure, and about stoking the engines of cognition.

Fast-forward to the electronics era and radio. We have crossed wires and being on the same wavelength. Our language follows our technology. And today, hands down, the technology of the day is the computer. From its humble beginnings in the middle of the last century, to today where we all carry computers around in our pockets and slavishly look at them all the time, computers are now the lens through which we see our brains.

And these metaphors can lead us astray and lead us to think the wrong things about the brain. So if you think about the central processing unit being separate from memory, that’s not actually how our brain works. And as appealing as it might be to get a memory upgrade for our brains, that’s just not the way it’s going to happen.

But I would argue that this metaphor is different. And the reason is that computer science is actually a field of applied mathematics that lets us reason about the algorithm, which is what we’re computing, separately from the implementation, which is the hardware we use to do that computation. What computer science gives us is an equivalence: we can think about algorithms running on hardware that wasn’t the hardware they originally ran on. If we understand the algorithms of the brain, can we think about running them on the other hardware that we have, silicon hardware? And this is a quiet revolution that’s happening right now.
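That separation of algorithm from implementation can be made concrete with a toy sketch (mine, not the speaker’s): the same summation algorithm realized two different ways, a serial loop and a divide-and-conquer version of the kind you might map onto parallel silicon, produces identical results.

```python
# The same algorithm -- summing a list -- expressed against two different
# "substrates": a straightforward serial loop, and a divide-and-conquer
# version a parallel machine might run. The result is identical either
# way; only the implementation differs.

def sum_serial(xs):
    total = 0
    for x in xs:
        total += x
    return total

def sum_divide_and_conquer(xs):
    # Splits the work in half, the way a parallel machine might.
    if len(xs) <= 1:
        return xs[0] if xs else 0
    mid = len(xs) // 2
    return sum_divide_and_conquer(xs[:mid]) + sum_divide_and_conquer(xs[mid:])

data = list(range(100))
print(sum_serial(data), sum_divide_and_conquer(data))  # 4950 4950
```

The point of the toy: if the brain’s algorithms could be characterized this precisely, nothing in principle ties them to biological hardware.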

So, there’s something called deep learning, or neural networks. It’s actually quite an old technology, but in the last five years it’s been making incredible strides, driven by the availability of incredibly advanced compute and tons of data. Just five years ago it would’ve been unthinkable that a computer vision system would be able to recognize an object except on a sort of blank background.

But nowadays we can have computers that look at a picture and actually build a caption. This is a system from Andrej Karpathy and Fei-Fei Li’s lab. Here, the computer just looked at the picture and generated the caption “Man in black shirt is playing guitar.” Or “Construction worker in orange safety vest is working on road.” So what’s happened in just the last five years is amazing.

We also have very high-profile things like Google DeepMind’s AlphaGo, which beat the world champion at Go. This was basically one of the last games that humans were any good at relative to computers. It was very exciting.

And then we even have computers starting to make art. It’s not all good art, but you know, we can start to see that all these domains that we thought of as being solely human are now increasingly being encroached upon by machine learning systems.

And of course this has caused a tectonic shift in the field. There’s been massive investment by industry, billions of dollars from the likes of Google, and Apple, and Baidu. Basically an entire academic field has been privatized and brought in-house. Sometimes I feel like I’m going to be the only person left studying this. And if you’re a representative of one of these companies and you’d like to buy out my lab, we can talk later.

Of course not everyone thinks this is a good idea, or a good thing. Elon Musk is famous for saying that building artificial intelligence is equivalent to “summoning the demon.” Which is awesome when your life’s work is compared to summoning the demon. But thankfully there are other people, cooler heads, that have commented on this. Like Stephen Hawking, saying that artificial intelligence would end mankind.

But I’m here to tell you that we’re not quite there yet. This is an image from the 2015 DARPA Robotics Challenge, where robots have to operate in environments with uneven terrain.

And you know, this is hard. I’m not knocking any of these robots; these are amazing robots. But a lot of the things we take for granted and think are simple are only simple because we have the solution to the problem in our heads. Evolution gave us our brains, and that’s what we have.

There are also other examples. Before, I showed you these wonderful captions, which seem miraculous, that these computers were able to generate. But if you dig a little deeper and choose your images carefully, sometimes you find funny things.

So, this image is “a man riding a motorcycle on a beach,” which sounds like fun.

This is “an airplane parked on the tarmac at the airport.” I would say that pilot needs to be fired.

And “a group of people standing on top of a beach,” which sounds like a fun day out on the weekend.

So, there’s a sense in which these systems are truly amazing, and I don’t mean to belittle them. But there’s a sense in which we haven’t gotten the whole story yet, and something is still missing. These systems aren’t really understanding in the way we conventionally think about understanding.

So, what my lab does, and what I’m interested in doing, is going back to the brain to squeeze out some more inspiration. What are we missing from the brain that we can build into our artificial systems?

And luckily for me, around 2014 a big fish got interested in the same problem. IARPA, the Intelligence Advanced Research Projects Activity, is the high-risk, high-reward arm of the intelligence community of the United States. It’s analogous to DARPA, the defense version, which is famous for funding the creation of the Internet. They started a program that was basically right up my alley; they basically proposed my research program. It’s called Machine Intelligence from Cortical Networks, or MICrONS.

And the goal of MICrONS is threefold. One, they asked us to measure the activity in a living brain while an animal actually learns to do something, and watch how that activity changes. Two, to take that brain out and exhaustively map the “wiring diagram” of every neuron connecting to every other neuron in a particular region of that animal’s brain. And third, to use those two experimental datasets to build better machine learning: to find what deep learning and neural networks today are missing, so that we can close that gap.

So let it never be said that IARPA is unambitious. This is an incredibly difficult thing they’ve asked us to do, but fortunately I was able to put together a dream team to work on it. This is an enormous undertaking: twelve labs across six institutions, with a heavy concentration of work at Harvard and MIT. We’re gonna work on this for five years, with $28.7 million. And by the time we’re done we’ll have collected two petabytes of data, which is one of the largest neuroscience datasets ever collected.

Across this team, we have expertise in neuroscience, in physics, in machine learning, and in high-performance computing. So this is really a moonshot effort, on the sort of ambition scale of the Human Genome Project, to take a real crack at reverse engineering the brain.

So, I’m gonna walk you through a little bit of how this goes. The experiment starts on the second floor of the Northwest Labs, where my lab is. And this is going to be a sort of unusual, epic journey that a brain is gonna take.

So we start not with humans, but with rats. This one’s slightly larger than life-size on this screen. The reason we’re looking at rats is that we need to walk before we run. We’re not ready to do this experiment with humans yet. Finding human volunteers is also somewhat challenging, because we take the brain out as part of this. So we start with a rat, in many cases rats born in my laboratory for this purpose. And if you think that rats are dumb, I just want to share with you an anecdote.

A few years ago, a group that was studying invasive species released a rat onto a deserted island with various pest control measures planted on the island, and they put a radio collar on the rat to test how easy it was to eradicate an invasive rat infestation. So this was an experiment; they were interested in the ecology of the situation.

They tracked the rat for a while on this island here. After a week, the radio collar signal disappeared. They went and scoured the island, and they couldn’t find the rat anywhere, even though the island was covered in traps.

And it turns out the rat had decided to swim across the open ocean to an adjacent island, and was later found having swum several miles through open water. So these are scrappy creatures. These are not dumb animals. And what we want to do is take that scrappiness and that intelligence, understand how that learning happens, and understand how that scrappy brain works.

And we do that in a controlled setting in my laboratory. This is where we train rats to do tasks; it’s basically a video arcade for rats. Each one of those boxes is a computer-controlled training rig. We put the rat in, and then a computer takes over and trains the rat to do pretty much anything we want it to do.

So this is what it looks like inside. You can see the little lick tube here that the animal can lick to give us responses. We have some sensors. And then there’s a monitor, and what we can do is show the animal different stimuli, or different objects on a screen, and train them to do different things. And then we can ask: how does their brain look before they learn how to do that task versus after they learn how to do it?

Here’s a little video of a rat doing a task. Just to orient you, here’s the animal’s nose. You can see the rat’s happily licking here. Then objects appear on the screen, and when he makes the right response, he gets a reward of juice that he likes. When he gets it wrong, he gets a little short timeout. So we can basically train the animal to play these video games, and we can ask what changes in the brain when the animal knows how to do the task versus when it doesn’t.
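The trial logic such a rig runs can be sketched roughly like this (a minimal illustration with made-up names and numbers, not the lab’s actual software): present a stimulus, read the animal’s response, then deliver juice or a timeout.

```python
import random

random.seed(0)  # seeded so the toy numbers are reproducible

def run_trial(stimulus, respond):
    """One trial: show a stimulus ('left' or 'right'), read the lick,
    reward a correct response or impose a short timeout."""
    lick = respond(stimulus)
    if lick == stimulus:
        return "juice"      # reward for a correct response
    return "timeout"        # brief pause before the next trial

def trained_animal(stimulus):
    # A made-up "animal" that has mostly learned the task: responds
    # correctly about 90% of the time.
    if random.random() < 0.9:
        return stimulus
    return "left" if stimulus == "right" else "right"

outcomes = [run_trial(random.choice(["left", "right"]), trained_animal)
            for _ in range(1000)]
print(outcomes.count("juice") / len(outcomes))  # roughly 0.9 for this toy animal
```

Comparing the fraction of rewarded trials before and after training is exactly the kind of behavioral readout the rig gives you.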

We’re not just interested in training rats, as fun as that is. What we really want to do is look at the brain and see how it changes. So this is a rat’s brain. And we want to look at it while it’s still in the animal’s head, while the animal’s actually doing something. So we need a technology to be able to peer inside the brain.

And that’s what this is. This is a two-photon excitation microscope, a microscope powered by a very powerful invisible laser that we shine into the brain. We actually have the world’s fastest two-photon microscope now, from our collaborator Alipasha Vaziri, which can record movies of the activity of large numbers of cells at individual-cell resolution, so we can see the patterns of activity.

So this is what that looks like. You can see these flashing green dots. Every time one of those flashes happens, that’s a neuron firing in response to something in the environment. So you’re watching a rat, or in this case actually a mouse, having a thought. We can actually see thought, look at the patterns of activity, and see how those patterns change as we go from an animal that doesn’t know how to do something to an animal that does.
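One simple way to think about turning those flashes into firing events, purely as an illustrative sketch with made-up numbers, is detecting upward threshold crossings in a cell’s fluorescence trace over time.

```python
# Toy event detection: each "flash" is an upward crossing of a
# brightness threshold in one cell's fluorescence trace. The trace
# values and threshold below are invented for illustration.

def detect_events(trace, threshold):
    events = []
    above = False
    for t, f in enumerate(trace):
        if f >= threshold and not above:
            events.append(t)   # rising edge: a new putative firing event
        above = f >= threshold
    return events

trace = [0.1, 0.2, 1.5, 1.8, 0.3, 0.2, 2.1, 0.4, 0.1, 1.2]
print(detect_events(trace, threshold=1.0))  # [2, 6, 9]
```

Real pipelines do far more (motion correction, deconvolution), but the output is the same in spirit: a list of event times per neuron that can be compared before and after learning.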

Now we’re gonna go one step further: we’re gonna take that brain out and basically reconstruct all of the wiring between all of the neurons. So we take the brain out, we soak it in heavy metals, in this case osmium, and then we put it in a FedEx box and ship it to Argonne National Laboratory. In particular, we ship it to the Advanced Photon Source. The Advanced Photon Source is an electron ring that slams electrons into a filament of metal and produces incredibly bright, brief pulses of X-ray radiation. It’s basically the world’s most advanced CT machine. So if you’ve gone to a hospital and had a CT of your head, perhaps after an injury, this is basically the same thing, but at an incredibly small, microscopic scale.

And what this lets us do is see inside a piece of the brain. If we have a sort of cylindrical core of the brain, we can look inside it without cutting it, and we can see every single cell, some of the vasculature, the blood vessels that serve it, and also some of the wiring. This gives us a high-resolution picture of the brain with which we can orient ourselves. But that’s not enough, because IARPA asked us to actually figure out every single connection and every single wire between every neuron in the brain.

So what we need to do is put it back in a FedEx envelope and send it back to Cambridge, to the lab of Jeff Lichtman, who’s a close colleague of mine, to do something called serial-section electron microscopy. Here we want to actually see individual connections, and these are incredibly small. They’re so small, in fact, that you literally can’t see them with light: the wavelength of light is too big to interact with things this small. So we need to cut the tissue. It’s sort of like a big bowl of spaghetti, but on a nano scale: we need to slice it up into tiny slices and then image it, and in this case we use electrons to image it.

And what you’re seeing now is the world’s most sophisticated deli slicer. This block here is a piece of a brain that’s been embedded in plastic. It’s slowly carving off slices of the brain, which are then collected onto this tape.

To give you a sense of how thin these slices are: a human hair, just a hair out of your head, is about twenty or thirty microns wide. So this is a very zoomed-in picture of the shaft of a hair. This white bar is about ten microns, about a hundredth of a millimeter. If you wanted to see how big blood cells are, that’s about how big blood cells are. And if we zoom in even further, this line is how thin the slices we’re cutting with that deli slicer are. These slices are thirty billionths of a meter thick.

And then we collect them on tape. We have miles of tape collecting these sections of this brain, and then we spool them up. Jeff’s lab cuts them up and puts them on silicon wafers, and then we have a catalog of this animal’s brain: every piece of their brain cut into thirty-nanometer-thin slices and put onto these wafers.

And then we image them in this, which when it was built was the world’s fastest electron microscope. It images at four-nanometer resolution and produces about two petabytes of data for a cubic millimeter of brain.
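Those numbers hang together. A back-of-the-envelope check, assuming roughly one byte per four-nanometer pixel and thirty-nanometer slices through a one-millimeter cube (my assumptions, not figures from the talk beyond the resolution and slice thickness), lands right around two petabytes:

```python
# Rough check of ~2 petabytes per cubic millimeter of brain tissue.
mm = 1e-3                # one millimeter, in meters
slice_thickness = 30e-9  # 30-nanometer sections
pixel_size = 4e-9        # imaged at 4-nanometer resolution
bytes_per_pixel = 1      # assumption: one byte of grayscale per pixel

n_slices = mm / slice_thickness              # ~33,000 slices per millimeter
pixels_per_slice = (mm / pixel_size) ** 2    # 250,000 x 250,000 pixels
total_bytes = n_slices * pixels_per_slice * bytes_per_pixel
print(total_bytes / 1e15)                    # ~2.1 petabytes
```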

And this is what the images look like, and then they can be reconstructed. We take all of these images, and you can see we can identify each little piece here. What you’re seeing in these cross-sections are individual wires going from one nerve cell to another nerve cell in the brain. And then by using computer vision techniques, we can basically reconstruct all of these pieces.
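The real reconstruction pipelines are far more sophisticated, but the core idea, grouping pixels that belong to the same wire, can be sketched as connected-component labeling on a toy 2D cross-section (the grid here is invented: 1 marks a stained wire pixel, 0 marks background).

```python
# Toy "reconstruction": label each connected group of wire pixels.
def label_components(grid):
    rows, cols = len(grid), len(grid[0])
    labels = [[0] * cols for _ in range(rows)]
    current = 0
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == 1 and labels[r][c] == 0:
                current += 1
                stack = [(r, c)]              # flood-fill one component
                while stack:
                    y, x = stack.pop()
                    if 0 <= y < rows and 0 <= x < cols \
                            and grid[y][x] == 1 and labels[y][x] == 0:
                        labels[y][x] = current
                        stack += [(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)]
    return current, labels

grid = [
    [1, 1, 0, 0],
    [0, 1, 0, 1],
    [0, 0, 0, 1],
]
n, _ = label_components(grid)
print(n)  # 2 separate "wires" in this toy cross-section
```

Doing this across thousands of aligned slices, and linking components between adjacent slices, is what turns images back into a wiring diagram.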

So then we take all that data, almost two petabytes, and of all places it goes to 1 Summer Street, above the Macy’s in Downtown Crossing in Boston. It turns out Harvard rents data center space there; I actually just went to visit it recently. And the final resting place of this animal’s brain, at Harvard, is this: a two-petabyte storage array, a bunch of hard drives. We’re storing what remains of this animal’s brain in digital form there. And then from there, IARPA wants us to deliver the brain up to the cloud. So this is a forty-gigabit-per-second Brocade switch, part of the Internet2 infrastructure. This is like the fast, fast Internet. And we upload that animal’s brain to the cloud.
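For a sense of scale: even at forty gigabits per second, and optimistically assuming the link stays saturated the whole time (my assumption for the arithmetic), moving two petabytes takes days.

```python
# How long does it take to push 2 PB through a 40 Gb/s link?
data_bits = 2e15 * 8     # two petabytes, in bits
link_bps = 40e9          # forty gigabits per second
seconds = data_bits / link_bps
print(seconds / 86400)   # ~4.6 days of continuous, saturated transfer
```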

You know, this idea of brain uploading has captured a little bit of the popular imagination. Magazines like Time and Focus have started to latch onto this: what if we could upload our brain? Maybe that’s the path to immortality. Back in the 80s, William Gibson wrote a book called Neuromancer that explored these themes of brain uploading. And there have been more recent works of art and film that have explored this idea. What I can tell you is that way before humans upload their brains, it’s going to be rats that get their brains up into the cloud first.

Some people are taking this idea so seriously that, here’s an example, a woman who was recently dying of a terminal disease decided that what she wanted to do was preserve her brain, in the hopes that people like us would figure out how to later upload her brain and reconstitute it. And the techniques used to preserve her brain are similar to the ones we used to preserve our rat’s brain.

Now, if you’re excited about the idea of uploading your brain, I have good news, I have bad news, and I have neutral news. The good news is there’s nothing in principle that stops us from doing this. Now, there are scientists who, if you ask them, will say that’s crazy, we can’t possibly upload brains, it’s never gonna happen. It could happen. I’m just gonna put that out there. There’s nothing in principle that stops us from being able to digitize a brain.

Now the bad news is we have no idea how to do that yet. And it’s going to be a long time before we figure it out. It’s not even clear that we’re collecting all of the data we would need from the brain to do it. But these are the first steps. This is what the first steps look like towards understanding enough to be able to take a brain and put it into digital form.

Now, I also promised you neutral news. And the neutral news is that well before we get anywhere close to thinking about uploading a brain, many other things are going to happen first that will have huge impacts on our world.

So, take this notion of the Fourth Industrial Revolution that’s been very prominent at this meeting. If we can capture more of what makes brains smart, and adaptable, and able to learn, there’s a huge fraction of employment that’s just not gonna stick around. If we look at jobs like cleaning, factory inspection…lots of different kinds of factory automation jobs that basically involve the ability to see the world and interpret it correctly, and the ability to use your hands to enact something in the world, those jobs are gradually gonna erode and go away as we start to build robots.

And we’re already seeing this, with things like the Roomba for cleaning, and industrial robots. You might think of these as the insect brains of automation. There’s not a lot of smarts here, but there doesn’t need to be.

And already we’re starting to see much more sophisticated robots, even since that 2015 video I showed you of all those robots falling over. The robots are getting a lot better, and we’re starting to get more flexible robots that are made to work with humans. So, as we learn what’s missing from our machine learning technologies, we’re gonna see a big shift in how employment works.

And you know, one of the areas that’s super hot right now is self-driving cars. I would submit that the brain power of a rat, properly applied, is sufficient to drive a car. I’m not saying that people who drive cars are rats, please. But you know, a rat has a lot going on in its brain. There’s no reason the car needs to chase cheese. If we understand how this works, we can start to tackle these problems.

The urban driving problem is quite difficult, and I think it’s gonna be a long time before we solve it. But highway driving is perhaps closer and within reach. And people are already starting to look at trucks, at having self-driving trucks deliver goods in a more efficient way.

Unfortunately, if you look at a map (and sorry, this is a very US-centric view, because I’m from the US), quite a few states in the United States have “truck driver” as the most common occupation. So as we start building systems that can replicate what our brains can do, we’re gonna have to find something for those brains to do.

The good news is we’ve done this before. If you look at a plot of the percentage of the American workforce engaged in agriculture, back in the 1840s it was about 70%, and we’ve basically taken that down to almost zero. Maybe this time will be different, but you know, we have to think about how these technologies, as they improve, are going to affect things.

And I think one of the reasons I’m excited to be here at the World Economic Forum is that we need to start dialogues with many different kinds of stakeholders, people with many different kinds of expertise. Already in this project, we’re engaging neuroscience, physics, computer science, and high-performance computing. But we also need to start engaging law. We need to start engaging business leaders. We need to start engaging policy, and ethics. The opportunities this technology enables are enormous, but we also have to think very seriously about the consequences of the challenges that lie ahead. So, thank you for your time.