David Cox: So my name is David Cox, and I'm a professor of biology and computer science at Harvard. And I'm gonna tell you today about an exciting new frontier at the intersection of neuroscience and computer science. And part of what's exciting about this is that neuroscience and computer science are two of the fastest-moving fields that we have today. And for many of you, when you think about neuroscience, the first thing that comes to mind might be medicine and health. But I'm gonna argue, I'm gonna try and convince you, that actually neuroscience is much bigger than that, and that the stakes are much larger.

So, sci­ence is about under­stand­ing the world around us. But it’s also about under­stand­ing where we fit into that world. And it’s human nature to look at our­selves and try and under­stand how we fit in. And more than just how we fit in, how we’re spe­cial. And there are many things that we could think about being spe­cial that are dif­fer­ent about humans from the rest of the world. And we might even be tempt­ed to think that we’re some sort of pin­na­cle of evo­lu­tion. But it turns out that biol­o­gy teach­es us oth­er­wise. We’re just one out of mil­lions of species on this plan­et, each of which is exquis­ite­ly adapt­ed to its niche. 

We’re not the most numer­ous species. We’re not the largest. We’re not the fastest, or the strongest. We’re not the longest-lived. We’re not the most resilient. So what, if any­thing, makes us special? 

Arguably, the thing that makes us unique is our complexity. But not complexity in some generic sense. Nature is rife with complexity. What makes us special is the complexity of our brains. We, more than any other species, can learn, and adapt, and shape our environments, and pass on culture. And we've spread to every corner of the planet, and even beyond it. Every work of art, every edifice of our civilization, is born of activity in our brains and of the complexity of our minds.

And meanwhile, we're slaves to that complexity. If that complexity strays even just a little bit, we can collapse underneath it into mental disorders and disease. And at the same time, the same complexity that produces all the great things also produces many of the bad things facing us today. So, I would argue that understanding the brain is tantamount to understanding who we are, and I think we should all be interested in neuroscience. I may be biased.

And we've been thinking about it for a long time. So what is it about the brain? What is it about this complexity? How does it work? And interestingly, when we look out into the world, oftentimes we look at it through the lens of the technology of our day. So, in the 17th century Descartes, a great philosopher and mathematician, thought about the brain in terms of the technology of his day, which was hydraulic technology. He believed that the seat of the soul was the pineal gland, and that fluids animated our body much like a hydraulic system would.

Fast-forward to the 19th century, and Sigmund Freud used the analogy of the technology of his day, the steam engine. He talked about pressure being built up and released, about the mental states of our minds being driven by the engines of our conscious and subconscious.

And we fast-forward to the 20th cen­tu­ry, the era of radio and elec­tron­ics. And all of a sud­den we start talk­ing about our men­tal process­es these ways. We talk about wave­lengths, and crossed wires, and channels. 

And now today, we have com­put­ers. So, increas­ing­ly neu­ro­sci­en­tists talk about cir­cuits. And we talk about brains pass­ing infor­ma­tion, pro­cess­ing infor­ma­tion. We talk about net­works. As our tech­nol­o­gy advances, so too do our metaphors. And it’s very easy to be led astray by our metaphors. There are many ways in which our brains are not like the com­put­ers we have on our desks or in our pock­ets. But what’s dif­fer­ent about com­put­ers is that this metaphor is actu­al­ly more than just a metaphor. Computer sci­ence gives us the for­mal tools to eval­u­ate a com­pu­ta­tion­al sys­tem, a sys­tem that process­es infor­ma­tion. So, even when we’re faced with some­thing that has a dif­fer­ent imple­men­ta­tion, we can sep­a­rate out what’s computed—an algo­rithm, from how we com­pute it—an imple­men­ta­tion. And this gives us tremen­dous pow­er to rea­son about com­pu­ta­tion­al sys­tems, includ­ing ourselves. 
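That separation between what's computed and how it's computed can be made concrete with a toy sketch (an illustration in Python, not part of the talk): one algorithm, two entirely different implementations, and the formal guarantee that they agree.

```python
# Toy illustration: the same computation -- summing squares -- realized by
# two different implementations. Computer science lets us reason about the
# algorithm independently of the machinery that runs it, which is the
# leverage described above.

def sum_of_squares_loop(xs):
    """Implementation 1: an explicit loop."""
    total = 0
    for x in xs:
        total += x * x
    return total

def sum_of_squares_functional(xs):
    """Implementation 2: map/reduce style. Different mechanics, same algorithm."""
    return sum(map(lambda x: x * x, xs))

data = [1, 2, 3, 4]
assert sum_of_squares_loop(data) == sum_of_squares_functional(data) == 30
```

In the same spirit, a brain and a silicon computer are two implementations; if they run the same algorithm, the same formal analysis applies to both.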

So why is this important? Well, first of all, health. Mental health today is in many ways the last frontier of medicine. Increasingly, we're able to treat many of the diseases and disorders that afflict humanity, but mental disorders and diseases remain, in many ways, that last frontier. And part of the reason is that the tools we use to treat them are relatively crude. Most of these pills (this is Prozac) are small molecules that target molecular systems, receptors, that act throughout the brain and in fact throughout the body. Prozac actually also acts on the heart. Because they have such diffuse effects, it's very hard to target their action, and it's very hard to prevent off-target effects. In many ways it would be like going to your IT department with a broken computer, and all they could do is change the silicon properties of the chips inside. They might be able to fix the problem some of the time, but that's not really the right level of analysis, the right approach to fixing that problem. Instead, you really need to understand the software of the system. And if we could understand the software of the brain, then we could tackle complex disorders like schizophrenia, and obsessive-compulsive disorder, and depression, which aren't caused by any sort of overt, obvious damage to the brain but are probably more like miswirings, problems in the software. And increasingly, we're starting to get to the point where we do understand, at a computational level, some of the codes that the brain uses, and we can then interface with them.

So on the left here we have a cochlear implant, which is one of the ear­li­est sort of bion­ic implants. It’s a series of elec­trodes that are insert­ed into the cochlea of the ear, and you can restore hear­ing in some cas­es. So this is a direct inter­face to our ner­vous system.

On the right, we have BrainGate. This is an exciting new technology, but one still at a very crude, infant stage. The woman here is quadriplegic, and that thing on her head is an electrode array implanted in her brain. Those electrodes are reading activity from her motor cortex, and that activity is then used to move a robotic arm. So increasingly, we can interface with the brain, if we understand how it works.

Now, there's an even bolder and broader set of things we can do if we can understand at a computational level how the brain works. If we could take those codes, if we really understood how the brain works, we should be able to build it. And the famous physicist Richard Feynman once said, "That which I cannot build, I do not understand." So that's really the mantra of what my lab does. And through this lens we're basically asking: can we reverse engineer the brain? Can we study the brain's wiring and circuitry so that we can build computer systems that work the same way?

So, the consequences of this might not be immediately obvious. Let's just take a moment to think about all the different jobs in the world. Here we have some factories making cars, making iPhones. We have people sweeping the street. We have people inspecting poultry. A surprising fraction of the world's jobs requires a working visual system, so we can see and understand what we're seeing, and a working motor system, so we can take our hands and move them and manipulate our environment.

But at the point at which we can recreate these abilities in computers and in robots, a lot is going to change. So, here are some very crude robots that are sort of the advance guard of this new revolution. We have the iRobot Roomba, which is basically replacing somebody sweeping the floor. It has a very simple brain in it, perhaps more like an insect's than like a person's. We also have industrial robots, which have been around for quite a while. But what these require is a highly controlled environment: the part needs to be exactly where the robot expects it, at the moment the robot needs it to be there. It's all a very highly choreographed, very highly controlled system.

But increasingly we're finding robots that are gonna break that mold. Already we have this advance guard of Asimo, a bipedal walking robot made by Honda. It doesn't have a purpose per se, other than to be a showcase for robotics, but people are already starting to think about using robots like this in domestic-servant kinds of roles. We also have this robot, Baxter, which comes from a Boston startup that spun out of MIT. Baxter is a robot with two arms that can be trained alongside a human to perform tasks. Unlike the industrial robots that I showed you in the car factory, this system can adapt to different conditions, and it has some rudimentary vision. So, as we come to have an understanding of the brain that lets us build more and more complex abilities into our computers, we're gonna see a renaissance in robotics, and that's really going to change just about everything about our economy.

There are also jobs that aren't jobs currently. There are a lot of things we'd like to do, that only humans can do, but that we can't scale up to the scale we need. Here's an example that's literally close to home for me. The Boston Marathon bombers planted a bomb, and they were caught by many, many cameras. It turns out that now nearly every storefront, every shop, has a camera in it, and people were taking pictures of the event. The bombers were documented multiple times moving around, dropping the bomb… But interestingly, even after the fact, when the authorities collected together all of the images, it wasn't possible to find out who they were. They had pictures of them. Right in this spot you can see one picture there. Here's another picture. And it turns out that this is one of the bombers right here. And then lots and lots of these photos.

But it turns out that face recognition software was not useful, at all, in discovering these bombers, even though we had many pictures of them and we had pictures to match against. The technology that we currently have for doing machine vision, for having computers look at images and understand them, wasn't up to the task. Now, we know that humans can do this task, because the friends of these brothers saw these pictures online and then went and destroyed some evidence. So they were clearly able to do it. But what we weren't able to do is deploy, at scale, the kind of human resources that we'd need. And scaling is just something that computers can do well and humans can't. So if we can build human abilities into machines, then scalability isn't an issue anymore.

So, we want to study the brain, we want to reverse engi­neer it. That’s an awful­ly big piece to bite off all at once. So, it turns out the human brain has 100 bil­lion neu­rons in it. And it has 100 tril­lion con­nec­tions. So we can’t just under­stand it all at once. So what we do and what many oth­er labs do is to focus on one sub­sys­tem in par­tic­u­lar. And for a vari­ety of rea­sons, I study vision. 

Now, obvi­ous­ly vision from the exam­ples I gave you has sort of indus­tri­al rel­e­vance that’s hard to argue with. But in addi­tion it’s one of our most nat­ur­al sens­es. We as pri­mates use our vision all the time. We’re very good at vision. And we frankly take it for granted. 

So if we look at an image like this one, even if you haven’t seen this structure—this is close to where I live—instantly with­out any effort you’re able to read out all kinds of infor­ma­tion about the scene. So you can tell that this is a cas­tle, you could tell me which way the wind is blow­ing, you could prob­a­bly tell me how cold it was that day. 

If I take anoth­er pic­ture like this camel, even if you haven’t been to the Gobi Desert and you’ve nev­er seen a camel like this before you instant­ly rec­og­nize that this is a camel. You could prob­a­bly tell me what it would sound like to walk on the ground in the scene. 

So, all of that you got instant­ly from the image and you don’t have to exert any effort. And one of the things that’s frus­trat­ing frankly about study­ing vision is that every­one thinks it’s easy. Because you just look at things and you see them. But the rea­son it’s easy is because you have the solu­tion to the prob­lem in your head and it evolved over hun­dreds of mil­lions of years. 

So let me give you some insight into why this is actually so hard for computers to do, even if it's easy for humans. For one thing, here's an object in the world. That's one I care about quite a bit; this is my daughter. This is presumably the first picture you've ever seen of my daughter.

But if I see anoth­er pic­ture in a slight­ly dif­fer­ent pose, dif­fer­ent light­ing, every­one can instant­ly rec­og­nize that this is the same per­son, this is the same thing in the world. But at the pix­el lev­el these images have almost noth­ing to do with each oth­er. The col­ors are dif­fer­ent. The arrange­ment of pix­els is dif­fer­ent. Computers have a very hard time telling those two things are the same thing. And we can also deal with incred­i­bly rich and com­pli­cat­ed occlu­sions, and dif­fer­ent views and light­ing, so we can instant­ly rec­og­nize that. And it’s frus­trat­ing at some lev­el to try and build com­put­er vision sys­tems, sys­tems that can do what we can do, because we take it so much for grant­ed and it’s actu­al­ly such a hard problem. 
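The pixel-level point can be made with a toy sketch (an editor's illustration with invented numbers, not the talk's data): two tiny "images" of the same thing, under different lighting, that barely agree pixel by pixel.

```python
# Toy illustration: two invented grayscale "images" of the same object under
# different lighting and pose. To a person they depict one thing; a naive
# pixelwise comparison sees almost no agreement.

imageA = [10, 40, 200, 180, 30, 20]
imageB = [120, 150, 60, 50, 140, 130]  # same scene, different lighting

# Mean absolute pixel difference: the only thing a raw comparison measures.
diff = sum(abs(a - b) for a, b in zip(imageA, imageB)) / len(imageA)
assert diff > 100  # hugely different at the pixel level (scale 0-255)
```

A vision system has to see past this raw disagreement to the underlying identity, which is exactly what makes the problem hard.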

So, any given object in the world can cast infinitely different images on your retina. And it turns out the converse is true as well: any given image on your retina can correspond to infinitely many different objects in the world. So has anyone figured out what's going on in this image? Who says magnets? No, no magnets today.

So this is actually an illusion. And many of these illusions play on this tricky aspect of vision. Any given object can cast infinitely many different images, because we can change the view and lighting. But any given image could also correspond to infinitely many different objects, and this particular illusion was constructed to take advantage of that. I mean, one interpretation of me looking out at this audience is that I'm standing inside a sphere and you're all just painted on that sphere in this particular arrangement. It's not a good interpretation of the world, but it's a valid one. There's actually no proof that that's not the answer. And this is what we call in science an ill-posed problem. We have a three-dimensional world outside, and we're measuring it with a two-dimensional structure: our retinas. So we have to make inferences; we have to guess about what's in the world. And our visual system is very good at guessing the right thing. It gets it right more often than wrong, and that's why visual illusions are so compelling: they violate those usually very good assumptions.
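Why the problem is ill-posed can be shown in a few lines, under a standard pinhole-camera assumption (an illustration, not from the talk): every 3D point along a ray through the pinhole lands on the same 2D image coordinate, so one image point is consistent with infinitely many world points.

```python
# Minimal sketch of the ill-posed nature of vision, assuming a pinhole
# camera: perspective projection collapses depth, so distinct 3D points on
# one ray all project to the identical 2D retinal coordinate.

def project(point3d, focal_length=1.0):
    """Perspective projection of (x, y, z) onto a 2D image plane."""
    x, y, z = point3d
    return (focal_length * x / z, focal_length * y / z)

# Three different world points on the same ray (scaled copies of each other)...
candidates = [(1.0, 2.0, 4.0), (2.0, 4.0, 8.0), (10.0, 20.0, 40.0)]

# ...all land on exactly the same image coordinate.
images = {project(p) for p in candidates}
assert images == {(0.25, 0.5)}
```

Recovering the world from the image means inverting this many-to-one mapping, which is only possible with the kind of prior assumptions the visual system brings.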

The oth­er thing about vision is we’re con­stant­ly deal­ing with incred­i­bly com­plex and ambigu­ous infor­ma­tion. So here we have a street scene. And I think all of you could prob­a­bly make an esti­mate of how many peo­ple rough­ly are in this image. And I think we’d all agree that there are peo­ple on the oth­er side of the street as well, right. So there’s peo­ple in the fore­ground, we can see them some­what clear­ly. There’s also peo­ple in the background. 

If we zoom in on part of that back­ground, this is what you were actu­al­ly look­ing at. This is exact­ly the same infor­ma­tion just blown up a lit­tle bit. 

And if we cover this up, you were certainly able to recognize that there were people on the other side of the street, but you didn't actually have any information to prove that, or to give you that impression. The information you used to know that there were people on the other side of the street was the context. You were able to apply a model of how street scenes work, of where people should be, where their heads would be, and you were able to infer a lot of things, perhaps even to the level of almost hallucinating the impression of those faces even though you couldn't see them. There wasn't actually any real information. So, these are amazing abilities that we don't yet know how to build into computers.

So what do we know, though, about the biology? If we take an image in the world, the photons are projected onto the retina, which is a two-dimensional layer of tissue at the back of the eye that transduces the photons into an electrical signal that travels across the optic nerve to the brain. Now, the brain is a massively parallel computer made up of 100 billion elements in humans. And each neuron, each computational element, is actually a computer unto itself: it takes inputs in and sends outputs out, over some hundred trillion connections between these neurons.
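The standard caricature of one such element, inputs weighted, summed, and thresholded into an output, fits in a few lines (a sketch, not a claim about real neurons, which are far richer):

```python
# Hedged sketch: the classic abstraction of a neuron as a tiny computer.
# It weighs its inputs, sums them, and "fires" (outputs 1) if the total
# clears a threshold. This McCulloch-Pitts-style unit is the caricature
# that artificial neural systems are built from.

def neuron(inputs, weights, threshold):
    """Weighted sum of inputs, then a hard threshold."""
    activation = sum(i * w for i, w in zip(inputs, weights))
    return 1 if activation >= threshold else 0

# Example: a unit that fires only when both of its inputs are active (AND).
assert neuron([1, 1], [0.6, 0.6], threshold=1.0) == 1
assert neuron([1, 0], [0.6, 0.6], threshold=1.0) == 0
```

Wire a hundred billion of these together with a hundred trillion connections and you have the scale of the system being described.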

Now, it's not just diffusely organized; there's actually a very interesting structure to the visual system. It's arranged hierarchically. Information comes in at the back of the brain into an area called V1, and then successively information is sent to waystations where it's processed and transformed. These areas are called V1, for Visual Area 1, then V2, V4 (don't worry about where V3 went), and then there's an area called TE, in the temporal cortex. And there are actually quite a few more visual areas. It turns out that in vision there's a segregation between our processing of what something is and our processing of where it is, how fast it's moving, and things like that. So some of the other "V" numbers correspond to that other stream of processing.

So, it's interesting what happens. If you record from the neurons in these areas and measure their activity, you find that neurons in area V1 are primarily concerned with small, simple structures like little edges. And if we go up to the highest levels, in area TE, also called area IT, we find really interesting neurons. Here's a figure from the 1980s from Robert Desimone. What he did was show a monkey, with an electrode in its brain in this area, images of a face and images of scrambled faces. So they have roughly comparable visual complexity, but this one combines to form a face and that one does not.

And what you see above is the firing of the neuron in the brain. You don't need to worry too much about what this means, other than that up means more firing, and that way [indicates to the right] means forward in time. And this little bracket shows you when the stimulus was on. So without thinking too hard about it, you can clearly see that this neuron seems to like faces. It fires when the monkey sees faces. And this is actually quite magical, when you have an electrode recording from a neuron and you're showing stimuli and figuring out what the cell fires in response to. It's probably a little bit more complex than just saying that this is a face neuron. But at the same time, it's reasonable to say that this neuron represents the face.
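The analysis behind a figure like that can be sketched simply (a hedged illustration; the spike times below are invented, not Desimone's data): count spikes while the stimulus is on, and compare against the baseline rate.

```python
# Hedged sketch of a firing-rate analysis: given spike times (in seconds)
# from a recorded neuron, compare the rate during the stimulus window
# against the rate before the stimulus. Spike times here are invented.

def firing_rate(spike_times, window):
    """Spikes per second within [start, end)."""
    start, end = window
    n = sum(1 for t in spike_times if start <= t < end)
    return n / (end - start)

spikes = [0.02, 0.11, 0.52, 0.55, 0.58, 0.61, 0.66, 0.71, 0.93]
stimulus_window = (0.5, 0.8)   # stimulus on
baseline_window = (0.0, 0.3)   # before stimulus

rate_on = firing_rate(spikes, stimulus_window)    # 6 spikes / 0.3 s, ~20 Hz
rate_off = firing_rate(spikes, baseline_window)   # 2 spikes / 0.3 s, ~6.7 Hz
assert rate_on > rate_off  # the cell "likes" this stimulus
```

A neuron whose rate jumps for faces but not for scrambled faces is exactly the pattern in the figure.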

So then, what my lab and other labs do is take inspiration from the natural system, and from what we can glean about how it works, and then build an artificial system that shares the same structure and shares aspects of the same processing. And where the natural brain is made up of billions of neurons, the artificial system is built up of artificial neurons that are basically functions. So what we need to do is study the system, figure out how to build versions like that, and then we can deploy these in a variety of contexts. My lab uses these for face recognition and face detection. We also use them for robot navigation and a variety of other tasks.
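The shape of such a system, simple neuron-functions stacked into successive stages the way V1, V2, V4, and TE are stacked, can be sketched in miniature (an illustration with arbitrary placeholder weights, not one of the lab's trained models):

```python
# Minimal sketch of a brain-inspired artificial system: a stack of layers,
# each made of simple artificial "neurons" (functions that weight, sum,
# and squash their inputs), transforming the signal stage by stage.
# Weights are arbitrary placeholders, not a trained model.

import math

def layer(inputs, weight_matrix):
    """One processing stage: each output unit is a sigmoid of a weighted
    sum of all the inputs."""
    return [
        1.0 / (1.0 + math.exp(-sum(w * x for w, x in zip(row, inputs))))
        for row in weight_matrix
    ]

def network(inputs, weight_matrices):
    """Feed the signal through successive stages, like the visual hierarchy."""
    signal = inputs
    for w in weight_matrices:
        signal = layer(signal, w)
    return signal

# Four inputs -> 3 units -> 2 units -> 1 output unit.
weights = [
    [[0.5, -0.2, 0.1, 0.7], [0.3, 0.8, -0.5, 0.2], [-0.6, 0.4, 0.9, -0.1]],
    [[0.2, -0.7, 0.5], [0.6, 0.1, -0.3]],
    [[1.0, -1.0]],
]
out = network([1.0, 0.0, 1.0, 0.5], weights)
assert len(out) == 1 and 0.0 < out[0] < 1.0
```

Real deployed systems differ mainly in scale and in how the weights are learned from data, not in this basic structure.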

So the prob­lem is that the pro­cess­ing pow­er of the brain is actu­al­ly quite a bit more than the pro­cess­ing pow­er of a com­put­er. So it’s at least petaflops of com­pu­ta­tion­al pow­er in the brain, which is remark­able also con­sid­er­ing that it only dis­si­pates some­where between fif­teen and twen­ty watts. So, it’s using about as much pow­er as your lap­top and yet it’s as pow­er­ful com­pu­ta­tion­al­ly as some of the most pow­er­ful super­com­put­ers in the world. So that’s an inter­est­ing fact in and of itself. 
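Taking the talk's figures at face value, the efficiency claim is back-of-the-envelope arithmetic (the one-petaflop number is an order-of-magnitude assumption, not a measurement):

```python
# Back-of-the-envelope arithmetic for the brain's power efficiency, using
# the talk's rough figures: ~1 petaflop of processing for ~20 watts.
# Both numbers are loose order-of-magnitude assumptions.

brain_flops = 1e15   # ~1 petaflop (assumed)
brain_watts = 20.0   # the talk's 15-20 W figure

efficiency = brain_flops / brain_watts
print(f"{efficiency:.1e} flops per watt")
```

Fifty trillion operations per watt is the target that makes the brain's energy budget so striking compared with conventional hardware.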

Eventually we'll figure out how to get that power efficiency, but in the meantime what we do is build up large clusters. We use lots of computers to try and mimic the power, or the abilities, of a brain, and we'll worry about the power later. And this is again one of the great things about computer science: you can divorce the algorithm from the implementation. We can figure out how to implement it efficiently later.

And you may have heard stories about how there was a group associated with IBM that claimed they had assembled enough computational power to simulate the brain of a cat. This was big news about five years ago. And it's a bit of a curious claim, but it's an illustrative one. It's sort of like saying you took aluminum and bolts and put them together, and you got an airplane. I don't particularly want to fly on an airplane if somebody just told me "I've assembled enough aluminum and enough bolts to build an airplane." I'd actually like to see that plane fly.

So, in the case of this cat, if it's not chasing mice and catching mice, our job sort of isn't done. And this is really the hard part. You'll find people claiming that they've built these supercomputers and we can finally simulate brains. The question is: what does that brain do? Does that brain actually do the important things and the interesting things that brains can do, or does it just have sort of a virtual seizure? And there's actually a huge European Union project, a multi-billion-euro project, aimed at simulating a huge brain, but not necessarily with a whole lot of emphasis on what the brain's going to do. And there are differences of opinion about whether that's a good idea.

So, this is actually an incredibly ripe time to be in this area. It turns out we've been studying these artificial brain-like systems, called artificial neural networks, for a very long time. The first neural network ideas were born in the '40s, and in the '80s they became a big thing, and then in the '90s they hadn't quite delivered yet, so the whole thing collapsed in what's called the AI winter. But today is actually a really sweet time to be in this business, because the systems have gotten pretty good. And you might've heard stories like these.

So Google just bought a company called DeepMind for half a billion dollars, and that was entirely based on this technology of building brain-inspired computational systems. And meanwhile Google, and Baidu, and Twitter, and Facebook are basically hiring up a huge fraction of the field. Actually, Mark Zuckerberg showed up this past year at one of our field's major conferences and basically hired everyone in sight. So there are people at Google now claiming, sort of back of the envelope, that perhaps 10% of the best people in the field now work for these companies. It's an unprecedented privatization of an entire academic field. And at some level, that's at least an indication that some of the smart money thinks there's some gas here.

But the interesting thing about it is that this wasn't really driven by some conceptual advance. It's much more driven by computational power and the availability of big data. YouTube alone collects hundreds of hours of video per minute. That's just a huge, huge amount of data. And Google has the computational resources, the server farms, to run it. And in many ways, with the systems that are now available, that Google's getting so excited about buying and that are winning a lot of these academic benchmarks and challenges, what's changed isn't so much that we've understood something new about the brain; a lot of our insights are from the '80s. What's changed is that now we have huge amounts of data.

But at the same time, a lot of the tasks that we’d like to solve, like the Boston Marathon bomb­ing, like that com­plex street scene, we still aren’t able to do. We aren’t able to do them just with lots and lots of data and just with lots and lots of com­pute pow­er. We need more infor­ma­tion. We need more clues about how the sys­tem is organized. 

So fortunately, there are sort of two huge tidal waves on a collision course with one another. On one hand, we have all this data, and we have an unprecedented amount of compute power, and we have some real traction in getting useful applications out of these computer algorithms. On the other hand, neuroscience is going through an absolute revolution in new tools and techniques. And my lab is a bit unusual in that we try to do both. So in addition to building computer algorithms of the sort that Google's interested in, we also want to go into the brain and look for clues about what we should build next, and get data that we can use to constrain the algorithms that we build.

So this is rough­ly speak­ing how we reverse engi­neer a brain. So imag­ine if you had a com­pet­ing prod­uct that one of your com­pet­ing com­pa­nies pro­duced and you did­n’t know how it worked but you real­ly want­ed to know how it worked, you might buy the prod­uct, open it up— There are laws against that in some places but you might do it any­way. You open it up, you put some oscil­lo­scope probes in, and you try to fig­ure out how it works. You reverse engi­neer the sys­tem. And rough­ly speak­ing we can do the exact same thing with nature. It just so hap­pens that instead of being a com­pet­ing prod­uct it’s actu­al­ly usu­al­ly a warm-blooded fur­ry crea­ture. Or a human.

So what we have here are some of the— This is pret­ty much the ear­li­est tech­nol­o­gy for reverse engi­neer­ing the brain. This is a tung­sten micro­elec­trode. So this is basi­cal­ly a wire that you can hook up to an ampli­fi­er. These are two neu­rons, and this gets down to about twen­ty microns, or twen­ty thou­sandths of a mil­lime­ter at the very tip. And then tra­di­tion­al­ly what you do is you go and you park an elec­trode next to a neu­ron, and you lis­ten to it. You lit­er­al­ly put the ampli­fi­er to a speak­er, and you can lis­ten to the cell. And the way cells com­mu­ni­cate with each oth­er is by some­thing called action poten­tials, or spikes. And they’re lit­tle pop­ping nois­es. So you can actu­al­ly hear, as you stim­u­late the cell or you show an image, you can hear a lit­tle tak tak tak tak tak of the cell fir­ing, and that’s what this lets you do.

Now, what's exciting is that through all kinds of innovations in other industries, we increasingly have access to much better versions of this technology. This is the kind of electrode array that we use in my laboratory: a silicon micromachined electrode array. You can see these little dots; they're iridium electrode recording pads. So it's sort of like one of those microelectrodes, except now we can have dozens or hundreds of them. This basically sticks into the brain, and we can wiretap a much larger number of neurons.

And then this is something new that we're developing, or starting to use in my lab. These are carbon microwires. Each is five microns in diameter, or about a twentieth of the width of a human hair. They're actually almost impossible to see with the naked eye because they're so small. We can get huge numbers of these into brains now; we can snake them in. And because they're so flexible, they can kind of float in the brain. The brain's always pulsing, because there's blood flowing through it, but these guys can sort of float with it, and then we can get isolations for a very long period of time.

Now, these are frankly the old-school technologies; that was just the updated version of an old technology. But there are actually quite a number of new, exciting technologies available as well. This picture, I just learned, was taken by Feng Zhang, who just gave the Betazone presentation. I stole it without knowing it was his, and…anyway, thanks, Feng.

So this is an example of optogenetics. What this is: researchers, in particular Karl Deisseroth and Ed Boyden at Stanford and MIT respectively, developed a way to introduce ion channels (light-sensitive proteins from other species, and in some cases engineered versions of those proteins) into neurons. And what this lets you do is shine light on the cells and either turn them on or turn them off. It's a little bit like installing an on/off switch in neurons. And because these are targeted with genetic technologies, you can target specific kinds of cells; there are different cell types in the brain. So we can give certain cells a kick, we can turn off certain cells, we can start to manipulate the circuits. Again, this is for reverse engineering the brain. These are the kinds of things you want to be able to do: selectively probe different parts of the circuit, and see what happens.

In addi­tion we also have new opti­cal tech­nolo­gies for record­ing the activ­i­ty. So I showed you the old-school thing, which was putting an elec­trode in and mea­sur­ing the elec­tri­cal poten­tials near the cell. But there’s actu­al­ly quite a few new tech­nolo­gies. So this is a pic­ture of an instru­ment in my lab. It’s called a two-photon exci­ta­tion micro­scope. And what we do…so the rat rough­ly speak­ing would go right here. And there’s this lit­tle wheel he can run on. And then this is a pow­er­ful laser that we shine into his brain. 

And the rea­son shin­ing a laser into the brain works in this case and lets us see activ­i­ty is because we’ve also intro­duced a genetically-encoded cal­ci­um indi­ca­tor. So, when cells fire cal­ci­um rush­es into the cell, and then these genetically-encoded flu­o­rophores will light up when the cel­l’s active. 
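The standard readout for this kind of calcium imaging is to express each frame's fluorescence as a change relative to baseline, often written dF/F (a hedged sketch; the trace below is invented, not lab data):

```python
# Hedged sketch of the usual calcium-imaging readout: report each frame's
# fluorescence as dF/F, the change relative to a baseline F0, so that
# brightening registers as activity. The trace is invented for illustration.

def delta_f_over_f(trace, baseline_frames=3):
    """dF/F for each frame, using the mean of the first frames as F0."""
    f0 = sum(trace[:baseline_frames]) / baseline_frames
    return [(f - f0) / f0 for f in trace]

# A cell that brightens (fires) mid-trace, then dims back toward baseline.
fluorescence = [100.0, 102.0, 98.0, 180.0, 160.0, 110.0]
dff = delta_f_over_f(fluorescence)
assert dff[0] < 0.05 and max(dff) > 0.7  # ~80% brightening at the peak
```

Watching a field of cells is then just watching many such traces in parallel, one per glowing cell body.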

So if you look here you just saw there was a cell that’s sort of glow­ing. You’re watch­ing right now a series of cells—each one of these round bod­ies is a cell—in the rat’s visu­al cor­tex. And as it lights up and gets dim­mer, you’re watch­ing the activ­i­ty of the cell. So when the cell fires it gets brighter, and we can use this tech­nol­o­gy to record from hun­dreds of cells. And crit­i­cal­ly we can record from the same cells over long peri­ods of time. So you can imag­ine learn­ing not just how the brain works sort of in its final steady state, but you can also start to study how the brain changes over time. And this is the kind of thing that when we’re build­ing machine learn­ing tech­nolo­gies we real­ly like to see them learn­ing in action.

And then there are other exciting technologies. So this is something that's happening just down the hall from me. This is Bobby Kasthuri and his advisor Jeff Lichtman, and what we see here is an electron micrograph of a brain. So what we can do is take the brain that I just showed you, where we were watching the activity, and take the brain out (this is one thing that's not great to do in humans, but you can do it in rats). We've imaged those cells, so we know what their activity was like. But then we can actually slice the brain with a very fine knife, and we can look at the very fine structures of the tissue. So this is one example image, but it's actually part of a large volume of tissue that's imaged this way. And the reason you need to use an electron microscope is that these features are actually too small to image with light. The wavelength of light would actually be something like this. Light is just too big to interact with things this small.
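The "light is too big" point can be made with back-of-envelope arithmetic using the Abbe diffraction limit. The numbers below are rough textbook values, not figures from the talk:

```python
# Rough diffraction-limit arithmetic (textbook numbers, for illustration).
wavelength_nm = 500          # green visible light
numerical_aperture = 1.4     # a good oil-immersion objective
# Abbe limit: smallest resolvable distance ~ lambda / (2 * NA)
resolution_nm = wavelength_nm / (2 * numerical_aperture)

synapse_nm = 20              # synaptic-cleft / fine-neurite scale, order of magnitude
print(round(resolution_nm))            # ~179 nm resolution limit
print(resolution_nm > synapse_nm)      # True: far coarser than the ~20 nm features
```

Electron wavelengths are orders of magnitude shorter, which is why electron microscopy resolves these structures easily.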

But with an elec­tron micro­scope, we can slice, and recon­struct, and then we can trace the wiring. So we can lit­er­al­ly get the wiring dia­gram from the very same cells that we were just imag­ing to get their activ­i­ty. And this is an incred­i­bly pow­er­ful tech­nique that Jeff Lichtman has large­ly pio­neered, and that we’re join­ing forces to use.

And here’s an exam­ple of a sin­gle cel­l’s den­drite, which is the input process. And then all of the oth­er process­es con­nect­ing onto it. So I told you there were 100 tril­lion con­nec­tions. These are the con­nec­tions to just one of those neu­rons, in one place. And you can see this incred­i­ble com­plex­i­ty of dif­fer­ent kinds of stuff that’re connected. 

And real­ly this tech­nol­o­gy is game-changing because it means that we can know every­thing about how the brain is hooked up. We can seg­re­gate out dif­fer­ent kinds of processes—axons, which are the out­puts; den­drites which are the inputs; also a num­ber of dif­fer­ent things like glial cells that are sup­port­ing cells; there’s uniden­ti­fied stuff which I’m fas­ci­nat­ed by, but uh…[shrugs]…there you go. So, it’s real­ly an amaz­ing time to be think­ing about neu­ro­science and think­ing about how this all fits together. 

So the oth­er thing I should men­tion, a lot of peo­ple want to see this work being done in humans, and I think this is actu­al­ly a mis­take. So I men­tioned that we’re doing this in rats and, there real­ly is a huge advan­tage to that. So, imag­ine you were an alien com­ing to Earth, and you did­n’t know any­thing about what cars were. You saw these things mov­ing around, but you weren’t quite sure what they were. A sen­si­ble thing to do would be to get a car and take it apart and try to fig­ure out how it works. 

Now, you could choose my car. So I took a pic­ture of the inside of my car. This is a 2007 Prius. It’s a pret­ty good car. It’s a com­pli­cat­ed car. It’s got a very com­pli­cat­ed pow­er­train, it’s got a motor and an engine. It’s got thir­teen com­put­ers on board, computer-controlled fuel injec­tion. It’s a great car. It’s a mar­vel of engi­neer­ing. But if I were try­ing to study cars, it might not actu­al­ly be the right car for me to start with. 

I might prefer to start with the car I learned to drive on, which is a 1980 Ford Pinto. That's a terrible car. It's not a good car at all. But it's got a carburetor, it's got big parts, it's got spark plugs. It might be less good at what it does, but it's actually a better system to start with. So this is sort of the training wheels for understanding. And what I'm going to argue is that, you know, there's a drive to study humans, because we're humans and we want the neuroscience of humans. But really, we're not at that stage yet. We're not quite there yet.

So what we’re going to do instead is we’re gonna find the Ford Pinto of nature, which is the rat. And actu­al­ly, call­ing a rat a Ford Pinto is total­ly not fair, because they don’t explode, and they’re actu­al­ly quite won­der­ful at what they do. Nearly every organ­ism on Earth is won­der­ful at what it does, and if it weren’t won­der­ful what it does, it would be replaced by some oth­er crea­ture that could do what it need­ed to do bet­ter. But it’s true that their brains are much sim­pler. Again this idea that this com­plex­i­ty is what makes us dif­fer­ent? Like, now we need to dial that back. We need to look at some­thing that’s sim­pler. And in sheer num­bers, the num­bers of neu­rons are much small­er. Where the brain is sort of two pounds of stuff in our head, a rat’s brain is about this big. So when we do things like con­nec­tomics, or we do imag­ing, we can actu­al­ly start to make some traction. 

And this is just to show you what my lab looks like. So, because we're studying neuroscience, which is really the biology of behavior, we've built all these rigs to control the behavior. So basically what I'm telling you is I have an army of trained rats that live in my lab, and these are the rigs we train them in. It's basically a series of high-throughput boxes. We take the animals and put them in, and they basically play little video games. And then we teach them to do stuff, so that when we then go and measure the activity in their brain, or measure changes in that activity, we can do it with respect to an actual thing the animal's doing.

And this is just to give you a sense of what this looks like. So this is a mon­i­tor where we’re show­ing the ani­mal dif­fer­ent objects, if you’re inter­est­ed in object recog­ni­tion. This is a rat down here. So you can see there he is with his whiskers, and he’s lick­ing. And this is basi­cal­ly PlayStation 4 for rats. This is about as good as it gets. 

So, he’s just touch­ing these lit­tle capac­i­tive sen­sors and they put out lit­tle bits of juice. And what this lets us do is to have very fine con­trol over the behav­ior, in a very high-throughput way, and we can mix this with all these tech­nolo­gies and bun­dle this all up to try and build—he’s adorable, isn’t he?—build up an under­stand­ing of how his brain works. [the screen in the train­ing rig flash­es sev­er­al times] He just got one wrong. 

So, this is an amazing time to put all this stuff together. And we're actually starting to assemble a team. I mean, this is an enormous undertaking. The connectomics alone, if we were to take a millimeter cubed of tissue and slice it up and try to image it, that's one and a half petabytes of data. On top of that there's an enormous quantity of data to record from all those neurons, and all the machine learning expertise we need to bring together. So what we're doing now is assembling a team across Harvard and MIT and a few other institutions, basically to do a very serious take on this reverse engineering of the brain, partly driven by interest now from the government. The Intelligence Advanced Research Projects Activity, which is basically the intelligence version of DARPA, which you may have heard of, is now putting out a challenge for groups like ours to assemble and really take seriously this idea that we can take all of these technologies which are right on the cusp, go to the very frontier of what we're able to do with them, put all that information together, and really make a front-on push.
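The one-and-a-half-petabyte figure is easy to sanity-check. The voxel dimensions below are typical serial-section electron-microscopy numbers chosen for illustration, not the project's actual acquisition parameters:

```python
# Back-of-envelope check of the "1 mm^3 ~ 1.5 petabytes" figure.
# Assumed (illustrative) voxel size: 4 nm pixels, 40 nm section thickness.
voxel_x_nm, voxel_y_nm, voxel_z_nm = 4, 4, 40
bytes_per_voxel = 1                 # 8-bit grayscale

side_nm = 1_000_000                 # 1 mm expressed in nanometres
voxels = (side_nm / voxel_x_nm) * (side_nm / voxel_y_nm) * (side_nm / voxel_z_nm)
petabytes = voxels * bytes_per_voxel / 1e15
print(round(petabytes, 2))          # ~1.56 PB, in line with the quoted figure
```

Change any assumed dimension and the total moves accordingly, but the order of magnitude (petabytes per cubic millimeter) is robust, which is why the data handling alone demands an industrial-scale partner.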

This is not an easy task for us to undertake. It's going to take money. It's going to take academic cooperation. It's going to take cooperation from private corporations. We're increasingly working with Google now, because Google is one of the only entities in the world that can deal with this much data, and we're collaborating with them now. And we're going to have to bring all of that together to make this push. But the sense that this really is the key, the crux of our humanity, even if we're studying it in rodents, really makes it one of the greatest challenges of our time: to go to the frontier and see if we can figure out how these brain systems work. And then figure out if we can build them ourselves. So with that, I will close. Thank you.