Technology can help us build small computers inspired by the brain, and that is the motivation for my work. Making our processors smaller and faster has turned them into energy ogres. If we were to increase their speed any further, they would self-destruct, heating locally as much as a nuclear reactor.

If we want to continue increasing the performance of our computers, we need to rethink the way we compute. And our brains are wonderful proof that impressive computations can be carried out with a very low power budget. We are able to recognize people we barely know in just a fraction of a second, and we do this much faster than any supercomputer while consuming one thousand times less power. Therefore it is very relevant to take inspiration from our brain to build fast and energy-efficient cognitive computing hardware.

But then, what is the difference between our brains and our computers? Well, everything. Their building blocks, their architecture, and their principles of computation are different. Our processors are made of billions of transistors. So inside, memory and processing are spatially separated. And data is continuously transported back and forth, which consumes a lot of power. In addition, our computers are sequential. They do mostly one thing at a time, slowing them down.

In comparison, in the brain, there are a hundred billion neurons and one thousand times more synapses. Memory, provided by the synapses, is entangled with the processing provided by neurons, which all work in parallel. And so this entanglement and this parallel processing are two reasons the brain is low-power and fast.

Our computers are excellent at precise arithmetic; our brains excel at cognitive tasks. So how can we build cognitive hardware that is compatible with our current computers? How can we capture the essence of biological synapses and neurons, and map it to hardware?

For synapses, there is some consensus. We know that it is very important to reproduce their plasticity: in other words, their ability to reconfigure the strength with which they connect to neurons. Indeed, we know that it is this plasticity that allows memories to be formed and stored in the brain. So a chip with plastic synapses could learn and adapt to a changing world. For example, the exact same chip could learn to detect diseases if provided with medical information, or to detect traces of pollutants if provided with water composition data. This would be very useful.
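
To make this idea of plasticity a little more concrete, here is a minimal sketch in Python, assuming a generic Hebbian-style weight update stands in for the biological mechanism; the array sizes, the learning rate, and the rule itself are illustrative choices, not the design of any real chip.

import numpy as np

# Illustrative sketch only: "plasticity" is modeled as a Hebbian-style update
# of synaptic weights, where connections between co-active units are strengthened.
rng = np.random.default_rng(0)

n_inputs, n_outputs = 4, 2
weights = rng.uniform(0.0, 0.1, size=(n_outputs, n_inputs))  # synaptic strengths

def present(x, weights, learning_rate=0.01):
    """Present one input pattern x (values in [0, 1]) and adapt the synapses."""
    y = weights @ x                                     # neurons sum their weighted inputs
    weights = weights + learning_rate * np.outer(y, x)  # strengthen co-active pairs
    return y, np.clip(weights, 0.0, 1.0)                # keep strengths bounded

# Repeatedly presenting the same pattern "imprints" it in the synaptic weights.
pattern = np.array([1.0, 0.0, 1.0, 0.0])
for _ in range(100):
    _, weights = present(pattern, weights)

print(weights.round(2))  # the columns matching the active inputs end up strongest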

So, how can we fabricate electronic nano-synapses and reproduce this plasticity? It is very inefficient to implement these properties with transistors. There are substantial efforts to develop novel materials and novel physics to emulate artificial nano-synapses. The best solution today is the memristor, a kind of nano-resistor. These nano-fuses can regulate the electrical current flow between two neurons. If they let a lot of current flow, the synapse is active and a memory is imprinted. If they let just a trickle of current flow, the synapse is inactive.
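
To picture what these memristors buy us, here is a rough sketch, not a device model: treat each memristor as a programmable conductance sitting where an input line crosses an output neuron. Ohm's law and the summing of currents along each output line then perform a weighted sum directly in the analog domain. The on and off conductance values below are illustrative assumptions.

import numpy as np

# Rough sketch, not a device model: each memristor is reduced to a programmable
# conductance G. With Ohm's law (I = G * V) and currents summing along each
# output line, a crossbar of memristors computes a weighted sum in one shot.
G_ON, G_OFF = 1e-3, 1e-6       # siemens: high conductance = "active" synapse

# Conductance matrix: rows = input lines, columns = output neurons.
G = np.array([
    [G_ON,  G_OFF],
    [G_OFF, G_ON ],
    [G_ON,  G_ON ],
])

V = np.array([0.2, 0.0, 0.1])  # voltages applied to the input lines

I = V @ G                      # total current collected by each output neuron
print(I)                       # a large current means a strongly "remembered" input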

Researchers know how to fabricate tens of these memristors. But the challenge is to build dense arrays of hundreds of millions of these artificial nano-synapses, and to connect them to electronic neurons. And the main trend today to fabricate these electronic neurons is to assemble tens of transistors, which takes a lot of space on silicon.

So how can we fabricate nano-neurons? Well, the approach we are using in my laboratory is to devise tiny electronic oscillators. Indeed, the brain has its own rhythms, and we know that for some computations, neurons behave as small oscillators that vibrate differently depending on the signals incoming from other neurons. We want to build chips that can learn and process information through the complex vibrations of these electronic oscillators, coupled with memristor synapses.
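
The talk does not give the oscillators' equations, so the following is only a loose illustration built on the well-known Kuramoto model of coupled phase oscillators: the input shifts each oscillator's natural frequency, the coupling matrix stands in for the memristor synapses, and the pattern of relative phases the network settles into is read out as the result. All the numbers are made up for the example.

import numpy as np

# Loose illustration, not the laboratory's model: coupled Kuramoto phase
# oscillators whose natural frequencies are shifted by the input. The relative
# phases the network settles into act as a "fingerprint" of that input.
def run_oscillators(inputs, coupling, steps=2000, dt=1e-3):
    n = len(inputs)
    natural = 2 * np.pi * (10.0 + inputs)    # input shifts each natural frequency (Hz)
    phase = np.zeros(n)
    for _ in range(steps):
        # each oscillator is pulled by its neighbours (coupling ~ synapse strengths)
        pull = (coupling * np.sin(phase[None, :] - phase[:, None])).sum(axis=1)
        phase += dt * (natural + pull)
    return np.angle(np.exp(1j * (phase - phase[0])))   # phases relative to oscillator 0

coupling = np.full((3, 3), 5.0)              # all-to-all "synaptic" coupling
np.fill_diagonal(coupling, 0.0)

print(run_oscillators(np.array([0.0, 0.5, 1.0]), coupling).round(2))
print(run_oscillators(np.array([1.0, 1.0, 0.0]), coupling).round(2))
# Different inputs leave a different pattern of relative phases.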

Are you familiar with djembes, the African drums? Well, when the drummer strikes the edge or the center of the drum, the skin vibrates differently, producing a different sound. So you could say that these drums are able to classify. By listening to the sound, which is the result of the computation, you can tell whether the drummer has struck the center or somewhere around the edge. So, our chips would work in a similar way, but with nanotechnology, producing much more complex and crazy vibrations, allowing much more computing power.

So, this is a field of research that is still in its infancy. But if we succeed, we could make sense of Big Data at a very low energy cost. We could give robots a small embedded brain to make them more autonomous. And we could build smart prostheses for medical applications. In addition, by trying to build systems inspired by the brain, we could learn more about the brain itself.

So the question I would like to ask you today is: what would you do? How could you use, in your everyday life or in your business, a small chip that can do powerful pattern recognition with a very low power budget?

Thank you.