When my son was two years old, one day I showed him the character "E" on a piece of paper. The next day he would point to the different Es in the street, including this huge upside-down E painted on the football field.

I was amazed that he could learn and generalize from just one example. When my little daughter saw this Picasso painting, she screamed, "Face!" right away, even though she had never seen such a distorted face before.

Of course, being my children, they are naturally really smart, but how is that possible? Computers today aren't this intelligent. Computers can only do things that they are trained for. For example, after the Boston Marathon bombing, human FBI agents had to come in to watch hours of surveillance tape to identify the bombers. Computers cannot do that because they don't even know who or what to look for.

We can train computers to learn to recognize objects by giving them millions of examples with the correct answers. A human baby, on the other hand, learns to recognize many concepts and objects all by itself, simply by interacting with a few examples in the real world.

My research at Carnegie-Mellon involves looking inside the brain to study what is going on at the level of individual brain cells and circuits when the brain is seeing and learning to recognize objects. We want to use this knowledge to make computers see and learn like humans.

For humans, seeing is a creative process. There is a distinction between what our eyes take in, which are fragments of the world, and what we perceive. We rely heavily on our experience and knowledge to make up the image that we see in our mind.

These pictures illustrate what I mean. On the left you see a red translucent surface, but really there's no red surface. Only fragments of the black rings have been turned red. On the right you see a white triangle, but in reality it's not there. It is an illusion.

What we see in our mind is our interpretation of the world. The brain fills in a lot of missing details to make up the most probable mental image that can explain what comes into our eyes. We see with our imagination, and we create the image that we perceive in our mind.
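In probabilistic terms (one standard way to formalize this idea; the talk itself doesn't spell it out), the percept can be read as the hypothesis about the world that best explains the input from the eyes, weighted by prior experience:

```latex
h^{*} = \arg\max_{h} P(h \mid I) = \arg\max_{h} P(I \mid h)\, P(h)
```

Here $I$ stands for the fragmentary sensory input, $P(I \mid h)$ measures how well a candidate interpretation $h$ would explain that input, and the prior $P(h)$ carries the experience and knowledge mentioned above.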

Creating this image in our brain involves the interactions of many levels of brain circuits in the visual cortex, the part of the brain that is responsible for processing visual information.

During perception, information flows up and down across the different levels to integrate global and local information. We have observed that at the high level, neurons can see the white triangle, but only fuzzily.

Neurons at the lower level can see more clearly, but initially they only see a fragmented view of the world. After a brief moment, though, they start to represent the white triangle as well. We believe that the brain creates this image as a way to check whether its interpretation of the world is correct.

This is the result of a computer program we wrote based on the same principles. Given an image, the program has to interpret its 3D structure. For each interpretation, it can imagine what it expects to see. And if the imagined image matches the input image, that is, if it explains the image, the program knows it has the right answer.
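A minimal Python sketch of that analysis-by-synthesis loop, with the caveat that the candidate interpretations, the render function, the similarity measure, and the acceptance threshold are hypothetical placeholders rather than the actual program described here:

```python
import numpy as np

def similarity(a, b):
    """Normalized correlation between two images; one simple way to score a match."""
    a = (a - a.mean()) / (a.std() + 1e-8)
    b = (b - b.mean()) / (b.std() + 1e-8)
    return float((a * b).mean())

def interpret(image, candidates, render, threshold=0.9):
    """Analysis by synthesis (a sketch): for each candidate 3D interpretation,
    imagine what it would look like and keep the one that best explains the input."""
    best, best_score = None, -np.inf
    for hypothesis in candidates:
        imagined = render(hypothesis)        # "imagine what it expects to see"
        score = similarity(imagined, image)  # does the imagined image match the input?
        if score > best_score:
            best, best_score = hypothesis, score
    # Accept the interpretation only if it actually explains the image.
    return best if best_score >= threshold else None
```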

When the brain is not interpreting and perceiving the world using this process, it can use the same circuit to imagine how an object might look under different situations. In this way, it can actually generate a huge amount of data based on just a few examples to train itself.
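Read in machine-learning terms, this is something like self-generated data augmentation: a model that can imagine an object under different viewpoints and lighting can turn a few labeled examples into a large synthetic training set. The specific transforms in this sketch are illustrative assumptions, not a claim about what the brain or the lab's program actually does.

```python
import random
import numpy as np

def imagine_variations(example, n_variations=100, seed=0):
    """Imagine one example under different situations (viewpoint, mirroring,
    lighting) to create synthetic training data. Transforms are illustrative."""
    rng = random.Random(seed)
    variations = []
    for _ in range(n_variations):
        img = example.copy()
        img = np.rot90(img, k=rng.randint(0, 3))             # a different viewpoint
        if rng.random() < 0.5:
            img = np.fliplr(img)                              # a mirror image
        img = np.clip(img * rng.uniform(0.7, 1.3), 0.0, 1.0)  # different lighting
        variations.append(img)
    return variations

# A few real examples become a large imagined training set.
few_examples = [np.random.rand(28, 28) for _ in range(3)]
big_data = [v for ex in few_examples for v in imagine_variations(ex)]
```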

So our natural impulse to daydream, to imagine the future, and to do creative things, and our innate need for artistic expression and art-making, could actually all be byproducts of the same program and process running constantly in the brain. These activities could be key to the development of our ability to learn from a few examples and to recognize objects that we have never seen before. We want to give this capacity for creativity and imagination to computers so that they can learn automatically, like a baby.

As part of the large-scale "Apollo Project of the Brain" that was recently launched, we at Carnegie-Mellon are studying the actual neural circuits that enable our visual system to see and to imagine. And we want to put this knowledge to work in computers.

So we hope this work will not only help us understand the brain better, but also help us make more flexible and intelligent robots. And the question I want to leave with you is: can you imagine what machines could do if they had the power of imagination? Thank you.

Further Reference

Tai Sing Lee home page at Carnegie-Mellon University

2016 Annual Meeting of the New Champions at the World Economic Forum site

