Leo M. Lambert: Good afternoon, and welcome to the Areté Medallion ceremony. Today we have the honor of bestowing the inaugural Imagining the Internet Areté Medallion to Dr. Vint Cerf, Vice President and Chief Internet Evangelist for Google, and a modern pioneer who changed the way we live our lives. As an architect and codesigner of the Internet, Dr. Cerf imagined a world that didn't exist and developed technologies that undergird our daily lives, the global knowledge economy, and endless possibilities for creating a healthier, more educated, and more connected world.

The Areté Medallion was established to honor innovators, change agents, and thought leaders who have dedicated their lives as public servants and have initiated and sustained work that benefits the greater good of humanity through contributions that enhance the global future.

"Areté" is Greek for moral virtue and striving for excellence in service to humanity. In ancient Greek, the term areté was used to describe people stretching to reach their fullest potential in life, embodying goodness and excellence. Innovation and entrepreneurship are essential 21st-century skills and critical goals of Elon's student-centered learning environment.

It is fitting that Dr. Cerf is the very first recipient of the Areté Medallion, based on his contributions to global policy development and the continued spread of the Internet. He has been recognized with the US National Medal of Technology, the Turing Award, known as the Nobel Prize of computer science, and the Presidential Medal of Freedom, and has served as founding President of the Internet Society, as a member of the National Science Board, and on the faculty of Stanford University. His exemplary service to the development of our digital lives stands as a tremendous example for Elon's students, who are developing themselves as change-makers, building new initiatives and partnerships, and making their communities, from Burlington to Beijing, better through leadership and service.

I'd like to call on Professor of Communications and Director of the Imagining the Internet Center Janna Anderson to introduce Dr. Cerf. A widely recognized authority on our digital future, Professor Anderson has led Elon's internationally recognized efforts to document the development of digital technology since the Fall of 2000. Nearly four hundred Elon students, faculty, staff, and alumni have participated in research through the Center, and Janna's students have provided multimedia coverage at conferences in Athens, Hong Kong, and Rio de Janeiro, among many other places. Thank you, Professor Anderson. Your prolific scholarship, outstanding teaching, and passionate mentoring of hundreds of Elon students have provided all of us with a clear vision for the future. Please welcome Professor Janna Anderson.

Janna Anderson: Thank you, and welcome to everyone here for family weekend. Who's here for family weekend? Alright! Beat Villanova. Go! Who's going shopping today? Anybody going shopping, yeah? Taking the kids out to dinner? Yeah?

We're really excited for the privilege of having Vint Cerf here today. He was telling us he was in San Francisco just yesterday. He's going to be in Brasilia tomorrow. And his travel schedule would kill a twenty-year-old. So, he needs an award just for all the travel he does, really. He's an amazing guy. I could talk forever about Vint, so I'm going to read my script so that I don't stay here forever and give him a chance to talk.

It's an honor for us to award Dr. Vinton Gray Cerf the inaugural Areté Medallion, recognizing all of the work he's done over the past few decades, mostly being excellent in public service. The most important thing is working for global good. In the 1970s and '80s, Vint Cerf and a brilliant group of engineers brought to life a communications network that has become many orders of magnitude more powerful in the years since than they had ever imagined it would become.

He, Bob Kahn, and others developed the TCP/IP protocol suite and the early network, and in the decades that followed, millions of applications emerged. Billions of people, more than half the world, are carrying a connection to global intelligence in their pockets today. It wasn't just they who did it; it was the invention itself, the communications network they created, that inspired everyone to get together and make these things happen. Everyone whose life has been enriched by being connected to the Internet, please wave your phone, your tablet, your PC, in welcome to one of the great engineers of all time, right here.

In a few minutes, we're going to be treated to a wonderful talk by this amazing engineer and public intellectual who was one of the first to imagine IP on everything. For nearly five decades, he has lived a life of nonstop public service. He is now working to make it possible for every item on earth to be connected in what is called the Internet of Things. And he has a classic t-shirt to prove it. It reads "IP on everything." He had that on his Facebook site— Is it still your Facebook homepage profile photo? Engineers have a great sense of humor. You've got to hang out with them more often if you don't already, right?

He knew from the start that a network is only as good as the people who build it and use it. And so it isn't just the engineering feat that is amazing in what this man has done in his lifetime. He has taken a leadership position in every organization that has been founded to try to figure out how we're going to make this thing work, not only technically but socially, to serve humanity as well as it can.

Many who know him believe that Vint may also be the first human ever to have been cloned. As I mentioned before, he's everywhere all the time. He has been awarded dozens of honorary degrees. He's been named a fellow of or elected president of just about every organization he's ever been a part of. And he's working now to solve global problems such as gender inequality, the digital divide, and the changing nature of jobs in an era of artificial intelligence. And as you will see soon, he's going to be talking not only about the Internet of Things but about artificial intelligence as we move into the future today.

So he's been working to solve these problems, but every time anybody asks him to do almost anything, he'll just…he'll do that thing. He's one of those people who gets things done, and he gets them done right away. If you ask him a favor, he'll do it. I've asked him several times over the years, and I've been like, "Oh, I don't know if I should ask him this…" And he just moves forward and he does it. He checks it off the list. He participates. He's out there. If you ask Vint, he's going to be there for you, and that's one of the reasons why he's so popular not only in the engineering community but among all the people who are online and doing things.

So it's not possible to list every good deed he's done. It probably hasn't even been documented. Maybe if you went through his email account you could find all of his yeses and see all the things he's done. But it's why he's on the road 80% of the time. It's amazing, really.

So, he and Tim Berners-Lee, I think, are the Paul McCartney and Mick Jagger of communications for us. They rock, okay! They rock, man. So you have to be excited about this, because this is a once-in-a-lifetime opportunity to spend time with him. Where we are today is greatly due to their commitment to do good and tirelessly work for a better future. They didn't just build that thing and then step back and say, "Dude. Yeah. I really did a great job there." And they didn't monetize that thing. They didn't try to take advantage of it as a business opportunity. They thought it up, invented it to connect people, to information, to each other, to everything. It's amazing. So learn your lesson. Follow their lead. Do good for everyone.

The ancient Greeks used the term areté to describe a person who has reached the greatest effectiveness with dignity, character, and distinction. Vint Cerf embodies areté. The people of Elon University and the Imagining the Internet Center are pleased to honor him with Elon's inaugural Areté Medallion.

Vint Cerf: Thank you very much. You know, I have received a number of awards, but what's important about this one is that this is the first time it's been given, and that's a big honor, to be sort of the person who receives the first instance of it. Of course maybe we'll find a typo in here, too. You never know. That has happened at least once before.

I'm very, very pleased to be on this campus. It's a beautiful campus. I got a chance to walk around this morning and see the new construction, which is a little bit behind schedule but nonetheless very exciting. Especially the new facilities experimenting with teaching tools, for example the gigantic touchscreen and things like that.

One of the things that really caught my attention is that in some of the classrooms that have been designed with this large 96″ high-res touchscreen display, the students who are in there will also have the ability to push things onto the screen from their laptops or from their mobiles. And it dawned on me that in most cases, like this one, I have material to show you but I don't get the feedback that we might get if we were collaborating on whatever is up there on the screen. And this idea of education becoming a collaborative experience I think may be very important. And so you'll be exploring that in this new school. And it will be very interesting to see what comes out of that. I think Lee Rainie will certainly want to assess what happens as a result, if possible. Because we may learn a lot from that. So I'm really excited to await the opening of the new school and the facilities that it has there.

But what I'd like to do this afternoon is talk a little bit about artificial intelligence and the Internet. I want to warn you of a couple of things. The first one is that I don't consider myself an expert in artificial intelligence. We have some pretty remarkable people at Google who are, and I don't want you to blame them for my kind of lightweight understanding of what's going on.

But I do have a few examples of things that are starting to happen. And you can see that AI normally stands for artificial intelligence, but I've often concluded it stands for "artificial idiot." And the reason is very simple. It turns out that these systems are good, but they're good in kind of narrow ways. And we have to remember that so we don't mistakenly imbue some of these artificial intelligences and chatbots and the like with a breadth of knowledge that they don't actually have, and also with social intelligence that they don't have.

If any of you have read Sherry Turkle's book called Alone Together, the first part of the book is about people's interaction with humaniform robots, that is, robots that have a somewhat human appearance. And often, people will imbue the robot with a kind of social intelligence that it doesn't have. If the robot appears to ignore a person, they get insulted or depressed. And of course the robot doesn't have a clue about what's going on. And so we tend to project onto these humaniform things more depth than is really going on. So we should be very careful about that.

There is one other phenomenon: the closer you get to lifelike, the more creepy it gets. And seriously, if you've seen some of the robots that are being made in Japan, that are being used in hotel lobbies and department stores and things like that, they look like they're kinda dead bodies that are animated, you know. Sort of zombies. And so it's a very funny phenomenon that if it's too lifelike it gets really disturbing. And so it could very well be that we need something which is not terribly precise and not high-fidelity. Those of you who've ever seen a movie with Robin Williams called Bicentennial Man will appreciate the transformation as he goes from a very stylized-looking robot to one which is very nearly human. And of course eventually he ends up being essentially human. And that was his objective. It's sort of a Pinocchio story.

Okay, so let me give you an example of how artificial intelligence doesn't always work. This is an example of language translation. I was in Germany at the time, and I was pulling up a weather report. And it was being translated by the Google translation system. And you will see on the lower left-hand side, the probability of rain was 61%, the probability of snow was zero, and the probability of ice cream was 0%.

And you know, I thought, well, what was that? And it turns out that they meant hail. And the German word for hail is "Eis." But that's also the word for ice cream. And absent the context that was necessary, the translation system translated this as "ice cream." And so I showed this slide to my German friends, thinking, you know, "Can you tell me more about this ice cream storm that you have here that I've never experienced?" So that's just a tiny little example. But it shows how easy it is for something which is trying hard to understand natural language to get it wrong.

On the whole, I will say that Google's translation systems are actually pretty good. And there have been occasions where I have gone to a page in Germany or France or Spain where the web pages are automatically translated, and it happened so fast, and the translation has been so good and so quick, that a couple of times I didn't notice that it wasn't originally in English, that it was in some other language. So I'm very proud of what Google has been able to do. But as I say, every once in a while it goes clunk.

So here's another thing which we are very proud of at Google, and that's speech recognition. And I want to distinguish speech understanding from speech recognition. The ability to figure out that words were spoken and what those words were, and how to spell them and turn them into text, is what speech recognition is about.

We have had extraordinarily good results with speech synthesis, where we've taken text and turned it back into speech. And of course the display of text. That can be quite helpful, for example, for a person who is hearing impaired and needs captioning to see a video or to carry on a conversation.

Google Maps is now commonly accessed by voice request. In fact, when Gene Gabbard, my good friend who's here—we go back many years at MCI—was coming here, we used my little Google Maps program, and I spoke the address that we were going to. And it's quite shocking how well the system works. One thing that helps it is that the vocabulary that it's expecting is the names of streets and towns and cities and countries and so on. And that narrows to some extent the recognition problem, because you're drawing from a vocabulary of a certain type. And that helps improve the quality of the recognition.

We also are able to tie things together. You might want to experiment with this. If you were to say something like, "Where is the Museum of Modern Art in New York City?" it answers with an address or something like that. And if you then said, "How far away is it?" you haven't specifically mentioned anything; the reference of "it" is the museum. It remembers things like that. So we're starting to tie conversation together. And one of the big objectives at Google has been to achieve a kind of interactive interface, for example with Google Search, where you're not simply asking for search terms or just a certain sentence or a statement or question, but rather there is an interactive engagement. And the interaction is intended to help you figure out whether the search engine understands what you're looking for.

I noticed something very interesting a few months ago. Maybe you have as well. I used to just type search terms into the Google search engine. Now I type a complete question, if that's what I'm looking for an answer to. And I found that it was doing better in responding when I asked a grammatically well-formed question. And I asked the natural language guys, you know, "What's going on?" And they said, "Well, we had been doing a lot of work with just the search terms, but now we're paying a lot more attention to sentence structure, to semantics."

There's something we have called the Knowledge Graph. It has about a billion nodes in it. And it associates concepts and nouns with each other. So if two concepts are related, the branch between the two nodes says how they're related. We use that a lot, especially in our search algorithms. So when we read the question that you've written, we also ask our own knowledge graph how the terms in that question relate to things we know about in this knowledge graph, and it expands the search to a broader space and therefore, we hope, gives you more complete coverage. If you said, "Where is Lindau," for example, and then you said, "Are there any restaurants there?" that's the ellipsis kind of thing, and we're hoping to refine that even further.
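The labeled-graph idea he describes can be sketched in a few lines. This is not Google's Knowledge Graph, just a toy illustration with invented triples: concepts are nodes, each edge carries the relation that links them, and a query term can be expanded to everything it touches.

```python
# Toy knowledge graph: (subject, relation, object) triples. All entries
# here are invented for illustration.
TRIPLES = [
    ("Lindau", "is_a", "town"),
    ("Lindau", "located_in", "Germany"),
    ("restaurant", "found_in", "town"),
    ("airline", "operates", "airplane"),
]

def related(concept):
    """Return every (relation, neighbor) pair touching the concept.

    This is the kind of expansion a search engine could use to broaden
    a query from the literal terms to related concepts.
    """
    out = []
    for subj, rel, obj in TRIPLES:
        if subj == concept:
            out.append((rel, obj))
        elif obj == concept:
            out.append((rel, subj))
    return out
```

Asking `related("Lindau")` surfaces both that it is a town and that it is in Germany, which in turn connects it to things found in towns, such as restaurants.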

Here's another example. This is the German-language description of what the Semantic Web is all about. And the translation of that looks like this. And at the time that I was putting this example together, the translation happened automatically, and it happened so fast that I actually didn't notice that it started out in German. And here you see the translation in English. So we're getting better and better not only at doing the translation but at producing sentence structures that look natural.

So now there's this question about the Deep Web. And here, in a sense, this is kind of like dark matter in the universe. We know it exists but we can't see it. If it isn't in HTML, the standard web search engine can't see it. These are, for example, databases that are full of information. And in the case where there isn't any metadata describing what's in the database, we might not encounter it during a search. We certainly can't index it, because usually the way these systems are accessible is that you keep asking questions of the database and it delivers its content to you. But we can't send the knowledge robot, or the little spider, around the Internet asking bazillions of questions of all the databases in order to infer what's in them.

So one of the things that we would need is a way of getting these databases to describe themselves in some sense, or the creators to describe them, in a way that gives us semantic knowledge of what's in the database. And I want to distinguish the Deep Web from the Dark Web. The Dark Web is a whole other thing. That's where all kinds of illegal content and malware are found. But the Deep Web is a perfectly legitimate notion. It's just that we have trouble finding things in it. So I have this general feeling that we have a long way to go to make our artificial intelligence mechanisms capable of seeing what's in the World Wide Web, or what's in other things that are not necessarily web-based, so as to help us find and use that content.

Here's our Knowledge Graph, just as an example. I'm not sure whether that's terribly readable to you. I apologize for that, and I think trying to blow it up probably won't work. But what you're seeing basically is descriptors of concepts that are being associated with each other. And those descriptors are what we use in order to analyze what we're finding in the net. Now, this is not just a question of using it for translation, for example, or using it for search. We use it to infer the content of conversation. For example, if you ask what the status of a particular flight on an airline is, our knowledge graph knows that airlines use airplanes; it knows that you go from one place to another; there's a whole lot of knowledge that's associated with that. So if you start asking questions like, "What kind of airplane am I flying?" on that flight, the system understands that question and can try to respond if it has the data available.

Ray Kurzweil is now working at Google and has been there for several years. I don't know if you've read any of his books. One of them is called The Singularity Is Near. And the Singularity, from his point of view, is the point at which computers are smarter than people. And he's hoping that by 2029 computers will be so smart that he'll be able to upload himself into a computer and then go explore the galaxy. That's not why we hired him. We hired him to work on the Knowledge Graph that would help us with our AI applications. But he's quite excited about the exponential ability to gain knowledge and to infer things from it.

Some people are worried, for example, that once a computer knows how to do something, it will learn how to do it better than any human can. And so for any particular thing humans can do, if the computer ever figures out how to do it, then we're doomed. Now, I think that's too pessimistic. I have a much more optimistic view of a lot of this stuff, which is that these machines are there to help augment our thinking ability.

There's a man you should know about if you don't. His name is Douglas Engelbart. Doug used to run the Augmentation of Human Intellect project at SRI International in Menlo Park, California. This was many years ago, in the mid to late '60s. And his view was that computers were there to help us think better. And he liked the idea of collaborating among people and facilitating the collaboration through the use of the computer. He invented a system which for all practical purposes was a World Wide Web in a box. It was called the oN-Line System (NLS). It had hyperlinking in the system. So he invented the notion of connecting documents to each other by clicking on— Clicking? He invented the mouse, so you could point to something on the screen and then click to say "pay attention to that place that I'm pointing to."

He had a very elaborate editor which allowed people to create documents, and the whole idea there was that knowledge workers would create content by creating documents and could reference each other. People could collaborate. We were using a projector, a much, much older one than what you have here, that was called a GE Light Valve and cost fifty thousand dollars. It was half the size of a refrigerator. But we were using it to project somebody typing on a keyboard, building a document with this oN-Line System. And I remember Jon Postel, who used to be the Internet Assigned Numbers Authority, was taking notes. And while I was talking, he was typing away, and his words were showing up on the screen. And I stopped to see what I was going to say next. And then I realized that nothing would happen if I didn't say anything. So, it was an odd feedback loop.

The whole point, though, is that Ray is very, very interested in trying to codify semantic information. One of the questions I asked him, which I don't have an answer to yet, is whether this knowledge graph and the mechanisms that go with it, including inference, are sufficiently rich that we could get the computers running these text ingestion systems to just read lots and lots and lots of books and then essentially form knowledge and knowledge structures by reading. And you know, considering the vast range of material that's in written form, you could imagine a computer essentially learning from that.

And I don't know how far we can get with this, but we have discovered that machine learning is effective for certain classes of activity. Classification, for example. You know, the ability to distinguish cats and dogs and other things in images, or distinguishing a cancerous cell from a non-cancerous one. A lot of those kinds of machine learning tasks are actually quite easily done, or readily done, these days. More complex kinds of recognition might turn out to be a lot harder.

But at some point I kept thinking, well, maybe the computer can start learning itself. There is a guy whose name is Doug Lenat. He's at the University of Texas at Austin. He has been working, probably for the last twenty years, maybe twenty-five years, on something he calls "Cyc," which stood for encyclopedia. What he was trying to do there is to get the computer to learn by reading. And his idea was that if you took a paragraph of text and you then manually incorporated the information that the computer would need to recognize and understand the text, eventually he'd build up a sufficient corpus that the machine wouldn't need help from you anymore; it would just learn from reading.

So an example of a sentence was, "Wellington learned that Napoleon had died, and he was saddened." Well, in order to understand that sentence in its entirety you have to understand that Wellington and Napoleon were human beings; humans are born, they live for a while, and they die. You have to know that Wellington and Napoleon were great adversaries. You might have to know that Wellington defeated Napoleon. And you also have to understand that great adversaries sometimes form a certain degree of admiration for each other despite their opposition. And all of that would go into understanding the depth of that one sentence, so you can imagine having to encode all that other information so that the computer would have it to draw on to fully understand what's going on. So in some sense, that's a little bit of what the Knowledge Graph is trying to do.

Well, another thing that AI has been regularly associated with is games. I guess I should describe for you how AI was treated at Stanford University in the 1960s when I was an undergraduate there. Basically, if it didn't work, it was artificial intelligence. And if it did work, it was engineering. So every time we figured out how to get some AI thing to work, when it finally worked it wasn't AI anymore. Mostly because AI at the time was kind of filled with heuristics that sometimes worked and sometimes didn't, whereas engineering is supposed to get it right.

Well, I'm sure many of you are very familiar with these games that we've gotten computers to play. IBM's Deep Blue played Garry Kasparov. And the interesting thing is that he was beaten several times by Deep Blue, and at the time—this is like 1997—it was considered a huge accomplishment, a milestone in artificial intelligence.

One very amusing incident, though, happened where the computer made a move that Kasparov could not understand. I mean, it made no sense whatsoever. And he was clearly concerned about it, because he thought for quite a long time and he had to play the endgame much faster than he otherwise would have, and in the end it turned out it was a bug. It was just a mistake. The computer didn't know what it was doing. But Kasparov assumed that it did, and lost the game as a result.

AlphaGo, on the other hand, was a system that was built by our DeepMind company in London. They were training this sort of neural network, on our special computing systems, to play Go. And they played literally tens of millions of games. Sometimes they had multiple computers playing against each other. And then it played against Lee Sedol, the expert it defeated four times out of five.

I'm not a Go player, and so I can't express very well what happened in the first game, except I am told that at move 37 there was consternation among the knowledgeable Go players, because the machine did something that didn't make any sense to them at all. And in this case, it was not a bug. It turned out the machine had found a particular tactic which, much later in the game, turned out to allow the machine to capture a significant portion of the board.

So that was four games out of five that Lee Sedol lost. I have to give a lot of credit to him for being willing to take this risk. We basically told him that we'd give him a million dollars if he won three out of the five games, and he lost four out of the five. I don't know whether they gave him the million dollars anyway, frankly. We should have, I think, if we didn't.

But I want to emphasize how limited these kinds of capabilities are. We should not overstate the importance of this in the broader sense of intelligence as you and I might think of it. Checkers, for example, and tic-tac-toe were fairly easy targets way back in the 1960s, as very simple, straightforward games to play. The training of these things involves playing the same games over and over and over again and feeding back to the machine learning algorithm whether or not it won or lost. And so this is not the same kind of mentation that you do when you're playing chess or Go. It's adjustment of a bunch of parameters in a neural network.

And what is interesting about this is that we had the DeepMind systems play a whole bunch of different kinds of board games, and they learned to play the games without being told what all the rules were. They only had simple feedback, like "you lost the game"; that's one possibility. Or "this move is illegal." And after a while, the learning algorithm adjusted a lot of the parameters in the neural network until it could learn to play the game successfully. So there are lots of other games, like the Atari video games and Pong and Breakout; these are all very simple kinds of games. But the computer learned how to play them simply by playing them and being given very, very simple feedback.
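The learning-from-feedback loop he describes can be reduced to its smallest possible form. This is nothing like DeepMind's systems; it is a toy one-step "game" with an invented winning move, where the learner is never told the rules and only sees win/lose feedback, adjusting its per-move value estimates until the right move emerges.

```python
import random

# Hypothetical one-step game: only move 2 wins. The agent never sees
# this rule; it only receives +1/-1 feedback after each play.
MOVES = [0, 1, 2, 3]
WINNING_MOVE = 2

def play(move):
    # Environment gives only the simplest feedback: win or lose.
    return 1.0 if move == WINNING_MOVE else -1.0

def train(episodes=2000, epsilon=0.1, lr=0.1, seed=0):
    rng = random.Random(seed)
    value = {m: 0.0 for m in MOVES}  # learned parameters, all start equal
    for _ in range(episodes):
        if rng.random() < epsilon:           # occasionally explore
            move = rng.choice(MOVES)
        else:                                # otherwise exploit estimates
            move = max(MOVES, key=value.get)
        reward = play(move)
        # Nudge this move's estimate toward the observed outcome.
        value[move] += lr * (reward - value[move])
    return value

values = train()
best = max(values, key=values.get)
```

After training, the agent prefers the winning move purely from feedback, which is the essence of the parameter adjustment he mentions, minus all the neural-network machinery.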

Now, some of you will remember IBM Watson's play on Jeopardy. This is, as you all know, a question-and-answer system. There is a programming system called Hadoop (and maybe some of the computer science folks here are familiar with that). It's an implementation of the MapReduce algorithm. That's what we use at Google to do a fast search of the index that Google generates of the World Wide Web. In principle, what happens is that you replicate data across a large number of processors, and when you ask a question like, "Which web pages have these words on them?" that question goes to tens of thousands of machines, each of which has a portion of the index at its disposal. All the machines that find web pages with those words on them raise their hands. That's the mapping part. Then you sweep all that stuff together in order to see what the response is to the query.
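The map-then-sweep idea he describes can be sketched with an invented two-shard index. This is not Google's or Hadoop's implementation, just the shape of it: each shard answers locally ("raises its hand"), then the per-shard answers are merged into one response.

```python
# Toy sharded index: each shard is one "machine's" portion of the web
# index. Pages and their contents are invented for illustration.
SHARDS = [
    {"page1": "vint cerf internet protocol", "page2": "go board game"},
    {"page3": "internet of things", "page4": "chess endgame"},
]

def map_shard(shard, words):
    # Map step: each machine scans only its own portion of the index
    # and reports the pages containing all the query words.
    hits = []
    for page, text in shard.items():
        if all(w in text.split() for w in words):
            hits.append(page)
    return hits

def search(words):
    # Reduce step: sweep the per-shard hits together into one answer.
    results = []
    for shard in SHARDS:
        results.extend(map_shard(shard, words))
    return sorted(results)
```

In a real system the map step runs on thousands of machines in parallel; here the loop over shards stands in for that fan-out.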

Then you have the other problem. You got ten million responses; you have to figure out the right order in which to present them to the users. And that uses another algorithm, the first one of which was called PageRank. This is Larry Page's idea, where he just took all the web page responses to a query and said, "Which pages had more pointers going to them?" (references), and rank-ordered them by reference. That's PageRank. And of course that's turned out to be too crude now, and many people know about it, so they try to game the system, so now we have about two hundred and fifty signals that go into the ordering of the response that comes back. But again, the basic principle is to spread the information out in parallel, look for "hits," and then bring it back together and then do this sorting.
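The original PageRank idea can be sketched as a simple power iteration over a made-up link graph: pages that collect more inbound references end up with higher rank. The graph and the damping constant here are illustrative, not anything Google ships.

```python
# Invented link graph: page -> pages it points to.
LINKS = {
    "a": ["b", "c"],
    "b": ["c"],
    "c": ["a"],
    "d": ["c"],
}

def pagerank(links, damping=0.85, iterations=50):
    """Power iteration: each page repeatedly shares its rank
    among the pages it links to, plus a small teleport term."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new = {p: (1 - damping) / n for p in pages}
        for page, outs in links.items():
            share = rank[page] / len(outs)
            for target in outs:
                new[target] += damping * share
        rank = new
    return rank

ranks = pagerank(LINKS)
```

In this graph, "c" is referenced by three of the four pages, so it ends up ranked highest, which is exactly the "more pointers going to it" intuition.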

Another thing which Watson did was of course to analyze the query that came to it and then generate hypotheses about which question this clue represented an answer to. It looked for all kinds of supporting evidence, scored its various possible hypotheses and answers, and then it had to synthesize an answer. Not only was it applicable to Jeopardy, which is a particular game, but it is also now being used for health diagnostics and analytics. And you can see this sort of tool here, and the possibilities across quite a wide range of different applications where we're accumulating substantial amounts of data about which we can reason.

At Google, we believe that special hardware is in fact worth investing in. And so in the case of our DeepMind company, we've developed a TensorFlow chipset. That system is not physically available to the public; instead, we put these gadgets into our data centers and made an API available to the public. So if people are curious and interested in writing AI programs, TensorFlow is open to you to try things out. And we encourage that. We've taught classes in it. You'll find online support for this.

The phys­i­cal Tensor Processing Unit is not avail­able. I asked why not. Why not share those with oth­er peo­ple? And they said the API is there to cre­ate a sta­ble lay­er of inter­ac­tion with this device. But the rea­son they don’t want the hard­ware to get out is not that that’s a big secret but that they plan to evolve it. And so the whole idea was to take advan­tage of what peo­ple do with the TensorFlow pro­gram­ming sys­tem and then feed that back into the ten­sor chip design until we can make it per­form more effectively.

IBM has a similar kind of neural network chip called TrueNorth. And these are all emulating the way our brains sort of work, right. So we have neurons with lots and lots of dendrites. The neurons touch many, many other neurons. The dendritic connections are either excitatory or inhibitory. And the feedback loop basically is based on experience. So if you do something and the feedback is "that didn't work," then the excitatory and inhibitory connections' parameters get adjusted in the chipset. And they get adjusted in our brains by some electrochemical process and modifications of the synapses.
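
That adjust-the-connections loop can be shown with a single simulated neuron. This is a classic perceptron sketch of my own, assuming nothing about IBM's or Google's actual chips: weights play the role of excitatory (positive) and inhibitory (negative) connection strengths, and each "that didn't work" signal nudges them.

```python
def train_perceptron(samples, lr=0.1, epochs=25):
    """One simulated neuron: weighted inputs, threshold, feedback-driven updates."""
    w = [0.0, 0.0]   # connection strengths: positive = excitatory, negative = inhibitory
    b = 0.0          # firing threshold (as a bias)
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            error = target - out           # the "that didn't work" feedback signal
            w[0] += lr * error * x1        # nudge each connection's parameter
            w[1] += lr * error * x2
            b += lr * error
    return w, b

# Illustrative task: learn the OR function purely from right/wrong feedback.
OR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train_perceptron(OR)
predict = lambda x1, x2: 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
```

After a few passes the adjusted connection strengths reproduce the target behavior, even though no rule for OR was ever programmed in explicitly.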

So what’s very pecu­liar about these kinds of chipsets is that after all the train­ing gets done and a set of para­me­ters have been set in con­se­quence of all the inter­ac­tion, we’re not sure why it works. Just like we’re not quite sure why our brains work. And so for exam­ple, if you could open up one of these neur­al chips and look at all the para­met­ric set­tings on all the lit­tle sim­u­lat­ed neu­rons and ask the pro­gram­mer, If I change this val­ue from .01 to .05, what will hap­pen?” And the answer is, Beats the hell out of me.” And so we actu­al­ly don’t quite under­stand in a deep sense what what’s going on oth­er than this inter­est­ing bal­anc­ing of para­me­ters as a con­se­quence of feed­back of learning. 

It’s a lit­tle unnerv­ing to think that we’re build­ing machines that we don’t under­stand. On the oth­er hand, it’s fair to say that the net­work, the Internet, is at a scale now where we don’t ful­ly under­stand it, either. Not only in the tech­ni­cal sense like what’s it going to do or how is it going to behave, but also in the social sense, how is it going to impact our soci­ety? And that ques­tion of course goes dou­ble, I think, for arti­fi­cial intelligence.

So if we look at the neural networks that are available now, most of the experience with them is trying things out to see for which applications they actually work well. For image recognition and classification they are quite good. We can tell the difference between dogs and cats. We can sometimes look at a scene and separate the objects in it. That's really important for robotics, because you want the robot to look at a scene and notice that there are different things: there's a glass here, and there's a screen here, and the two are not the same thing; it's not a two-dimensional thing. Sometimes you can figure that out because if you have multiple views of something, you can see that one thing moves, or you can see around the screen and see that there's another object there. But this image recognition is very important to a lot of applications, including robotics.

And here’s an inter­est­ing exper­i­ment that I’m now free to tell you about. It was not pub­li­cized for a while. We took our TensorFlow Processing Unit and we start­ed train­ing it against the cool­ing sys­tem of one of our data cen­ters. Now, if you think about it, I don’t know if you seen pic­tures of data cen­ters. They’re big, big things. I mean, I’m a pro­gram­mer type and for me a com­put­er is my lap­top. It’s a lit­tle hard to believe when you walk into a data cen­ter you need this much iron in order to oper­ate at the scale that Google does. So for me it’s a lit­tle weird to see these eight-inch pipes of cold water flow­ing, fans blow­ing, and you know, megawatts, tens, hun­dreds of megawatts of pow­er going in and out.

So, you walk into this thing and it's clear that you're generating a lot of heat. I had even proposed at one point that because there was so much heat being generated, and heat rises, we should put pizza ovens at the top of the racks and then go into the pizza business. But they said no, the cheese drops down and messes up all the circuit boards. That was a dumb idea, so we didn't do that.

Anyway, we were using manual controls, adjusted roughly on the order of once a week. So we'd gathered a lot of data about how well we had managed to cool the system, and then we would adjust the parameters manually. Well, we decided maybe we could get the neural chips to learn how to make the cooling system more efficient. You know, minimize the amount of power, the water flow, and so on. And so we tried that out at one of the data centers, and after a time, we found that we could reduce the cost of the cooling system by 40%. Not four percent, forty percent. So of course the next reaction is, "Well, let's do that with all the data centers." So this is a kind of dramatic example of how the machine's ability to respond very quickly to a large amount of data, and to learn to adjust parameters quickly, makes a huge difference.
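
The tune-the-parameters-from-feedback idea can be caricatured as follows. Everything here is invented for illustration: the cost surface is a made-up stand-in for a real cooling model (Google's actual system used neural networks, not this greedy search), but the loop captures the shift from weekly manual adjustment to rapid automatic adjustment driven by measured cost.

```python
import random

def cooling_cost(fan_speed, water_flow):
    """Made-up cost surface: cheapest near fan_speed=0.6, water_flow=0.4."""
    return (fan_speed - 0.6) ** 2 + (water_flow - 0.4) ** 2 + 1.0

def tune(steps=2000, seed=1):
    rng = random.Random(seed)
    params = [0.9, 0.9]                    # the old "manual" setting
    best = cooling_cost(*params)
    for _ in range(steps):
        # Propose a small random tweak, clamped to the valid [0, 1] range.
        trial = [min(1.0, max(0.0, p + rng.gauss(0, 0.05))) for p in params]
        cost = cooling_cost(*trial)
        if cost < best:                    # keep only changes the feedback likes
            params, best = trial, cost
    return params, best

params, best = tune()
```

Thousands of tiny feedback-driven adjustments per run, versus one manual adjustment per week, is what makes the dramatic savings plausible.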

There’s anoth­er sit­u­a­tion in a com­pa­ny that is exper­i­ment­ing with fusion, as in fus­ing hydro­gen togeth­er to make heli­um, or fus­ing pro­tons togeth­er with boron in order to make car­bon. Which is a real­ly inter­est­ing process because it’s not radioac­tive. Instead of tak­ing hydro­gen and tri­tium, for exam­ple, and try­ing to fuse that (you gen­er­ate a bunch of neu­trons and it becomes very radioac­tive), this par­tic­u­lar design cre­ates a plas­ma of boron and then you feed pro­tons into it and the fusion cre­ates car­bon which then splits into three dif­fer­ent alpha par­ti­cles which are not radioac­tive. So it’s pret­ty cool.

Anyway, the problem is that the plasmas in fusion systems are very unstable. And human beings simply can't react fast enough. This is the one thing about Star Trek that I always found completely bogus. You know, they're flying around at half light speed and everything, and the captain is saying, "On my mark, fire." Come on. That's five hundred milliseconds of round-trip time delay and everything. It wouldn't work. But it's no worse than the science fiction movies you see with the astronauts that are orbiting Saturn having real-time communication with the guys in Houston. You know, "Houston, we have a problem." That's baloney, but it would make for a boring movie if you had to wait several hours for the interactions. So this is sort of artistic license.
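
The light-delay point is easy to check with back-of-the-envelope arithmetic (the Saturn distance below is an approximate closest-approach figure, used only for illustration):

```python
# One-way signal time from Saturn to Earth at roughly closest approach.
SPEED_OF_LIGHT_KM_S = 299_792.458      # speed of light in km/s
SATURN_MIN_DISTANCE_KM = 1.2e9         # approx. minimum Earth-Saturn distance

one_way_minutes = SATURN_MIN_DISTANCE_KM / SPEED_OF_LIGHT_KM_S / 60
round_trip_minutes = 2 * one_way_minutes
```

Even at closest approach, a question and its answer take on the order of two hours to complete the round trip, so "real-time" chatter with Houston is indeed artistic license.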

But what was interesting about the plasma fusion case is that the plasma could be stabilized by using this kind of neural network to detect and respond to instability. And so we're starting to see all kinds of interesting possibilities here with the neural chips. Robotic controls, similarly: the ability to use these things to stabilize motion, for example, and to do so quickly, so that instead of inching your way forward to pick up the glass, you have a much smoother ability to see things, figure out how to approach them, and pick them up. And game learning, which I've mentioned already, and machine learning generally speaking, are all areas in which these neural networks are starting to be applied.

So I want to leave you with a sense on this topic that there is a wide-open space now for exploring ways of using neural networks, or using hybrid systems with neural networks as part of them, taking advantage of machine learning training and conventional computing and melding those together. I can't emphasize enough that this is essentially all software. And there is no limit, there's no boundary on software. It is a limitless opportunity. Whatever you can imagine, you may in fact be able to program. And so this leaves you with an enormous amount of space in which to explore the ability to create these systems to interact with the real world and with the virtual world.

One of the most important recent developments over the last decade, I would say, is applying computers to the scientific enterprise, whether it's gathering large amounts of data and analyzing it, or simulating what might happen. So for example, a couple of years back (maybe three years?), the Nobel Prize in chemistry was shared by three chemists who had not done wet-lab experiments. They had used computer simulations to predict what was going to happen with molecular interactions. And the fact that the computer was capable of doing enough computation to do that, and the results were in fact demonstrable by measurement, suggests that we've entered a very new space of scientific enterprise where computing has become part of the heartland of a lot of it. So the astrophysicists and the chemists and the biologists are all having to become computer programmers, or find someone to help them do their work.

I thought I would fin­ish up with some exam­ples of arti­fi­cial intel­li­gence at work. And so let’s try our self-driving cars to start with.

[Cerf’s com­ments dur­ing play­back appear below, with time­stamps approx­i­mate to the source videos.]

[0:34] This is an old­er mod­el of our self-driving cars. The new ones are cuter. And they don’t have steer­ing wheels, brakes, and accel­er­a­tors. So that’s right, you don’t need any hands because there’s noth­ing to grab.
[1:07] This is one of our blind employees.

So we’re hope­ful that these self-driving cars— We’ve now dri­ven a cou­ple mil­lion miles in the San Francisco area with these things. There have been a few acci­dents, none of them seri­ous, although one of them was kind of amus­ing. The car came up to an obstruc­tion in the lane it was in, and there was this big bus to the left of it. And so appar­ent­ly the car decid­ed that if it just kind of inched its way around the obstruc­tion that the bus would get out of the way, except it did­n’t. And so we had this enor­mous col­li­sion at 3mph with the bus.

There was another situation where we were monitoring the car remotely as it was driving, and it stopped at an intersection and didn't go anywhere. And so we tapped into the video to find out what was happening there. And there was a woman in a wheelchair in the intersection, chasing a duck with a broom. And I confess to you, I wouldn't know what to do, and the car just sat there saying, "There's stuff moving; I don't want to run into it." So I would have done the same thing.

So I thought that you might find it amus­ing to see what we’ve done at Boston Dynamics. Boston Dynamics was acquired by Google a cou­ple of years ago. We are actu­al­ly going to sell it again to anoth­er com­pa­ny. But some of its robot­ic work has been sim­ply aston­ish­ing, and so I’m going to show you videos of sev­er­al of the Boston Dynamics robots. These were large­ly devel­oped under DARPA sup­port, the Defense Advanced Research Projects Agency, which fund­ed the Internet effort and the ARPANET and many oth­er things as well. So let’s see what their Atlas robot looks like. 

[0:18] Now this is real­ly impres­sive. Uneven ground cov­ered with snow and ice.
[0:45] That must be the 90 proof oil that it’s been con­sum­ing. The recov­ery is incred­i­ble, though. I mean real­ly when you think about it.
[1:22] Sort of waiting for it to go, "Ta da!" right.
[1:27] Okay, now this is…you’re about to see robot abuse.
[2:17] I’m wait­ing for the, Alright, you lit­tle twit.” That’s impres­sive. It got back up again.
[2:30] And now he’s get­ting the hell out of there.

Do you notice a certain kind of affinity that you've formed because of this largely humaniform thing?

[0:20] What the hell was that thing?
[0:21] Here we go again. That’s why we’re sell­ing Boston Dynamics, you know. They’re just a bunch of mean people.
[0:40] You can almost believe this is a dog.
[0:48] Now, inter­est­ing design here. Uneven ground. And these are real­ly low-level rou­tines that allow it to move. This is not like it’s think­ing through every sin­gle step.
[1:03] And you can see they miss and they recover. So, we learned a lot about how to make these things work without having to think through every single motion.
[1:22] Best buddies.
[1:48] The idea here was to invent robots that could car­ry things, so from a mil­i­tary point of view these need to be pack-bots, in a sense.
[1:56] Now that’s the big horse. And what you’re hear­ing is the noisy motor that runs it. Eventually the mil­i­tary decid­ed it prob­a­bly was­n’t the best thing to use out in the bat­tle­field because you’d sig­nal to every­body that you had this big damn thing out there. So so much for stealth.

Oh, this is worth watching:

[0:15] You won­der what this dog is thinking.
[0:26] The lit­tle guys always take on the big guys.

There are about fifty-two or some­thing of these videos avail­able on YouTube if you haven’t already encoun­tered them. I think I have one last one. Yes, this one:

[0:05] Okay, this is not what I thought I was going to show you, and I haven’t seen this one so I have no idea what’s about to hap­pen. This is what hap­pens when you go to a URL where the des­ti­na­tion turns out not to have what you thought it was.
[1:25] I’m wait­ing for it to back up against the tree and squirt some oil on it.
[1:30] I’m going to stop this one and see whether I can find the— Well, I don’t know. This this might be it. If it isn’t then we’ll sort of fin­ish up.

Okay, I don’t see the one that I want­ed to show you, which I can’t find is the one where a tiny lit­tle dog got built that was small­er than Spot. But I’m afraid… Okay.

Well I think you get the idea first of all that we have a lot of fun with this stuff. But most impor­tant, these devices actu­al­ly have the poten­tial to do use­ful work. And peo­ple who are ner­vous about robots and say they’re going to take over and so on I think are over­ly pes­simistic about this. It’s my belief, from the opti­mistic point of view, that these devices, the pro­gram­ma­ble devices of the Internet of Things, and the arti­fi­cial agents that we build will actu­al­ly be our friends and be useful. 

However, I think we also need to remember that they are made out of software. And we don't know how to write perfect software. We don't know how to write bug-free software. And so the consequence is that however much we might benefit from these devices and programmable things in general, we also have to be aware that they may not work exactly the way they were intended to work, or the way we expect them to. And the more we rely on them, the more surprised we may be when they don't work the way we expect.

And so this sort of says how we’re going to have to adjust to this 21st cen­tu­ry where we are sur­round­ed by soft­ware, and that the soft­ware may not always work the way we expect it to. Which means that we have to build in a cer­tain amount of antic­i­pa­tion for when things go wrong. And we hope that the peo­ple who write the soft­ware have thought their way through enough of this so that the sys­tems are at least safe, even if they don’t always per­form as advertised.

And so that’s my biggest wor­ry right now gen­er­al­ly with the Internet of Things, with robot­ic devices, and every­thing else. That we pre­serve safe­ty and reli­a­bil­i­ty and top of our list of require­ments, after which we need to wor­ry about pri­va­cy and the abil­i­ty to inter­work among all the oth­er devices.

So this is ear­ly days for all of these tech­nolo­gies. And we won’t see the end of this, pre­sum­ably. Our great-grandchildren will. And in the long run I am con­vinced that they will do us a lot of good. I sure hope I’m right, because as I get old­er I might rely on these lit­tle bug­gers. Thank you all very much for your time.

Further Reference

Lee Rainie also host­ed a dis­cus­sion and Q&A ses­sion with Dr. Cerf ear­li­er that day.

Overview post about the award ceremony.