Golan Levin: Hello, everyone. And welcome back to Art && Code: Homemade, digital tools and crafty approaches. I’m thrilled to welcome you now to the session with Irene Alvarado, who is a designer and developer at Google Creative Lab, where she explores how emerging technologies will shape new creative tools. She’s also a lecturer at NYU’s Interactive Telecommunications Program. Irene Alvarado.

Irene Alvarado: Hi, everyone. Very excited to be here. I’m gonna talk today about a behind-the-scenes story of a particular tool called Teachable Machine.

So as Golan said, I work at a really small team inside of Google. I mean, we’re less than 100 people, which is small by Google standards. And some of us in the Lab, the work that we do includes creating what we call experiments to showcase and make more accessible some of the machine learning research that’s going on at Google. And a lot of this work shows up in a site called Experiments with Google. And time and time again we’re sort of blown away by what happens when you lower the barrier for experimentation and access to information and tools. And so today I want to tell you about one particular project, and especially the approach that we took to creating it. So why am I speaking at a homemade event when I work at a huge corporation? I think you’ll see that my team tends to operate in a way that’s pretty scrappy, experimental, collaborative. And this particular project happens to be a tool that other people have used to create a lot of homemade projects.

So to begin with, let me just talk about what it is. It’s a no-code tool to create machine learning models. So you know, that’s a mouthful. So I think I’m just gonna give you a demo.

So, this is sort of the home page for the tool. It’s called Teachable Machine. And I can create different types of what we call “models” in machine learning. It’s basically a type of program where the computer has learned to do something. And there’s different types. I can choose to create an image, an audio, a pose one… I’m just gonna go for image, it’s the easiest one. And I’m just gonna give you a demo so you see how it works.

I have this section here where I can create some categories. So I’m going to create three categories. I could call this “face” or something like that. And I’m gonna give the computer some examples of my face. So I’m going to do something like this, give the computer some examples of my face.

Then I’m gonna give the computer some examples of my left hand. And then I’ll give the computer some examples of a book. It happens to be a book that I like.

And then I’m gonna go to this middle section to sort of train a model. And right now it’s taking a while because all this is happening in my browser. So the tool is very, very private. All of the data is staying in my browser through a library called TensorFlow.js. So one of the main points here was to showcase that you can create these models in a really private way.

And so now the model’s trained. It’s pretty fast. And now I can try it out. I can see that it can detect my face. Let’s see, it can detect the book. And it can detect my hand.

And some interesting stuff happens. When I’m half in/half out, you see that the model’s trying to figure out okay, is it my face, is it my left hand… You can sort of learn things about these models, like the fact that they’re probabilistic.

And you know, so far maybe not so special. I think what really unlocked the tool for a lot of people is that you can export your model. So I can upload this to the Web, and then if you know how to code you can sort of take this code and then essentially take the model outside of our tool and put it anywhere you want. And then you can build whatever you want with it, a game, another app, whatever you want.
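
For anyone curious what that export step looks like in practice, here is a minimal sketch of loading an exported image model in the browser with the @teachablemachine/image library. The model URL is a placeholder for whatever link Teachable Machine gives you when you upload your model, and the element ID is an assumption for this example.

```typescript
import * as tmImage from '@teachablemachine/image';

// Placeholder: replace with the URL Teachable Machine gives you after uploading your model.
const MODEL_URL = 'https://teachablemachine.withgoogle.com/models/YOUR_MODEL_ID/';

async function run(): Promise<void> {
  // Load the model weights and the class labels (metadata).
  const model = await tmImage.load(MODEL_URL + 'model.json', MODEL_URL + 'metadata.json');

  // Assumes an <img> or <video> element with id "input" exists on the page.
  const input = document.getElementById('input') as HTMLImageElement;

  // Returns one probability per class, e.g. "face", "left hand", "book".
  const predictions = await model.predict(input);
  for (const p of predictions) {
    console.log(`${p.className}: ${p.probability.toFixed(2)}`);
  }
}

run();
```

From here the predictions can drive whatever you are building: a game, another app, whatever you want.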

We also have sort of other conversions, right. So you can create a model and not just have it on the Web. You can put it on mobile. You can put it in Unity, in Processing, in other formats and on other types of platforms.

And just a little note here, of course this is not just myself. I worked on this project with a lot of very talented colleagues of mine. Some of them are here. And I’m gonna showcase a lot of work from other people in this talk, so as much as possible I’m going to give credit to them. And then at the end I’ll show a little document with all the links, in case you don’t capture them.

Our projects start as experiments

So, I want to emphasize that a lot of our projects have really humble origins. You know, it’s just one person prototyping or sort of hacking away at an idea, even though it might seem like we’re a really big company or a really big team.

And just to show you how that’s true, you know, the origin of this project was actually a really small experiment that looked like this. The interface sort of looks the same, but it was really simplified. Like you had this sort of three-panel view, and you couldn’t really add too much data. It was a really, really simple project. But more so than that, even though it was technically very sophisticated, I think our ideas at the time were very…we were kind of exploring really fun use cases. And just to show you how much that’s true, I’ll show you a project that one of the originators of this idea, Alex Chen, tried out with his kid.

So you can see he’s basically creating these papier-mâché figurines with his kid, and so he’s training a model that then triggers a bunch of sounds. So it was really kind of fun at the time, just trying a lot of different things out.

And then we started hearing from a lot of teachers all over the world who were using this as a tool to talk about machine learning in the classroom, or talk about sort of the basics of data to kids. And then we finally heard even from some folks in policy. So Hal Abelson is a CS professor at MIT, and he was using the tool to conduct sort of hands-on workshops with policymakers.

Our projects grow with the right collaborators

So, we had a hunch that maybe this silly experiment could become something more but really didn’t know how to transform this into an actual tool. We also didn’t necessarily know what the best use cases would be. This is where the project took a really, really interesting turn, because essentially we met the perfect collaborator to push us into making it a tool.

And that person, his name is Steve Saling. He was introduced to us by another team at Google who had been working with him. And Steve is this amazing person. He used to be a landscape architect and he got ALS, which is Lou Gehrig’s disease. And he sort of set out to completely reimagine how people with this condition get care, and he created this space where…everything is API-fied. So he’s able to order the elevator with his computer, or turn on the TV with his computer. It’s really amazing. And so he actually found the original Teachable Machine, and someone else sort of was using it with him. And we basically got introduced to him, and the question was well you know, can we figure out if this could be useful to him and in what way. And how do we just get to know Steve and what he might want.

So, a little pause here to say that folks like Steve aren’t able to move, or in Steve’s case he’s not able to communicate. So he uses something called a gaze tracker, or a head mouse. And he essentially sort of looks at a point on the screen and can sort of press “click” and type a word or a letter. So he’s able to communicate, but it’s really, really slow.

And so the thought was okay, can we use a tool like Teachable Machine to train a computer to detect some facial expressions and then trigger something. And this of course is not new. The thing that was sort of new was not for me to train a model for him, but for him to be able to load the tool on his computer and train it himself. Like sort of put that control on Steve.

And specifically, we basically went down to Boston and worked with him quite a lot. He became sort of the first big user of the tool, and we made a lot of decisions by working with Steve and sort of following his advice.

And one of the things that the tool sort of allowed us to explore was this idea of immediacy. So, what were some cases where Steve wanted sort of a really quick reaction, and how could the tool help with this?

And one use case was he really wanted to watch a basketball game, and he really wanted to be able to cheer or to boo depending on what was happening in the game. And that was something that he was not able to do with his usual tools, because you have to be really fast in cheering or booing when something happens. And so, he trained a simple model where basically he could sort of open his mouth and trigger an air horn. So that was one example.

So we kind of immersed ourselves in Steve’s world, and by getting to know him we realized that maybe other ALS users could find something like this useful. So we started exploring audio. Like, could we add audio as another input modality to potentially help people who sort of had limited speech? And that led us to incorporate audio into the tool. So I actually have a little example here that I also want to show, just so you guys see how this works. I’m loading up a project that I had created beforehand from Google Drive.

This is some data that I had collected beforehand, some audio data. So there’s three classes. There’s background noise, there’s a whistle, and there’s a snap. And let’s see if it works.

So you can see the whistle works. You can see the snap works. So same thing here. I can kind of export the model to other places.
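
As a rough illustration of how an exported audio model like this one can be used outside the tool, here is a minimal sketch with the @tensorflow-models/speech-commands library, which is what Teachable Machine’s audio export builds on. The model URL and the probability threshold are placeholders/assumptions for this example.

```typescript
import * as speechCommands from '@tensorflow-models/speech-commands';

// Placeholder: replace with the URL of your exported Teachable Machine audio model.
const MODEL_URL = 'https://teachablemachine.withgoogle.com/models/YOUR_MODEL_ID/';

async function listenForSounds(): Promise<void> {
  // Create a recognizer backed by the exported model and its class metadata.
  const recognizer = speechCommands.create(
    'BROWSER_FFT',            // use the browser's FFT for audio features
    undefined,                // no built-in vocabulary; we supply our own model
    MODEL_URL + 'model.json',
    MODEL_URL + 'metadata.json'
  );
  await recognizer.ensureModelLoaded();

  const labels = recognizer.wordLabels(); // e.g. ["Background Noise", "Whistle", "Snap"]

  recognizer.listen(async result => {
    // result.scores holds one probability per label, in the same order as `labels`.
    const scores = result.scores as Float32Array;
    let best = 0;
    for (let i = 1; i < scores.length; i++) {
      if (scores[i] > scores[best]) best = i;
    }
    console.log(`Heard: ${labels[best]} (${scores[best].toFixed(2)})`);
  }, {
    probabilityThreshold: 0.75, // assumption: only report fairly confident predictions
    overlapFactor: 0.5
  });
}

listenForSounds();
```

The console.log is just a stand-in; in a real project that callback is where you would trigger whatever the sound is supposed to control.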

But the interesting thing here is that the audio model itself actually came from this engineer named Shanqing Cai, and he created that audio model for people like Marc Abraham, who also has ALS, by exploring the same idea with him: how can I create models so that people like that can trigger things on the computer. So the technology itself also came from this exploration of working with users who have ALS.

And you can’t see what’s happening here, but essentially Dr. Abraham has sort of emitted a puff of air, and with that puff of air he’s been able to turn on the light.

So we decided that okay, this could be useful to other people who have ALS, and we decided to essentially open up a beta program for other people to tell us if they had other similar uses, and to try to think about maybe other analogous communities that could find some interest in Teachable Machine.

And that’s how we met the amazing Matt Santamaria and his mom Cathy. And Matt had come to us and told us that he was playing with the tool and he wanted to try to use it for his mom. So he actually created a little photo album that would sort of change photos for his mom, and he could load them remotely. But his mom, because she didn’t really have too many motor skills after a stroke, wasn’t able to control the photo album.

So we actually worked with Matt, and you can’t see it here because it’s just an image, but we created a little prototype of his mom being able to say “previous” or “next,” training an audio model that was able to change the picture being shown on the slideshow. And that was just sort of a one-day hack exploration that we did with Matt.

And then ultimately he sort of kept exploring potential uses of Teachable Machine with his mom, and he created this tool called Bring Your Own Teachable Machine, which he open sourced and has shared online. And what it allows you to do is plug in any type of audio model and then link that to basically a trigger that sends a text to any phone number. So, pretty cool to see what he did here.

And then finally, just seeing that the tool sort of ended up being useful in a lot of analogous communities, once we launched it publicly we saw a lot of uses outside of accessibility. So I wanted to show you a few of my favorite ones today.

This is a project by researcher Blakeley Paine. She used to study at the MIT Media Lab. And she was really interested in exploring how to teach AI ethics to kids. So she open sourced this curriculum, which you can find online. And it just has a bunch of exercises, really interesting exercises, that she takes kids through.

So this one for example explains to them the different parts of a machine learning pipeline. So in this case collecting data, training your model, and then running the model. She sort of gives them different data sets. So you can see here in the picture—it’s a little blurry but you can see the kids got forty-two samples of cats, and then seven samples of dogs. And so the idea’s for them to train a model and then see okay, maybe the model’s working really well in some cases, maybe it’s working really poorly in some cases. Why is that? And have a conversation with them about AI ethics and bias and sort of how training a model is related to the type of data that you have.

Here’s another example of her workshop. She sort of asks kids to label their confidence scores. And you’d be surprised. We were invited to join one of the workshops, and these kids sort of established a pretty good kind of insight into the connection between how the model performs and the type of data that it was given.

It’s not just Blakeley. There are other organizations that have created lesson plans. This one is called ReadyAI.org. And again, they kind of use these simple training samples. In a lot of schools you can’t use your webcam, so a lot of them sort of have to upload picture files or photo files in order to use the tool. And that was a use case that we found out about through education, that we had to sort of enable using the tool without a webcam.

Sorting Marshmallows with AI: Using Coral + Teachable Machine [an excerpt of this video plays muted as Alvarado describes the project]

And then more in the realm of hardware, there’s a project called Teachable Sorter, created by these amazing designer/technologists Gautam Bose and Lucas Ochoa. And what it is is this very sort of powerful sorter. So, it uses this accelerated hardware board; it’s kind of like a Raspberry Pi. And they essentially train a model in the browser and then they export it to this hardware. And they both were super, super helpful in sort of creating that conversion between web and this little hardware device.

Now, this is a very complex project, so they made a simple version that’s open source, and you can find the instructions online. And it’s a Tiny Sorter. So a sorter that you can put on your webcam. And so again, you can train a model with Teachable Machine in the browser, and then export it to this little Arduino, and then sort of attach this to your webcam and sort things.

In a different vein, there’s this project called Pomodoro Work + Stretch Chrome extension. And what it is is a Chrome extension that reminds you to stretch. And the way it works is this person trained a pose model that basically detects when people are stretching, right.
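
To give a sense of how a pose-based trigger like that might look in code, here is a minimal sketch using the @teachablemachine/pose library. The model URL, the class name "stretching", and the threshold are assumptions for this example; a real extension would also need its timing logic and the Chrome extension plumbing.

```typescript
import * as tmPose from '@teachablemachine/pose';

// Placeholder: replace with the URL of your exported Teachable Machine pose model.
const MODEL_URL = 'https://teachablemachine.withgoogle.com/models/YOUR_MODEL_ID/';

async function watchForStretching(): Promise<void> {
  const model = await tmPose.load(MODEL_URL + 'model.json', MODEL_URL + 'metadata.json');

  // Built-in webcam helper from the library: 200x200 pixels, mirrored.
  const webcam = new tmPose.Webcam(200, 200, true);
  await webcam.setup();
  await webcam.play();

  async function loop(): Promise<void> {
    webcam.update();
    // First estimate the body keypoints, then classify that pose.
    const { posenetOutput } = await model.estimatePose(webcam.canvas);
    const predictions = await model.predict(posenetOutput);

    // Assumption: one of the trained classes is called "stretching".
    const stretching = predictions.find(p => p.className === 'stretching');
    if (stretching && stretching.probability > 0.9) {
      console.log('Stretch detected!'); // a real extension would record the break here
    }
    window.requestAnimationFrame(loop);
  }
  window.requestAnimationFrame(loop);
}

watchForStretching();
```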

There’s a team in Japan that integrated Teachable Machine with Scratch. Scratch is a tool and a language environment for kids to learn how to code. And unfortunately a lot of the tutorials are in Japanese, but the app itself sort of works for everybody. So you can take any model and run it there.

And then a lot of other just really creative projects. Like this one is called Move the Dude. So it’s a Unity game that you control with your hand. And your hand essentially is what’s moving the little character around. So again, because these models work on the Web you can kind of make a Unity WebGL game and export it.

Lou Verdun on Twitter [1], [2]

And then just one last example. A lot of designers and tinkerers started using the tool to just play around with funky design. So Lou Verdun created these little sort of explorations. In this one he’s doing different poses and matching those poses to a Keith Haring painting. And then in this one it’s sort of like a cookie store? And then you can draw a cookie and get the closest-looking cookie.

And then this GitHub user SashiDo created a really awesome list of a ton of very inspiring Teachable Machine projects, in case you want to see more of them.

Start with One

So, just to sort of go back to the process of how we made this tool. We took a bit of time to think about this way of working and informally started calling it “Start with One” amongst ourselves, amongst my colleagues. And it’s not a new idea, you know. This is inclusive design. I’m not inventing anything new. Just the words Start with One were sort of a way to remind ourselves of…you know, the technical nature of it. Like, you can just choose one community, one person, and sort of start there. And we’re really just trying to celebrate this way of working, this ethos. Start with One was just an easy way to remember that.

An animation beginning with "one person" in a circle and progressively larger circles containing it labeled "one community," "analogous communities," and finally "many more people."

So, just coming back to this chart for a second. This idea of like a tool that we created for one person ended up being useful for a lot more people. But I want to clarify that the goal is not necessarily to get to the right of this chart. A lot of the projects that we make, they just end up being useful for one person, or for one community. And that’s totally okay. And it’s not necessarily the traditional way of working at Google, but it’s okay for my team.

And you know, this idea of sort of starting with maybe the impact or the collaboration first rather than the scale, it doesn’t mean it’s the only way of working, it doesn’t mean it’s the best way of working. It’s just a way of working that has worked really well for my very small team.

So there are a lot of other projects that sort of fit this bill, and if you’re curious about them you can see some of them on this page, g.co/StartWithOne. It’s also a page where you can submit projects, right, so if any of you have a project that fits into this ethos, you’re welcome to submit there. And you know, right now, when times are really hard for people, I think it’s easy to be crippled by what’s going on in the world. I’ve certainly felt that way. Maybe even a little powerless at times. And for me, when I think of Start with One, I remember that you know, even small ideas can have a big impact if you apply them in the right places. And I don’t just mean pragmatic, right. Like, human need is also about joy and curiosity and entertainment and comedy and love. So it’s not just a practical view of this.

So, just a little reminder for all of you, I suppose, to look around you, to collaborate with people who are different than you. Or even people who you know really well—your neighbors, your family. I think the idea is to offer your craft, and collaborate with one person, and solve something for them. You know, even if you’re starting small you’d be really surprised by how far it can get.

That’s it. There’s this link, tinyurl.com/teachable-machine-community, which I’ll paste in the Discord. And you can find all the other links in my talk through that link, in case you didn’t have time to copy it down. Thank you very much.


Golan Levin: Thank you so much, Irene. 

Irene Alvarado: I’ll keep this up for a little bit just in case.

Levin: Yeah, that’s great. Thank you so much. It’s beautiful to see all these different ways in which a diverse public has found ways to use Teachable Machine in ways that are very personal and often, you know, scaled to what a single person is curious about. Like you know, a father and a child, or people with different abilities who can use this in different ways to make easements for themselves. It’s really amazing.

I’ve got a couple questions coming from the chat. So, you mentioned this Start with One point of view. Is this a philosophy that’s just your sub-team within Google Creative Lab? Or does Google Creative Lab have a manifesto or set of principles that guide its work in general? And if so what sort of guides the work there, and how do you fit into that?

Alvarado: Yeah, great question. No, I wouldn’t say it’s like a general manifesto, or even of the lab itself. I would say it’s a way of thinking within my sub-team of the lab.

And again, I do want to say it’s not like I’m talking about anything new. Like inclusive design and codesign…people have been talking about this for ages. It’s really just like a shorthand, a keyword, for us to refer to these types of projects. But I would say the Creative Lab does pride itself in basically embarking on close collaborations. So we tend to see that projects where we collaborate very closely—like not making for people but making with people—end up being better. So, not all projects can be done in that way necessarily, but I would say a good amount are.

Levin: So I know that there’s a sort of coronavirus-themed project that uses the Teachable Machine, by I think it’s Isaac [Blankensmith], which is the Anti-Face-Touching Machine, where he makes a machine that whenever he touches his face it says, “No” in a computer-synthesized voice. It’s super homemade and it sort of trains him to stop touching his face.

But I’m curious, how has the pandemic changed or impacted the work that your team is doing, or that people who are working with Teachable Machine are doing, or that the Creative Lab is doing? How have things shifted in light of this big shift around the world?

Alvarado: Yeah. I mean, that’s a good question. I don’t know if I have an extremely unique answer to that except to say that we’ve been impacted like anybody else. I mean, luckily I am in an industry where I can work from home, so I feel incredibly lucky and privileged to be able to be at home, safe, and not be a frontline worker.

It’s changed the fact that certainly collaborations are harder to do. Like you saw in the pictures with Steve, we like to go to where people are and sit next to someone and actually talk to them and not be on our computers. And that has certainly gotten harder. But we’re trying to make do with things like Zoom, just like everybody else.

I’d say the hardest thing is not collaborating with people in person. It just takes like a shift, because there’s so much Zoom fatigue. And you don’t get to know people outside of your colleagues. Like when I’m trying to get to know a collaborator or someone outside of Google, like anybody else you benefit from going to dinner or grabbing coffee or just being a normal person with them, having a laugh. And that doesn’t really exist anymore. Like Steve actually, the person that I was talking about who has ALS, he’s so funny. He makes so many jokes. He’s so restricted with the words that he can say because it takes him a long time to type every sentence, you know, it takes two minutes to type a sentence. But he’s so funny. He has such a great personality and humor. And I think that would be very hard to come across through Zoom. Starting with the fact that it’s very hard for him to use Zoom because someone else has to trigger it for him. So certainly that type of collaboration would’ve been very hard to do this year.

Levin: Thank you so much for sharing your work and this amazing creative tool. I like to think that there are several different gifts that each of our speakers is giving to the audience. And one of them is kind of just the gift of permission to be a hybrid, right. Like you are a designer, and a software developer, and an artist and a creator and an anthropologist and all these other kinds of—you know, things that bridge the technical and the cultural, the personal and the design-oriented, and all this together.

Another gift is just the gift of the tool that you’re able to provide to all of us. Me, and my students, and kids everywhere, and adults. Thank you so much.

Alvarado: Yeah I mean, my final word is that it’s a feedback loop, right? Like I was inspired by your work, Golan. Like you give a lot of people in this community permission to be hybrids, and I didn’t know that that was possible. And then everyone making things with Teachable Machine inspires us to do other things or to take the project in another direction. So I think we have the privilege and honor of working in an era where the Internet just allows you to have this two-way communication. And I think it makes so many things better.

So thank you, Golan, and thank everyone for organizing the conference: Lea, Madeline, Claire, Bill, Linda. You guys all make hybrids around the world possible.

Levin: Thanks a lot, Irene.

In a few minutes, at 6:30, we’re going to begin a lecture presentation by Andrew Quitmeyer from the Digital Naturalism Laboratories in Panama. And so we’ll be seeing Andrew pretty soon. Thanks, everyone. See you soon.
