A series of illustrations demonstrating how a person in the bottom part of the Mechanical Turk cabinet could position himself to keep his presence hidden.

Right. So, this is the Mechanical Turk. It’s a tech conference, so I’m showing a picture of the Mechanical Turk. More contemporaneously known as The Turk, The Chess Turk, Kempelen’s Turk, or the Automaton, it was an apparently autonomous chess-playing robot built by Wolfgang von Kempelen in 1770. For the best part of sixty or seventy years, it entranced the nobility of Europe. If you had a social calendar in the late 18th century, this was the bomb.

And the reason why it was so popular, the reason why it was so incredible, is that nobody could figure out how it worked. It was either an absolute technological marvel far surpassing anything seen to that point, or it was magic. It was possessed, in one story, by the ghost of a Prussian mercenary who was an amputee. Of course today we know that the Turk worked by means of a small chess player hidden inside it and moving compartments that could cunningly hide that person.

In 1804, Kempelen died and the Turk was purchased by a Bavarian musician called Johann Mälzel. At this stage, the Turk was in a pretty bad condition, having spent thirty-five years touring Europe, and Mälzel spent some time learning its secrets and touching it up and restoring it, and put it on permanent exhibition in London. And this is where it gets interesting.

In 1836, a young assistant editor from a little-known literary journal called the Southern Literary Messenger went to go and see the exhibition and wrote his review of it. And he said,

No exhibition of the kind has ever elicited so general attention as the chess player of Mälzel. Wherever seen it has been an object of intense curiosity, to all persons who think. Yet the question of its modus operandi is still undetermined. Nothing has been written on this topic which can be considered as decisive — and accordingly we find everywhere men of mechanical genius, of great general acuteness, and discriminative understanding, who make no scruple in pronouncing the Automaton a pure machine, unconnected with human agency in its movements, and consequently, beyond all comparison, the most astonishing of the inventions of mankind. And such it would undoubtedly be, were they right in their supposition.
Edgar Allan Poe, “Maelzel’s Chess-Player”

This young assistant editor was Edgar Allan Poe, who two years after this essay would publish his first collection of short stories. And it’s actually kind of incredible. It’s really worth reading. He lays out the groundwork for some of the early tropes of science fiction. He establishes what he calls “tales of ratiocination,” what we today call detective stories. And he establishes an idea of what he called literary analysis: the use of fiction, the use of stories (in his particular case horror stories) to reveal truths about the world. To show us insight into complex systems and structures.

It’s no coincidence that one of the most significant early pieces of technological criticism we have comes from a horror writer. Poe didn’t write horror to titillate, to shock, to scare. He wrote horror because he knew it was a way of talking about complex things in the world, a way of revealing some truth, a way of guiding readers through a complicated narrative that reveals some truth about ourselves towards the end.

150 years after Poe’s incredible essay, we’re still talking about technology as if it’s magical. In the 1980s, 46% of Time articles dealing with the subject of the personal computer, computational culture, and the people involved in the development of the computer framed those articles in the context of magic and the occult. 46%.

And the reasons for this are complicated. It’s not as simple as saying, “Well, you know, technology’s hard and it’s difficult to explain, so they use magic as a metaphor.” It’s about cultural appropriation. The 1980s was a time of great social change, the end of the Cold War, shifting social and cultural structures all around the world. And out of what had previously been the realm of experts and a weird subculture came this thing called the personal computer, and people had to figure out a way of dealing with that. And the best way of dealing with it was to assimilate it using a language that people were already familiar with, the language of magic and the occult. Because we need these complex systems explained. We need them structured for us in order that we can actually bring them into our lives.

A bell curve labeled with "horror" at the left side, "reasonable expectations" in the middle area, and "magic" at the right side.

And of course when magic goes wrong, the narrative of magic can quickly turn to horror. If you’ve built a tacit narrative that suggests that all technology is somehow magical, then when it goes wrong of course it becomes horrifying. We kind of construct, internally, a bell curve of what we reasonably expect technology to do, what we can expect a certain technology, particularly autonomous and computational technologies, to do, and then we structure ourselves within that. So for instance if you took Facebook, the middle of this bell curve might be your daily Facebook use: a couple of likes, sharing a GIF, spying on your ex, that kind of stuff.

Towards the top end, you’d have quite joyous experiences, perhaps. So, that said ex sending you a private message inviting you to go for a drink, accompanied by feelings of joy and perhaps slight inadequacy.

Towards the lower end, you’d have things like someone breaking into your Facebook account to take your personal details to get into your Amazon account to convince your credit card company that they’re you and steal all your money. Not so joyous. Bit horrible, really.

But still within the framework of reasonable expectations, because we’ve heard about these stories. We know how that system works well enough to be able to balance those risks and benefits.

Outside of that, you then have reasonable things that are unexpected. And I use “reasonable” in the sense of rational. It’s rationalizable. And that might be, for instance, Apple last week deciding to brick the iPhone of everyone who’d had it repaired. Totally reasonable. It was rationalizable. It’s not magic. But it was unexpected. And it kind of changed those boundaries.

Beyond that, at the far end, you have horror and magic. Horror, the unimaginable. Worse than the most dreaded possibility. And at the other end magic, the impossible. The physically impossible. And magic is the thing that technology angles towards. We angle to achieve magical things, but often end up with horror.

Still from the movie "Ringu" showing a girl with her face obscured by hair crawling out of a television set

A great example of how these expectations work and what happens when they go wrong is the 1998 Japanese supernatural horror film Ringu. The horror of Ringu worked because we built a set of expectations about how a TV, a video player, and a tape should work. Worst case scenario, video player chews up tape, small house fire. Not vengeful spirit of murdered teenage girl climbing out of television to devour your soul. The horror of Ringu is in that collapsed expectation. The sense of uncertainty and instability that’s suddenly introduced to both the viewers and the victims.

And in a theoretical sense there’s really no difference between the vengeful spirit of a murdered teenage girl crawling out of the television to devour your soul and cats trying to catch mice on an iPad. To the cats, much like the girl in Ringu, the world of the screen and the physical world are one continuous reality. Why shouldn’t the mice come out the side of the iPad? They don’t have a framework for what the reasonable expectations of the behavior of that technology are. And in a sense, the cats are kind of angled to the other end of that bell curve. They’re going for magic. They’re going for an iPad that can produce mice from nowhere.

Sorry. On with the show.

This is an incredibly important piece of footage. It’s got a French name that I’m not even going to try and pronounce, but in English it’s often called “The Arrival of the Mail Train.” It’s a fifty-second film that was made in 1895 by the Lumière brothers, and it’s very significant for being one of the first pieces of moving image that was shown to large public audiences, one of the first pieces of moving image to go into what would become cinemas.

And it’s accompanied by an urban myth that we often hear. There are a couple of indications that the myth might be true. But the urban myth says that upon seeing it, people fled the cinema. They saw this train rushing towards them, and they ran out of the cinema screaming. You see, people had experience of trains (trains are big heavy metal things that destroy things that get in their path) but didn’t have experience of moving image. They hadn’t yet built a series of expectations about what they might expect moving image to do, and so they ran. They ran in fear, in terror.

This is a video that came out last year that is very similarly framed, interestingly, but features some Colombian guys testing out and showing off the automatic stopping capabilities of their new Volvo. Guess where this is going.

https://www.youtube.com/watch?v=_8nnhUCtcO8

With predictable results, right? If there’s a sliding scale, at one end of which is Europeans running out of cinemas in the late 19th century and at the other end cats with iPads, this is firmly in cats-with-iPads territory. This is up there. Roughly a hundred years of education that says “if a car is accelerating towards you, get out of the way” have just gone. Gone. Because of a software gimmick. The instability this creates is incredible. That’s why we have so many discussions about autonomous vehicles and things like that.

Volvo later issued a statement saying the reason this happened was that the car did not have the pedestrian detection software package, which is optional. It’s 2015 and pedestrian detection is an optional add-on? No, that should be built into the firmware of the thing.

So these kinds of collapses of reality and expectation are starting to happen more and more. And this becomes incredibly worrying when we talk about these technologies being brought into the home. Because the home is a thing you need to survive as a biological animal. It gives you shelter and heat and food and light and water and things like that. And now suddenly these incredibly fallible, destabilizing objects are coming into that environment. The smart fridge is the golden fleece of the Internet of Things, that sort of thing that everyone’s been aiming for since the 1970s but never seems to get near to. And you can imagine what might happen if a smart fridge suddenly starts behaving [inaudible; following video starts playing].

Suddenly we end up with a world of haunted houses, where fridges are malfunctioning because of firmware failures or whatever. And the haunted house is a really well-established part of the horror genre. One of my favorite haunted house films is the original House on Haunted Hill, and also the remake, which is pretty good. But the thing about House on Haunted Hill is that it’s not a supernatural thriller. It doesn’t have any supernatural ghosts or anything in it. What happens in House on Haunted Hill is that Vincent Price uses the supernatural, manufactures a haunted house, in order to kill his wife and her lover. So he builds this air of spirituality and hauntedness in order to perform a simple, everyday murder. Not everyday, hopefully. You know.

But that’s quite a common tactic, to create the appearance of something going on that really isn’t going on, and you as the viewer don’t even find out until near the end that that’s the truth. Alfred Gell, who’s an incredible technology writer, would call this a technology of enchantment, which he defines as,

technical strategies that exploit innate or derived psychological biases so as to enchant the other person and cause him or her to perceive social reality in a way that is favorable to the social interest of the enchanter.
Alfred Gell, “Technology and Magic”

That’s a long way of saying that basically a technology of enchantment is anything that deceives you, anything that creates a fiction or a sense of reality that simply isn’t true. And you can think here of everything from advertising to the problem of the filter bubble on the Internet.

The title screen of the video game Doom shown on the display of a printer

So how does this start to look when it’s moved into the house? This, two years ago, was kind of an interesting example, where some security experts managed to hack a printer over WiFi and replace the firmware with the video game Doom. It’s a fun trick and it was kind of nice, but it exposes the structural problems of these kinds of devices. Any device can be hacked in this way, anything. And if you’re relying on devices like that for the things you need to live, things that you need to eat, things that supply you with heat and water and things like that, then that’s really problematic.

Nest has been hacked repeatedly. It’s been shown to be very easy to hack. It’s also, more sinisterly perhaps, been used a lot in DDoS attacks. We know that these devices get used as nodes when attacking other people. So your Nest might not be haunting you, but it might be haunting someone else.

And it doesn’t necessarily have to be a malevolent action in order to haunt someone. Sometimes just simply ill-considered, badly-designed stuff can be really haunting. Samsung last year released a smart TV. The smart thing about the TV is that it’s voice-controlled. Digging through the privacy policy, some researchers found that it said,

Samsung may collect and your device may capture voice commands and associated texts so that we can provide you with Voice Recognition features and evaluate and improve the features.
Samsung Privacy Policy–SmartTV Supplement [later modified]

That’s the old “we’re improving your experience” thing.

Please be aware that if your spoken words include personal or other sensitive information, that information will be among the data captured and transmitted to a third party through your use of Voice Recognition.
Samsung Privacy Policy–SmartTV Supplement [later modified]

That’s Samsung saying to you, “When you’re in your living room, don’t say anything you wouldn’t want anyone else to hear.” In your living room. It’s baffling that that has been designed in. It doesn’t make any sense.

Photo of the Fisher Price Smart Toy Bear

Perhaps more worryingly, this was two weeks ago: Fisher Price’s Smart Toy Bear. I’m not sure why teddy bears need to be connected to the Internet, but there we are. It’s the age we’re in. Fisher Price, it turned out, had left the API for these smart bears, which interact with children, just wide open. Anyone could access it. Anyone could get into that API and find out the name, age, gender, and location of any of the children with these bears. That’s pretty terrible. So there was a brouhaha. Fisher Price apologized and said they’d rectified the problem. Their method of rectifying the problem was to say in the privacy policy, again very deep in the policy,

You acknowledge and agree that any information you send or receive during your use of the site may not be secure and may be intercepted or later acquired by unauthorized parties.
[NB: this is actually from the terms of use of VTech, who had a similar hacking issue around the same time as Fisher Price]

In other words, “you’re responsible for our terrible API.” And there’s a whole thing here about how privacy policies and end user license agreements are often the most overlooked bit of the design, and they’re used for establishing the relationship between users and developers and companies and stuff. And because users are the last people who ever look at those things, they’re often very exploitative.

Smart locks are perhaps the most baffling of all the Internet of Things projects. I don’t know what’s wrong with locks that requires them now to be connected to the Internet. They were fine in the first place. The August is one of the most notable examples of the smart lock. It’s one of the most widely sold and widely reviewed. It essentially works by detecting when your phone is near through Bluetooth, and then automatically locking or unlocking the door accordingly. Wired, when reviewing this (and they get kind of breathless over anything with an LED in it), said that it only worked about 80% of the time. That’s not a great stat for something that keeps all your stuff safe, right? That’s pretty terrible.

Beyond that, we know that phones are full of problems and bugs. Bluetooth often doesn’t work. WiFi can collapse. Power… You still have to take your key with you with this thing (that’s the laughable thing), because if your phone runs out of battery, you’ve got to open the lock the old way. Locks are pretty well-designed. They’ve got a real kind of good UX thing with the whole clunk, turn going on. It’s quite healthy, I think. So, smart locks: baffling.
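To make concrete how many fallible links sit in that chain, here is a minimal sketch of the kind of Bluetooth-proximity logic a lock like this implies. The names, the threshold, and the stubbed scan function are hypothetical illustrations, not August’s actual firmware; the point is simply that every step (the scan, the signal-strength guess, the phone’s battery) is a place where the “reasonable expectations” curve can collapse.

```python
# Hypothetical sketch of Bluetooth-proximity auto-unlock logic.
# Names, thresholds, and helpers are illustrative, not August's real firmware.

import time

RSSI_UNLOCK_THRESHOLD = -60   # assumed signal strength that counts as "near the door"


def scan_for_phone():
    """Pretend Bluetooth scan: returns (seen, rssi) for the owner's phone.

    In reality this step fails often: radio interference, a sleeping phone,
    a dead battery, or a flaky Bluetooth stack all look the same to the
    lock, namely "phone not found".
    """
    # Placeholder: a real implementation would call a BLE library here.
    return False, None


def decide(seen, rssi, locked):
    """Naive policy: unlock when the phone looks close, lock when it vanishes."""
    if seen and rssi is not None and rssi > RSSI_UNLOCK_THRESHOLD:
        return "unlock" if locked else "stay"
    return "lock" if not locked else "stay"


if __name__ == "__main__":
    locked = True
    for _ in range(3):   # a few polling cycles, for illustration
        seen, rssi = scan_for_phone()
        action = decide(seen, rssi, locked)
        if action == "unlock":
            locked = False
        elif action == "lock":
            locked = True
        print(f"phone seen={seen}, rssi={rssi} -> {action} (locked={locked})")
        time.sleep(1)
```

As written, the stubbed scan never sees the phone, so the door just stays locked; but that same “not seen” signal is what a dead battery, a crashed app, or a bit of radio interference produce, which is exactly the only-works-80%-of-the-time problem.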

So, nearly two hundred years after Poe’s incredible essay (I really hope everyone reads it), we’re still talking about technology in terms of magic and the occult. We’re still looking for some magical solution through it. I had before a slide from Nest’s “Magic of Home” advert, but I didn’t want to go too deep on Nest. And that’s fine, that’s fine, that’s how we assimilate complex systems and technologies into our lives. Because we don’t have time to actually critically engage with them on any deep level. So magic is a helpful metaphor.

But we have to be aware that when you create magic or occult things, when they go wrong they become horror. Because we create technologies to soothe our cultural and social anxieties, in a way. We create these things because we’re worried about security, we’re worried about climate change, we’re worried about the threat of terrorism. Whatever it is. And these devices provide a kind of stopgap for helping us feel safe or protected or whatever.

But in doing so we run this risk, and all those objects prove this, of unleashing a stream of useless crap on people that isn’t magical, and actually just adds more horror. In the 1980s, Time believed that the personal computer was the magical solution. It would make sense of the world. Suddenly people would have power, magic power. Then it was virtual reality. Then the web. Then search. Then Web 2.0. WiFi. Social networks. Big Data. Augmented reality. The Internet of Things. Open data. Civic tech. Bitcoin.

The people who tell us these things are going to save us are called evangelists. The people who crave them are called fetishists; fetishism is the belief in some higher power inside of objects. But magic isn’t real. Horror is. No one in this room will ever experience magic because it’s physically impossible. But regrettably, most people will experience horror. The death of a loved one or a violent crime, that happens. That’s a real thing.

The Wired writer, writing about his experience with the smart lock, found one day when he got home that his house was wide open. The smart lock had malfunctioned and just opened. Couldn’t figure out why. Turned out later it was incompatible with another app. Again, not a problem keys have. And he said something quite harrowing. He said,

I haven’t been able to get to sleep without securing the chain since then. I get freaked out at the prospect of someone walking through my unlocked door and standing over my wife and me while we sleep.
Joe Brown, “Review: August Smart Lock”

His home had become a haunted house. The unimaginable had happened. Horror had struck him. And we can’t design for that. You can’t design for the unimaginable because it would be paradoxically prescient to be able to do so. No one can do that. But being aware that in trying to create magical solutions we unleash the possibility of horror is really important.

Ambrose Bierce is another incredible historical figure (read his Wikipedia page; just a madly brilliant guy) who disappeared around 1914, going to fight in the Mexican Revolution. He was also a devotee of Poe. He was quite famous for writing a thing called The Devil’s Dictionary, which is a satirical dictionary. But he also wrote a short story that I love called “The Damned Thing.”

In “The Damned Thing,” there’s a small American settlement of eight or nine people who are slowly being killed off, and at first they blame each other; they suspect a murderer is loose. But in the end, they realize that they’re being stalked by an invisible monster. Perhaps the most unimaginable horror, a horror so unimaginable that it’s invisible. And it ends with the main character sort of pleading his sanity to the sky, shouting his benediction. He says,

As with sounds, so with colors. At each end of the solar spectrum the chemist can detect the presence of what are known as “actinic” rays. They represent colors—integral colors in the composition of light—which we are unable to discern. The human eye is an imperfect instrument; its range is but a few octaves of the real “chromatic scale.” I am not mad; there are colors that we can not see.

And, God help me! the Damned Thing is of such a color!
Ambrose Bierce, “The Damned Thing”

Thank you.


Nicolas Nova: Quick question for you. I think someone in the room asked whether you’d be interested in running for mayor of San Francisco. [Tobias laughs] Well…no comment on that.

Tobias Revell: No, no comment. I don’t want to get involved in politics at this early stage in my career.

Nova: My other question was about something you said in the conclusion, that “there’s no way to design for that.” But as a designer, how you work is related to that. Can you talk a little bit about your speculative design practice, broadly?

Revell: Well, yeah. I work in speculative design, which is in a sense a kind of fuzz testing of design. It’s about making perhaps unexpected outcomes real, trying to make the unexpected real in order to test it on people and see how they feel about it. It’s a field of design that fully recognizes that in any development there are trade-offs. Someone’s going to suffer, something’s going to go wrong. And recognizing that and inviting people to realize that as part of this narrative is really important. It’s fine to say that technology is magic, but it’s also fine to say, “well, we’ve all seen The Sorcerer’s Apprentice.” We know what happened there.

Nova: Thank you.

Revell: Thank you.

Further Reference

Bio page for Tobias and session description at the Lift Conference web site.

Tobias composed a short video setting the Lumière brothers’ train clip and the car auto-stopping test side by side, “showing how both demonstrate total faith in what the technology claims to be and that it will stop, despite the obvious outcome.”