Hi, everyone. Glad to be back.

23andMe is a US-based company that will, for a fee, analyze your genetic ancestry. Provide some saliva, and they’ll send you a summary that compares your DNA with thirty-one identified populations across the world. It seems that this sort of genetic analysis is the next big thing in family history, and you may have noticed that Ancestry offers a similar service. 23andMe, however, also provides an application programming interface, an API, so that developers can create cool third-party apps with your DNA. Don’t worry, you do actually have to provide permission. So Facebook won’t automatically start sending you friend requests based on your genes. At least not yet.

But it didn’t take long for the ethical boundaries of this sort of service to be tested. One developer created a genetic access control authentication system. Using it, online sites or services could restrict access to people who had a specific genetic makeup. This was an actual example from the guy who developed it:

Screenshot of an error message stating "Authorization Status: Invalid! You are 22% of the permitted European ancestry."

The developer’s API access was quickly revoked, with 23andMe noting that their Terms of Use prohibit applications that contain, display, or promote hate.

Four years ago, I stood on this stage describing a project that I was working on with Kate Bagnall called Invisible Australians. We were and are trying to encourage use of the National Archives of Australia’s collection of records that document the workings of the White Australia Policy in quite confronting detail.

As an experiment, I downloaded thousands of images from the Archives’ collection database. Most were certificates used in the control of non-white immigration, visually compelling documents that include both portrait photographs and handprints.

I ran a facial detection script over these images to extract the portraits, and created an online resource called The Real Face of White Australia.

For a weekend project, it’s had a significant impact in the digital humanities world. Some people criticized it, though, because they thought we were actually selecting records based on race. This was just a misunderstanding of both the records and the technology that we were using. Even if I’d wanted to, I wouldn’t have had a clue back then about how I would categorize these portraits by race.

Now I do.

I can sign up for an account with a service like Face++ and use their API to analyze an image of a person’s face to determine both race and gender. After a bit of a break, because of life, Kate and I are getting back to Invisible Australians. In 2011, I had about 12,000 images from one series in the National Archives. I’ve now hosted more than 160,000 images from twenty-two different series.

But the intervening years have also brought changes in the Australian government’s treatment of asylum seekers. They’ve brought greater power for security agencies and the normalization of electronic surveillance. When the White Australia Policy was implemented, portrait photographs and fingerprints were the latest in crime-fighting technology. Just over a month ago, the Australian government announced its newest national security weapon, a national facial recognition system to be known henceforth as The Capability.

It’s true, it’s true. I suspect they already have the movie rights in mind.

The system would assist authorities in putting a name to the face of terror suspects, murderers, and robbers. Tools that help identify faces can offer powerful new means of discovery and analysis within the holdings of our cultural collections. But can those of us who work with these tools avoid engaging with broader systems of surveillance, categorization, and control? For Kate and me, the parallels are just too strong. History is not just about the past.

I’m talking today about two related technologies: facial detection and facial recognition. Facial detection simply tells you if there’s a face in an image. It’s the technology that draws little boxes around faces when you’re taking photos, and it’s pretty efficient and well-established. This was the technology that I used to create our wall of faces.

Basic detection is now being supplemented by algorithms that examine the shape of facial features and the characteristics of things like skin, so that they can estimate gender, age, and race. They can also tell you whether the person is wearing glasses, and the quality of their smile.

Facial recognition, on the other hand, first detects faces within an image, and then searches for those faces within an existing set of previously-identified images. The Capability, for example, plans to take an image and look for matches across a series of interlinked databases such as passports and driving licenses. It’s also, of course, what Facebook does when it tags people in the photos that you share.

Facial recognition is a lot trickier than detection, but last year Facebook announced that its DeepFace system (Don’t you love all these names?) had reached a level of accuracy similar to humans. Not to be outdone, of course, Google claimed its FaceNet technology had pushed the bar even higher, reporting an accuracy of over 99%. How this translates to real-world applications such as The Capability is not clear, but I think we can safely assume that the security agencies are heading down the same path.
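Under the hood, systems like FaceNet reduce each face to a numeric embedding, and recognition becomes a nearest-neighbour search over a gallery of known embeddings. A sketch of that matching step, assuming the embeddings have already been computed by some model (the threshold here is purely illustrative):

```python
import numpy as np

def nearest_match(query, gallery, threshold=0.6):
    """Return the name of the closest gallery face, or None if no match.

    `query` and the values in `gallery` are unit-length embedding vectors,
    assumed to come from a model like FaceNet. The threshold is illustrative.
    """
    best_name, best_score = None, threshold
    for name, vec in gallery.items():
        score = float(np.dot(query, vec))  # cosine similarity for unit vectors
        if score > best_score:
            best_name, best_score = name, score
    return best_name
```

The hard part, of course, is producing embeddings that place the same person close together; the matching itself is almost trivial.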

Machines have struggled to match humans in finding and recognizing faces because faces are so important in simply being human. Faces connect us to our social world. But as cultural institutions know, faces can also connect us through time. Looking into the eyes of a person, no matter how far removed through time, history, or culture, affects us. We do not merely see, we feel. And this, I think, is where The Real Face of White Australia gains its emotional power.

Last year, in another experiment, I started extracting faces from Trove’s millions of digitized newspaper articles. The images are of a much lower quality than the photographs, of course, but still…there’s that feeling of connection.

A grid of many faces cropped from old photographs

Facial detection is one example of a broader class of computer vision operations known as “feature detection.” You can train your computer to find all manner of patterns and shapes in your images. This includes things like cats and bananas, as well as the components of the face: eyes, nose, and mouth. So of course, one night I wondered: what would a newspaper discovery interface based on eyes look like?

A collection of images of eyes, some expanded to show the full face of the person

Eyes on the Past has been variously described as both beautiful and creepy, something of which I’m rather proud. It was another weekend project, an experimental intervention rather than a practical tool. By clicking on eyes and faces, you can find your way to newspaper articles, but that’s not really the point. I was hoping to say something about the fragility of our connection to the past. We glimpse past lives through tiny cracks in the walls of time. These moments may be fleeting, but they can also be full of meaning.

So I kept harvesting faces, and I’ve now got about 6,000 from the newspapers from the 1880s through to 1913. And if you’d like to play, the full dataset is available for download from the data-sharing site Figshare.

I also built my own face API to encourage further experimentation. While services like Face++ have APIs that take your face and pull it apart, mine just gives you random faces from the past. That’s all.

Most recently, I used my collection of faces to create a Twitter bot called The Vintage Face Depot. Tweet a picture of yourself to the bot, and it will send you back a new version of yourself in which your face is overlaid with a random visage from my extensive range of vintage faces.

Tweaking the transparency means that the faces start to blend. You are neither yourself nor them, but someone new. Each face replacement also comes gift-wrapped with a link to the original newspaper article. The Vintage Face Depot tells you nothing new about yourself. I built it about the same time as Microsoft launched their How Old bot, which uses machine learning to estimate your age. Face Depot does nothing clever, and yet sometimes the results are uncanny, even unsettling. Microsoft may be able to tell you how old you are, but Face Depot asks who you are, and pushes you in the direction of a past life linked merely through chance.
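The blending itself is simple alpha compositing: a weighted mix of the two images, pixel by pixel. A sketch of the idea, assuming the two faces have already been cropped to the same size (not the bot’s actual code):

```python
import numpy as np

def blend_faces(your_face, vintage_face, alpha=0.5):
    """Alpha-blend two equal-sized RGB images (uint8 arrays).

    alpha=0 returns the first face unchanged; alpha=1, the second.
    """
    mixed = (1 - alpha) * your_face.astype(float) + alpha * vintage_face.astype(float)
    return mixed.astype(np.uint8)
```

Somewhere around the midpoint, neither face dominates — which is exactly where the uncanny effect lives.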

Of course, the next obvious step is to feed the results of the Vintage Face Depot to the How Old bot, or indeed to Face++‘s API:

In a similar vein, the developer Kurt Kaiser has been pitting neural network against neural network, altering images of himself using Google’s Deep Dream and uploading them to Facebook for DeepFace to analyze and tag.

Digital artists like Adam Harvey have reverse-engineered facial detection algorithms to devise anti-surveillance fashion styles. He shows how you can use make-up to disrupt key regions of your face, such as where your nose and eyes intersect, and effectively render your face invisible. From face, to anti-face.

While none of these interventions provide a detailed critique of state surveillance, they do highlight the constructed nature of these technologies. By playing around with their parameters, we understand better how they work. (And that last sentence was modified to be more family-friendly.)

But what’s the role of cultural heritage organizations in all of this? Libraries are already leading the way in supporting online privacy. But leaving aside the whole living-in-a-surveillance-state thing for a moment, these technologies don’t just find faces, they reduce us to a set of external characteristics. We become what they can measure.

Researchers are currently investigating how facial detection systems can be used to identify depression. The aims are worthy, of course, but it’s not hard to imagine how, like 23andMe, such systems could be used to discriminate rather than support. Other studies have explored whether human observers can tell if you’re gay or prone to infidelity by looking at your face. Anyone remember phrenology?

With measurement comes the power to categorize and control. These are technologies that enable us to be judged at a distance, to be identified as a threat or a sales opportunity just by the way we look. Facial recognition takes this further. Not only can we be reduced to a set of externally-verifiable measurements, but these measurements are assumed to somehow constitute our identity.

So running Face++‘s API across a large photographic collection to identify, for example, pictures of women seems like it could be a really useful thing to do. But we also know that the male/female binary is hopelessly inadequate in describing who we are. And identifiers are not the same things as identities.

Cultural heritage data is gloriously messy. Even as we try to wrangle it to fit our systems, we recognize the resistance as something profoundly human. Against the power of surveillance, both for security and for sales, we have the opportunity, the obligation, to celebrate this complexity, to deny the meaning of measurement. You cannot know me from my face. My identity cannot be captured in your database.

Let’s use facial detection to enrich our metadata, but let’s also work with artists, developers, and activists to challenge the technology’s embedded assumptions about the perfect face.

Four years ago, I showed you our wall of faces. A few weeks ago I took those 7,000 photos and ran them through a program that averages facial features. I expected to be critical. I expected to be annoyed. But instead of seeing some algorithmically-generated nonsense, I just saw a person.
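At its simplest, averaging faces is just a pixel-wise mean across a stack of images. A sketch of the idea — real face-averaging programs also align the facial features first, which this skips:

```python
import numpy as np

def average_faces(faces):
    """Pixel-wise average of a list of equal-sized face images (uint8 arrays)."""
    stack = np.stack([f.astype(float) for f in faces])
    return stack.mean(axis=0).astype(np.uint8)
```

With thousands of aligned portraits, the individual features dissolve and a single composite face emerges.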

And there’s power in that.


Further Reference

Tim’s own posted transcript of this presentation. (Not discovered until nearly completed here.)
