Jonathan Zong: Hi. My name is Jonathan Zong, and I’m currently a student at Princeton University in the computer science and visual arts departments.

I care about the Internet. After all, I’m…kinda on it a lot? And we’re all here because we want to make the Internet a better place to be. And as many of the talks today have shown, experimentation is a great tool for finding effective ways to improve the online experience.

And actually, experimentation is so commonplace on the Internet now that if you use a platform like Facebook, you’re probably part of many experiments all the time.

So, what happens when an experiment makes an intervention that’s too risky compared to the possible benefits? This happened, for example, in the Facebook mood study, where Facebook altered users’ news feeds to show them more happy or more sad content on average, and found that users were more likely to post happier or sadder things themselves depending on what they were shown. And a lot of people were pretty upset about this.

Consent & Accountability

So I bring up the Facebook example not to wag a finger at them but to highlight what we can learn from the public reaction to this. Experimentation that doesn’t have public accountability is risky, because there’s no shared consensus about what values are important for the research to uphold. So, in the case of Facebook’s study, the users weren’t informed beforehand that they were part of the experiment, and there was no other kind of external accountability to compensate for that.

So if you’re trying to do research like the kind that CivilServant does to improve online communities, how can you hold yourself accountable to the public? Because in online research, it’s not always possible to get the consent of everyone involved in an experiment beforehand. There might just be too many users in your community, or a variety of other reasons that make it impractical.

Well, I’m a researcher at a university, and what I would do is go and talk to an institutional review board, or IRB. Unlike at Facebook, researchers at universities are required, for ethical reasons, to have their research plans approved by an IRB before they can do the research.

And what the IRB would tell you is that if you’re going to forgo consent for an experiment, three conditions have to be true: the study must pose minimal risk, it must actually be impractical to obtain consent, and there must be a post-experiment debriefing. Debriefing is the process where, after the experiment is over, the users who were part of it are informed about the experiment and given information about what it was for, what data was collected… Debriefing serves an important ethical purpose by allowing people to ask questions or even opt out of the research. And successful debriefing can really empower people to make informed decisions about their involvement in research.

User interfaces for debriefing

So, what if we could design user interfaces that help us automate debriefing, reach the large numbers of people involved in these huge online experiments, and give them detailed information about their involvement in research?

This is a project that I’m taking on at Princeton, under the mentorship of Nathan Matias. Some of the ideas we’re testing out are, for example, generating tables that explicitly show participants what data we’re collecting; explaining our results visually, so that people can understand how the results affect them; and of course, providing controls for people to opt out of data collection and remove themselves from the research.
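To make that first idea concrete, here is a minimal sketch of automatically generating a “what we collected about you” table for a debriefing page. This is an illustration in Python, not the actual study code; the DebriefRecord structure, the field names, and the opt-out message are all assumptions.

```python
# A minimal sketch of a per-participant debriefing table, paired with
# an opt-out prompt. Illustrative only: the DebriefRecord fields and
# example values are hypothetical, not from the actual study.
from dataclasses import dataclass

@dataclass
class DebriefRecord:
    """One row of the table shown to a participant during debriefing."""
    field: str    # name of the collected data field
    value: str    # the participant's own value, shown for transparency
    purpose: str  # why the study needed this field

def render_debrief_table(records: list[DebriefRecord]) -> str:
    """Render the records as a plain-text table for a debriefing message."""
    header = f"{'Data collected':<18} {'Your value':<14} Why we collected it"
    rows = [f"{r.field:<18} {r.value:<14} {r.purpose}" for r in records]
    return "\n".join([header, "-" * len(header), *rows])

if __name__ == "__main__":
    records = [
        DebriefRecord("account age", "3 years", "to group similar accounts"),
        DebriefRecord("posts per week", "12", "to measure activity changes"),
    ]
    print(render_debrief_table(records))
    # An opt-out control would pair this table with an action that
    # removes the participant's rows from the study dataset.
    print("\nTo opt out and remove your data, reply STOP to this message.")
```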

Testing a debrief interface

So we’re currently running a study to test these ideas about automated debriefing. With the intent of supporting Merry and Jon’s work on copyright enforcement, we’re asking users on Twitter who received copyright notices to imagine a hypothetical scenario in which they were involved in experiments and presented with this debriefing interface, and to give feedback on whether they would find the interface helpful.

And what we learn from this will not only tell us more about debriefing but also allow us to do things like forecast opt-out rates for similar groups of people in the future, and more generally get a better sense of people’s attitudes toward research. This presents an alternative to Facebook’s procedure in the mood study, by setting up a public accountability structure based around a conversation about acceptable risks.
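As a hedged illustration of the forecasting idea: one simple approach is to treat opt-outs among debriefed participants as binomial draws and compute a confidence interval for the underlying rate. The numbers below and the choice of a Wilson score interval are assumptions for the sketch, not the study’s actual method or data.

```python
# A sketch of forecasting an opt-out rate from pilot debriefing data.
# Hypothetical numbers; not the study's actual analysis or results.
import math

def wilson_interval(opt_outs: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson score interval for the opt-out proportion."""
    p = opt_outs / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

# Suppose 14 of 200 debriefed participants opted out (made-up numbers).
lo, hi = wilson_interval(14, 200)
print(f"Forecast opt-out rate: {14/200:.1%} (95% CI {lo:.1%} to {hi:.1%})")
```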

So I want to end on the thought that research ethics isn’t just a matter of compliance; it’s actually essential to why we’re doing the research in the first place. Because we want to make the Internet a more positive presence in people’s lives. And together we can set common procedures and shared norms that help us do just that. Thank you.

