J. Nathan Matias: I’m Nathan Matias, founder of CivilServant. And we’re here today to talk about the Internet. 

Firefighters extinguishing a dumpster fire at Altus AFB

Now, in the room today are some truly remarkable people. And I want you to know that in this slide there is more than just a dumpster fire. There are also people in suits who are trained and dedicated to managing that fire. And many of those people, when it comes to the Internet, are here in the room. You’re people who’ve been creating and maintaining our digitally-connected spaces for decades. You’re community moderators, researchers, facilitators, engineers, educators, creators, bystanders, and protectors. People who look beyond the dumpster fire of whatever panic we have at the moment, recognize that there’s something deeply important worth saving, worth growing, and step in. 

And in fact, there are a lot of us. Even as companies are expected to do something about online risks, far more of us take action than I think many of us recognize. According to a study by the Data & Society Research Institute, around 46% of American Internet users have taken some kind of action to intervene on the issue of online harassment. That’s almost 100 million people. 
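As a rough sanity check on that scale: the talk itself gives no base figure, but if we assume roughly 215 million American Internet users at the time (an assumption, not a number from the talk), the arithmetic works out:

$$0.46 \times 215\,\text{million} \approx 99\,\text{million people}$$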

Despite this, we do live in pessimistic times, as society confronts the risks that come with living in a world that is inescapably digitally connected. And so, with many things competing for our fears, it’s easy to become paralyzed or uncertain about what to worry about most and what to do about those problems. Is disinformation really unstoppable? Are our minds really being hijacked? And when we worry about our hopes for society slipping away, how can we meaningfully move from fear and uncertainty to do something that we’re confident can make a difference?

I imagine that’s how Wikipedians must’ve felt in 2007, after a year of remarkable growth that more than doubled the number of active contributors to this incredibly precious global resource. 

Here’s the story as told by the researchers Aaron Halfaker, Stuart Geiger, and their colleagues about a moment of crisis that they experienced. 

So, eleven years ago the Wikipedia community was just starting to reckon with the scale and risks of vandalism on the site as it became more and more relied on by people around the world. Wikipedia had been sued for false information that was posted to this encyclopedia that anyone can edit. And one person was even detained by Canadian border agents after someone added false information to their bio, claiming that they were a terrorist. 

Now, Wikipedians learned, like so many other large online communities, that their collective endeavor had made them a basic part of millions of people’s lives, with all of the accompanying mess, conflict, and risk. And like many platforms, the community built social structures and software to manage those problems. They created automated moderators like the neural network ClueBot, which detects vandalism, and moderation software that allowed people to train and oversee those bots. Thanks to these initiatives, Wikipedians were able to protect their community, cutting in half the amount of time that vandalism stayed on Wikipedia’s pages. It’s now impossible to maintain Wikipedia without these AI systems. 
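To make that division of labor concrete, here is a minimal sketch of the bot-plus-human-oversight pattern described above. ClueBot’s actual model, features, and thresholds are not part of this talk, so every feature, weight, and cutoff below is an illustrative assumption, not the real system.

```python
# A toy sketch of automated moderation with human oversight.
# Everything here (features, weights, thresholds) is an illustrative
# assumption, not ClueBot's actual design.

from dataclasses import dataclass


@dataclass
class Edit:
    editor: str
    added_text: str
    is_new_account: bool


def vandalism_score(edit: Edit) -> float:
    """Stand-in for a trained classifier: returns a score in [0, 1]."""
    score = 0.0
    if edit.added_text.isupper():    # all-caps additions look suspicious
        score += 0.4
    if "!!!" in edit.added_text:     # repeated punctuation
        score += 0.3
    if edit.is_new_account:          # newcomers get less benefit of the doubt
        score += 0.2
    return min(score, 1.0)


AUTO_REVERT = 0.8   # above this, the bot reverts on its own
NEEDS_REVIEW = 0.4  # between the thresholds, a human moderator decides


def triage(edit: Edit) -> str:
    """Route an edit to the bot, the human review queue, or acceptance."""
    score = vandalism_score(edit)
    if score >= AUTO_REVERT:
        return "auto-revert"
    if score >= NEEDS_REVIEW:
        return "human-review-queue"
    return "accept"


if __name__ == "__main__":
    print(triage(Edit("newbie42", "BUY PILLS NOW!!!", is_new_account=True)))
    # -> auto-revert
    print(triage(Edit("veteran7", "Added a citation for the 2006 lawsuit.",
                      is_new_account=False)))
    # -> accept
```

Note the `is_new_account` heuristic: penalizing newcomers is a plausible way a classifier of this kind trades recall for precision, and it foreshadows exactly the side-effect described next.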

Unfortunately, this amazingly creative, successful effort had some side-effects. Soon after introducing automated moderation, Wikipedia participation went from massive growth into a slow decline. In a paper that appeared six years later, Aaron, Stuart, and their collaborators noticed that this dramatic change coincided with the use of these scaled moderation systems. 

They were able to show that up to 30% of the edits removed by these bots came from newcomers who, they estimated, were actually good-faith contributors, people who might otherwise have brought their unique perspectives to this amazing shared resource of human knowledge. 

Screenshot of the WikiGalaxy project

You know, Wikipedia’s one of the great treasures of our times, and to save it Wikipedians organized thousands of people, in collaboration with AI systems, in what has got to be one of the largest volunteer endeavors in history to fight misinformation. Each of us relies on their endeavor every single time we use Wikipedia. And yet, the long-term behavioral side-effects went undetected for years. 

Today’s community research summit is the first public event for CivilServant, a project that started here at the MIT Media Lab and the Center for Civic Media and is now incubated by Global Voices. We organize the public so that as we work together for a fairer, safer, more understanding Internet, we can also discover together what actually works and spot any side-effects of the powerful technologies and human processes we bring to bear. 

To open up today we’ll hear from Tarleton Gillespie and Latanya Sweeney, who will set the stage for thinking about how people’s behavior is governed online, and why it’s important to hold the technology, and the companies behind it, accountable for the power they have in society. 

We’ll also hear from Ethan Zuckerman and Karrie Karahalios about ways that people organize to understand systems and create change, and some of the barriers we may need to overcome as we try to make sense of the problems, the injustices, the mess of a complex world, and try to create more understanding societies online. 

After a break, we’ll show you the mission of CivilServant through lightning talks from communities that have already done research with us and organizations that are part of our mission. 

We’ll then have time to hear from a number of other projects that are very much peers to what we do, and hear about the amazing work that they’re doing. 

When all of that ends, we’ll also have an hour for all of you to have some refreshments and meet each other. Because as much as we are taking the opportunity to share what we’ve done together, this room has an amazing collection of people with incredible perspectives, wisdom, and capabilities that will help us think about and shape the future of a fairer, safer, more understanding Internet. 

Today’s summit is made possible through funding from the Ethics and Governance of AI Fund, the Knight Foundation, and the MacArthur Foundation, and support from the Tow Center for Digital Journalism. Today’s event is public, and all of these talks will be filmed and put on YouTube. You should feel free to tweet about and share what you see on this stage. And we encourage you, with people’s consent, to share what you heard and attribute what people tell you. But make sure to check with someone before you share their face, their name, or their identity otherwise. Some people in this room, because of the nature of the work they do, have certain preferences for how they manage their identity. So we ask that you respect people’s name badges and proactively ask for consent before sharing.