Jessica Rajko: Hello everybody. Thank you for being here. My name is Jessica Rajko, and I am a dancer and a designer. I coexist across many spaces at ASU, including the School of Film, Dance, and Theater, the School of Arts, Media, and Engineering, and the Human Security Collaboratory, which I co-direct with two amazing women, Dr. Jacqui Wernimont and Dr. Marisa Duarte. With them, I conduct research that explores our physical interactions with technology and how those interactions affect our everyday ways of being in the world.
To explain a little bit why I’m interested in this area of research, I want to start by telling you a story about Google Glass. Google Glass came to market with great hype in 2013, but as you might remember, it failed to fully take off. Google expressed surprise when people began connecting its failure to surveillance concerns, arguing that these fears couldn’t really be about the device’s onboard camera, because there are many cameras in public and private spaces, and this is only one.
Now, logically…Google’s right, yeah? If we’re just looking at the cameras, and we’re just looking at this from a design perspective, then this doesn’t make rational sense. But let me reset the stage from the perspective of a socially conscious movement practitioner. The voyeuristic nature of this camera lies not just in the camera itself but in the way it is worn. It is a publicly recognizable camera, permanently facing outward, on a moving body. It sits at eye level, and it roves and seeks with its wearer, stopping to stare directly into your eyes during a conversation.
If you look at a Google Glass wearer, you do not see two eyes but three. And that third eye is unblinking, and it could be recording everything. It feels alive because it is connected to a living being. It cannot recede into the background. It cannot be put away. It…just…watches.
Google Glass rubbed up against a discomfort felt mostly by those who are proximate to its wearers, hence the emergence of the derogatory name “glasshole.” These negative sentiments are also likely a resurfacing of existing negative feelings toward video surveillance more broadly. Now this camera is not connected to a lamppost or a building; it is connected to a person. A person to whom I can voice my frustrations. A person we can kick out of a restaurant. A person we can define as a glasshole.
These are the seemingly subtle human-computer relationships that we are starting to study, partly because we’re interested in how this can help us design better technologies, but also because it helps us understand the very real and potentially harmful repercussions of our traditional design methods. Ask yourself: just because you choose to adopt a technology, does that mean that you trust, value, or condone it?
Oftentimes when something like Google Glass fails, the gut response is, “You know, people just aren’t ready for it yet.” But this is dangerous, because it assumes that the best plan of action is to facilitate, create, or wait for the right conditions in order to try again.
But what if we as designers took a step back and asked: what’s wrong with my process? Why are my designs creating fear, anxiety, and paranoia? And lastly, what if my practices and my own lived experience do not relate to those whom I intentionally or unintentionally design for?
Rather than begrudgingly pushing society forward to be ready, I ask designers to critically reflect on the limitations of their own design practices and to remember that to design for one intersection of society, namely affluent middle-to-upper-class white American men, does not mean that those designs will work for those who do not identify as such, even with modifications.
This is more than bringing the right people to the table. This is about changing who gets to make decisions, and how. Because we know that our design practices do not come from neutral or acultural places. They represent the implicit and explicit identities, values, histories, habits, and cultures of those who make them, which we know in the tech industry is predominantly heterosexual, cisgender men who identify with Western schools of thought. If we’re going to make change, we have to make room for change in our methodology. This does not mean that our existing practices and methodologies are bad or invalid. It just means that they, like all practices, have limitations.
To address these issues in my own work, I collaborate with other people who are interested in creating research from a multidimensional perspective of what it means to be human. In this, we celebrate a multiplicity of embodied identities, including queer, feminist, racially diverse, and differently-abled perspectives, because we think it’s important.
To summarize, our work is compassionate. We do our work with an ethos of care and joy. Our work is kinesthetic, which means that we foreground movement-based and bodily knowledge. And lastly, we see ourselves as disruptors. We do work to question, interrogate, investigate, and break apart structures that we see as needing radical change.
If we want our technologies to represent all of us in society, then we need to make room for radical change. This means doing wild things like inviting a socially conscious movement practitioner to the design table. Our technologies are too much a part of our everyday lives to have only certain sociocultural perspectives and certain practices embedded within them. So we need to make room for change. Thank you.