Johanna Hedva: One of the things that was not included in my bio was that I did my undergraduate degree in this department, the Design Media Arts Department, many years ago. And I have to just start this conversation by saying I am so moved that this conversation's even happening in this space, in this department. It was not at all happening when I was a student here, and I'm really just so thrilled, like really emotional about it that we are having political conversations around our work as graphic designers, media artists, artists, cultural producers, in the 21st century in a digital world.

One of the things that just immediately struck me about all three presentations is the role of design and just the visual semiotics of these things. For example, Safiya talking about these web sites that Dylann Roof would go to, and how they looked, how they spoke the visual language of a neutral news site or a reputable journalism site, something like this. That's a graphic design issue, right, media design. And the visuality thread is seen in all of these: how representation is working here at a very basic formal level.

So one of the things that it makes me think about was, in my experience as an undergraduate, my mentor was Willem Henri Lucas, who is a very political designer, and we would often talk about what the politics were that were embedded in certain choices we were making as designers. For example typography choice is a political one. Who made it? Did you have to pay for it? Is it a corporate font? Is it used by corporations, so that it looks corporate? Are there fonts that you use in journalism sites that bring you a sense of legitimacy as a news site?

And I think that our generation is so well-equipped to be able to make media and make our own images on the Web. And I think that we can be very savvy about how those get deployed. And in the case of the Council for Conservative Citizens, they probably had a good sense of what kind of graphic design decisions they needed to make in order to seem neutral.

So I was wondering if maybe we could start by talking about how structures of oppression and domination that are in place in the real world get replicated online, and how that often happens visually, through just very basic visual cues. Fonts, like if it's in bold, that signals something different than if it's blinking yellow.

So maybe we could talk about the visual literacy, media literacy, that's inherent in digital practices, especially in the 21st century, especially for Millennials, the generation that grew up being very comfortable making media online.

Safiya Noble: I'm happy to jump in on that. I think it's a great prompt. One of the things that I like to talk about as well in this kind of issue, particularly with the example of Google, is that what we have is kind of an aesthetic of a white page, a blank white slate with an empty box that communicates simplicity. It's simple to put in a query, and it's simple to get back an answer. And yet the kinds of questions that often get put into a search engine are highly complicated, nuanced, contested kinds of ideas or concepts; that simplicity is actually not the right approach.

So we have a socialization, and I certainly see this myself now in this role teaching, kind of an expectation of a simple and immediate answer. An instantaneous "put in, get out" kind of aesthetic, and that that translates to how knowledge is produced. And of course we know that is not how knowledge is produced. People go to war over knowledge, right? So I think there's something that we have to be incredibly careful about. I talk in my work about what would it mean, for example, to make knowledge transparent and to see all of the complexities of it, and then have to make choices within that complexity rather than an aesthetic of whitespace and nothingness that communicates something. And it's a falsehood.

So danah boyd for example did studies on teenagers who moved from MySpace to Facebook. And what she found in her study [PDF] is that high school students who were still on MySpace were typically marginalized students. They weren't jocks, they weren't cheerleaders, they were kind of alternative, whatever that means, and also people of color I guess were alternative. (Aren't we?) And part of what students reported, especially in a 21st-century era of colorblind ideology, was that students didn't want to name in an explicit way how they felt about their peers. They would say things like, "Well, Facebook is cleaner. It has cleaner design. It's not so messy." And these were code words, she found, for the incredible hyperindividuality that you could express on MySpace—you got the marching ants or whatever code in there—those were signifiers of difference. In a space like Facebook, which was made as an Ivy League social networking platform, where you have a high degree of homogeneity in that kind of university environment, the aesthetic of clean lines, of not pointing to difference and individualism, was actually valued. But it also was loaded with these kinds of class and racial markers.

So I think these are the things that— Go read her work, because it's really powerful. And what happened as students left MySpace and went to Facebook is that we lost a whole generation, of girls especially, who were learning how to code. They could do basic HTML kinds of work by personalizing their MySpace pages. You guys probably weren't even born when everybody was on MySpace. So what did we lose, also, in these kinds of design choices and moves, and in the discourse and rhetorics that we used around them? So I think these are absolutely design questions and issues, and they're explicitly political and loaded with these racial, class, and gender dynamics.

Hedva: I just want to offer a rejoinder before you all jump in. I think always about—in the arts, which is the world that I ended up moving in more than the design world, there's this word that gets tossed around a lot which is very funny to me, which is "site-specific." Like as though there is a thing that's not that, right? Like this idea of space (and artists talk a lot about space) as though it's this neutral, empty, ahistoric location. Like, what about place? Place implies there's a cost of rent, there are neighbors, there are rules, there are cracks in the walls, there's street noise. There are all of these things that are part of that location.

And I think in digital "space," which we often talk about, those assumptions are carried over again. Like this idea that the cleanest digital design solution is the best is an implicit value. There are always these redesigns of social media platforms that are done, according to the company, to make things cleaner, which I just think is so funny, that that's the implicit value that we all now in this world want. Like, oh I need to clean up my Facebook feed, I need to clean up my Twitter feed. This kind of cleanliness…very strange to me.

Casey Reas: Before MySpace came to the world, "everybody" was not everybody at all. The few people who had a presence online were doing everything entirely from scratch, learning from other people by looking at what they had done, but everyone was able to have an entirely unique presence; the system allowed for it. MySpace was somewhere in between there and what Facebook is now. Twitter has largely proliferated because it's so easy to use, but we're all within the straitjacket of that character count, and it seems like we're increasingly putting ourselves into these narrow containers through which we can express ourselves. Facebook is changing a little bit now, but the only affect you can apply to something is to Like it. You have no other option. I find that just to be very interesting.

Noble: I don't want to dominate. I'm going to just say really quickly that I think you're right. What has happened for those of us who remember pre-digital, pre-Internet, and kind of came of age on the Web (which would be my generation), there was certainly… The commercial Web environment was kind of late to the game. It took a while. I remember when some teenager owned AT&T dot com, and then they were like, "Oh my god, we gotta get that domain!" So that was happening all over corporate America. Everybody owned something and was subverting it. So you're right. I mean, what's now totally normalized is that we function within these corporate containers, with very little recognition of, for example, our digital labor, that we ourselves as an audience are the commodity, right? (So, Dallas Smythe and the audience commodity.) We have to understand that these containers in many ways shape our behavior and socialize a particular kind of normalcy. And it's incredibly difficult to intervene upon it.

Marika Cifor: There's now interesting activism that comes specifically from those constraints. There's the example of an activist getting Nike to customize his sneakers with the word "sweatshop" on them, which they didn't want to do. So it's kind of using the constraints as a way to speak back to that corporate system in interesting ways.

An Xiao Mina: I think what's interesting also is how, in these spaces and within these constraints, people are finding ways to critique other forms of media. Two specific examples that I can think of recently are the hashtag #IfTheyGunnedMeDown, where young men of color, especially black men, were using Instagram or other forms of social media to show side by side how they choose to depict themselves versus how mainstream media might depict them if they were gunned down. And I thought that was very powerful and provocative, because it was a critique of broadcast media.

And then you see this in a place like China, where you'll see propaganda memes. People will take classical Chinese propaganda and then remix it, challenging the traditional broadcast media that previously most people just had to inherit and could remix maybe on a small scale, locally, with posters. But the ability to distribute that through these networks becomes very powerful. And it seems like there's this opportunity to make this critical discourse in media more public, and that critical discourse needs to go back onto the media itself that people are using. In using and finding so much power through Instagram and Twitter to critique broadcast media, we're forgetting that those platforms are themselves media, that they are themselves a form of mediation, and that they have their own biases that need to be critically examined. And it'd be interesting (I'm kind of thinking of this snake eating its tail) to consider that kind of critique of the very platform on which you're doing your critique: what that might look like, and how we can shape a public discourse around it.

Hedva: An, one of the things I was thinking about when you were talking was how imperialism has now begun to function within languages on the Internet, and how we can see— Well, you know that first map, where the languages just redraw geopolitical lines. I'm wondering if you can talk a little bit about… There's a lot of discourse in postcolonial and decolonial thought around how the colonizers' theft of indigenous language is a function of imperialism. All of the literature that I've read about it is pre-Internet so far. And I feel like this is the first time I've started to think about how languages are still in that process, but now in a digital way. So I'm wondering if you could talk a little bit about the relationship between imperial/colonial histories and methods, and language.

Mina: Sure. That's a great question. I think the history of erasing languages is very much a colonial artifact. And it's often invisible to the next generation, I think especially in the United States. My family's Filipino-Chinese. I grew up speaking only English. It just takes one generation for access to one's native tongue to vanish and disappear. And certainly this has a very long history. And it's interesting because the very platforms that are disseminating across the world are largely being designed and developed in one part of the world, which happens to be where I live. And there's an inherent bias there: one region of the world, which is just one perspective, then places pressure on other people to have to learn its language.

And I think about these language biases as being a full-stack problem. In technology design we talk about the full stack, all the way down to the very code. There's so much talk about the open source software movement and the ability to create and shape your software and your media environment, but that code, the human-facing part of it, is in English. And it's simple phrases. Yes, you can learn those phrases, but imagine trying to learn code in a language that you don't speak: suddenly you're having to learn two languages, the programming language and then the language in which the programming language is expressed.
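Mina's point about the human-facing layer of code being English can be made concrete with a small sketch. In Python, for instance, every reserved word is an English word, and while Python 3 does allow non-ASCII identifiers, the keywords themselves cannot be translated:

```python
import keyword

# Python's reserved words are all English words ("if", "while", "return"...).
# A speaker of another language must learn this English vocabulary on top of
# the programming language itself.
english_keywords = keyword.kwlist
print(english_keywords[:5])

# Python 3 permits non-ASCII identifiers, but the keywords stay English:
数字 = 42          # a variable named with Chinese characters ("number")
if 数字 > 0:       # "if" itself cannot be written in any other language
    print(数字)
```

The same holds for C, Java, JavaScript, and most mainstream languages: identifiers can sometimes be localized, the grammar never can.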

And then it moves up to script and input pressures. The ability to input Arabic on a mobile phone was until recently severely limited, and Arabic speakers literally had to use Latin characters and numerals to express their language online. Which was incredibly creative; there are these creative workarounds for the language. But as a result they had to use Latin script and basically erase their own script from the Internet until input systems improved.
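The workaround Mina describes is often called the Arabic chat alphabet, or Arabizi: Arabic letters with no Latin equivalent are written with digits that visually resemble them. The sketch below shows only a small, illustrative subset; real conventions vary widely by region and speaker:

```python
# A small, illustrative subset of Arabizi conventions, where digits stand in
# for Arabic letters they visually resemble. Real usage varies by region;
# this mapping is only a sketch.
ARABIZI_DIGITS = {
    "2": "ء",  # hamza (glottal stop)
    "3": "ع",  # ayn
    "7": "ح",  # ha
    "5": "خ",  # kha (also written "kh")
}

def describe(word: str) -> list[str]:
    """List the Arabic letters a chat-alphabet word borrows digits for."""
    return [ARABIZI_DIGITS[ch] for ch in word if ch in ARABIZI_DIGITS]

# "7abibi" (habibi, "my dear") uses the digit 7 for the letter ح
print(describe("7abibi"))
```

The ingenuity is real, but so is the cost: the Arabic script itself disappears from the text entirely.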

Then it goes up from there into content and how, if you want to know what's going on in the world, if you want to have access to… Here I used the Wikipedia example, but another example would be Stack Overflow. It's such an important place for people to learn how to build the software around them, and if that knowledge is only available in English and Portuguese right now, the pressure to have to learn those languages increases substantially.

And then all the way up to typography. We were talking earlier about the political decisions around typography. In languages that use Latin letters, you have a wide variety of typography and fonts that you can use, and if you have that kind of critical knowledge about the implications of all these fonts, you can really make important design decisions. But if you have access to only one or two fonts, suddenly your ability to create a space around the very content and the sites that you're trying to create again becomes limited, and you're inheriting someone else's designs around your typography.

So these biases and these pressures exist across the board. And the very pressure to join a major network like Facebook or Twitter then pressures you to have to learn the languages in which they operate. So these kinds of colonial pressures on language are very much reproduced.

Hedva: It's like code-switching, but for cultural things. I was just having a conversation with my friend who's Polish the other day, and he was saying that because of globalization and imperialism he can talk to me about The Simpsons or something, but I won't have any idea about the Polish TV shows that he grew up watching. And I think that this kind of pernicious cultural pervasiveness of English, now considered to be the universal language, is really getting replicated exponentially in the digital space.

Mina: And even within English-speaking cultures, there's a general valuation of whose English is more important. Certainly when we're talking about the English-speaking Internet, people are thinking United States, Canada, United Kingdom, and maybe Australia. But in many other countries people do speak English, and even then their voices and their perspectives are often devalued and underserved. So in addition to building voice, I think we need a culture of also building better tools for listening, and these kinds of pipelines for valuing these new voices online. Because you can shout all you want and have a new voice, but if there's not enough listening, those connections aren't made.

Noble: I want to add too that I think there's an economic dimension, especially in Silicon Valley and San Francisco, where truly it's a closed community of mostly men, with well-documented racism, sexism, and exclusion, particularly in venture capital funding. So the paradigm around design happens in the context of commodity. What can be commodified, sold, packaged, made to profit: that's a driving paradigm, a design paradigm I would argue, in Silicon Valley that really affects everyone else. If you don't have millions of dollars, you actually can't implement a lot of these kinds of projects that we might offer up as critiques or subversions or some other kind of alternative.

So the opting out and going to other things: like, what other things? How can those things be created when they're happening in the context of the hyper-capitalism of Silicon Valley? And of course the bias that we don't see is that the incredible profit margins around platforms and technology designs coming out of the Valley are in direct relationship to our economic policies of globalization. The manufacturing jobs are offshored to the places where children can make those technologies or people can live in dorms; we've read the Foxconn stories. And in the Congo, for example, people who are doing the mining for coltan are living in the most extreme conditions of sexual violence, rape, and assault in the world, according to the United Nations. So those things get outsourced. What gets outsourced in the design of our projects is really important as a humanitarian issue.

One thing that we don't talk about, that I'm starting to take up in my own work, is the sustainability of these technologies as well. We have a serious design crisis around information and communication technologies and their contribution to global warming. We just saw the latest reports, coming out now after the Paris attacks and just prior to them, showing that climate change is directly implicated in the destabilization of many nations, where we are starting to see tremendous consequences as a result of that destabilization. And we're implicated, again, in these things while we're fetishizing Google and Facebook. So I think that we need a radical design change. If I were teaching an HCI class or a design class with you, I might say, "How are you going to design this so that not one life is lost?" What if that were the design imperative, rather than what your IPO is going to be?

So it's the paradigms within which we're designing that are really important, too.

Reas: Safiya, I have two follow-ups to that. One is the statement that you made that people seem to trust search engines. So my question is, why do you think people trust search engines? But that leads to the next question, which is this idea of proprietary and corporate data versus open systems, public systems. Why do you think that all of our data is within these proprietary algorithms and ways of storing, rather than in things that are more open, more public?

Noble: These are great questions. Part of the reason we trust search engines is because if we need to find out what time Starbucks closes, they give us the answer, right? So that's crucial information that some people need. If you need to know what time your class starts, you've got to go online and get the schedule because it's the first week of class, the room, the map… There are kinds of banal types of information that search engines are excellent at indexing and then providing for us. And that reinforces our trust in them.

When we start to put complex ideas into these environments, other things happen, and that's where we lose our sense of what's appropriate to put in and what's not. And of course any of us up here who teach (this is a tip for anybody here who's an undergrad), your professors see it when your papers come in and you're citing things that make no sense to a faculty member as a legitimate citation. There are certain things where you must have Googled that, because there's nothing scholarly about it, and that's a no. You can't go on that. But you don't know, right? Because you can trust it for all these other types of information searches. So that's part of the challenge.

I think in our field, we might say that librarians and information professionals have really been focused on scholarly information and scholarly knowledge, a different type of curated and vetted information. And we have not put a lot of our attention as a field, as a practice, on the broader indexing and curating of the web. I'm certainly trying to implore people who do that type of work to consider that that might be an alternative way for us to think about it. What if we had a search engine, I often say, that was curated by all the research universities in the world? Not just in the United States, but in the world.

There are so many forms of knowledge, for example, that we don't even have access to because of language. Brilliant thinking that's happening in other parts of the world is really highly inaccessible. And these are the kinds of challenges that I think we're facing in the Library and Information Science and Information Studies fields. Access to knowledge has always been a key driver, for millennia, for the people who keep the record and the knowledge. But in the commercial spaces, which is where the majority of people live on the web, that has not been in the bullseye. And I think that's part of the barrier that we have to work through.

Maybe some of you will come do graduate work in Information Studies and will help us think through these, because they are design issues, too.

Hedva: Marika, do you want to jump in?

Cifor: I think it's also that they're not a thing we're taught to read critically. They seem very neutral and objective, even in the way the language there is used, right? It's the things that pop up with the highest relevancy to the top of your Google search, for example. And we pick on Google a lot, but that's because Google is more than 70% of the market share for searches in the United States, so it matters. It is a target for a particular reason. But the design itself is intended, of course (to come back to that), to look neutral, and unless people are taught to read it critically the way they're taught to read other information critically, why would they?

Noble: Also, Google, or search engines, are a ranking format. And the always-already way of thinking about ranking, especially in the context of the West or the United States, is what? "We're #1," right? If it's first, it's the best. If it's on page ten thousand whatever, we don't even care. The paradigm of ranking is actually the design driver. So that has an incredible impact on why we believe it: if it's first, it must be right.

Why porn? Up until I wrote that Bitch magazine article… About six months after I wrote that article, they changed the algorithm, and black girls don't get pornified as badly anymore. (I can't say it's the article, but I can't say it's not.) All I'm saying is that the industry with the most political capital and economic capital on the Web is the porn industry. Are you kidding? We wouldn't have credit card processing and video and audio if the porn industry hadn't put a ton of money behind those so that they could sell their products. So again, we don't really think about these other economic and design drivers that are providing the context for how we receive information.

Mina: It's got me thinking, in terms of design and design provocations, that there's this fetishization around intuitive design and simple design, and how those can be extremely dangerous, because intuitions are founded on assumptions. If you are designing to make things as simple and as clickable as possible, without forcing people to have to think through what clicking means, what searching means, then this whole ethos of not making the user think is a great way to hide the implicit and explicit biases that these systems contain. So it would be interesting, I think, to rethink a friction in design that would show what is happening behind this simple little box. What is this algorithm doing? How is it floating up things that are relevant? And how are your search history and your location impacting these results? You can imagine a Google that makes it difficult for you. Instead of "I'm feeling lucky," it's "I have to work for it," or something. And what could that look like? I don't know, but it's interesting to think about a kind of contraintuitive design that can raise these questions in interesting ways.
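One way to picture the friction Mina proposes: instead of returning a bare ranked list, a search interface could surface the signals behind each ranking. The sketch below is purely hypothetical; the result URLs, signal names, and weights are invented for illustration, and no real engine exposes its ranking this way:

```python
from dataclasses import dataclass, field

@dataclass
class Result:
    url: str
    # Hypothetical ranking signals and weights, invented for illustration.
    signals: dict[str, float] = field(default_factory=dict)

    def score(self) -> float:
        return sum(self.signals.values())

    def explain(self) -> str:
        """Expose, rather than hide, why this result ranks where it does."""
        parts = ", ".join(f"{name}={w:.2f}" for name, w in self.signals.items())
        return f"{self.url} (score {self.score():.2f}: {parts})"

results = [
    Result("example.com/a", {"ad_spend": 0.6, "popularity": 0.3}),
    Result("example.org/b", {"relevance": 0.5, "recency": 0.2}),
]

# A "frictionful" results page: still ranked, but with the reasons visible,
# so the user has to confront what is actually floating a result to the top.
for r in sorted(results, key=Result.score, reverse=True):
    print(r.explain())
```

The point of the sketch is only the interaction idea: the same ranked list, annotated with the signals that produced the ranking, turns an invisible editorial decision into something a user can question.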

Reas: I'll ask An a quick question related to what you were discussing. What if we assume that text is no longer the standard? I mean, text was necessary at one time because of how slowly bits would travel, but now we can work with different kinds of media. Yesterday we were talking a little bit about languages that aren't character-based or don't really have a written form, and how working with audio directly can enable different kinds of conversations. So I was wondering, what is the state of the art there, or where do you imagine that going, to allow other voices to come into play?

Mina: I think the most interesting example to me is the WeChat one, which I showed and you saw in that picture of Leo Messi. I didn't get to talk about it (I was rushing through), but that interaction of him holding the phone like this [holds phone up to mouth] is him pressing a button and speaking into WeChat, which is a Chinese social network that started out a little like WhatsApp but is a lot more robust now. He's speaking into the mic, and then he releases the button, and that sends an audio message to the recipient. It's not a transcribed audio message; it is just the audio message. WeChat was sort of a pioneer in that interaction. We now have it in iMessage and WhatsApp.

Part of the reason this became very popular is, again, the difficulty of inputting Chinese on a mobile phone. Chinese being a non-alphabetic language, there are many forms of input, and it's interesting that the keyboard kind of stepped away: the simple idea of even using a keyboard to input your language moved into just pressing a button to actually speak it. And that, I think, is one great example of how audio can change an interaction, both for input and also for listening.

Then you see this with e-reader apps, Instapaper or Kindle. And we were talking about this yesterday, that this is not just an issue of language, it's also about accessibility and also literacy. That the ability to just press a button and listen to what's there, to listen to the text or simply bypass text altogether and listen to a podcast. The rise of podcasts is an interesting example again of moving away from text as the primary mode of interaction with the Web and moving into audio and potentially video.

And I think this is going to be an urgent need as the next billion people come online, many from languages that have no written form: the ability to interact with information and to share that information with their communities. Again, these biases force oral-language communities to have to write down their language. The majority of languages in the world have no written form; the majority of languages in the world are oral. So rethinking the very interactions, and focusing on audio and potentially video as technologies allow for this, I think can be quite powerful and also just necessary. It's an important way of preserving culture and of not continuing the cultural imperialism that tech implicitly perpetuates.

Hedva: I just have to…as the queer performance artist, from my position I have to talk about the body. So I wanted to talk to Marika about this question of embodiment on the Internet, and what I think you called the Internet's intervention into the provenance of bodies, and how that functions. I think it can also be related to this idea of moving away from text, because if we think about differently-abled folks who might interact with information from a different place than reading text, right, I think that… I don't know. I'm in a very ambivalent position about how the body can exist in a virtual space, and I just rewatched Johnny Mnemonic… See, nobody knows what this movie is.

So this was this cyberpunk vision in the 90s of what virtual reality would be. It's Keanu Reeves and one of the most…the best Keanu Reeves role. And the idea is that he can carry data in his brain. He can carry forty gigabytes [laughter] and so he's on the black market as a data carrier across international political lines, and he has to upload it into his brain. And something went wrong and he now has eighty gigabytes and so, you know.

But the implication of the body in these earlier cyberpunk films and novels, of course, was that the body would somehow kind of disintegrate into technology in a very horrific, dystopian way. He keeps saying, "I need to get online," because he needs to empty his head. Sorry for the detour but it seems pertinent.

So I'm very interested in how the body gets represented and archived and replicated and erased and all of these things. I'm wondering if you could talk a bit about that in its relationship to the Internet, the digital, virtual, space.

Cifor: I think that's an interesting and important question. There's much written, especially I think in the early 2000s, about the potential of the Internet as a space where you wouldn't be tied to your body, a kind of freeing potential to live in a world where you could evade race and class and gender and all of the things that constrain our physical bodies, as well as issues of ability and other constraints of the physical body.

What's actually happened, of course, is… that hasn't happened. People's avatars tend to represent things about their own experience. We know that all kinds of things that happen to physical bodies in the physical world happen in the digital as well. It perhaps frees up certain kinds of harassment to happen to a greater extent if there's the potential of anonymity; apparently, when given anonymity, we get even more racist and sexist and classist than we feel free to be when we actually have to look at someone.

So some of that potential for the body to disappear just… We still interact with the digital world constrained by the same paradigms in which we live. It's hard to think outside of the constraints of race, class, and gender, even if you're in a world where maybe you can make your avatar look like whatever you want it to look like. We are still constrained by thinking in ways that are formed in another world. And I think it's part of that temptation to draw a line between the physical and the digital as two separate worlds, when in fact I don't see them as separate worlds. Just as there's racism, classism, and sexism in the real world, there's racism, classism, and sexism in the digital world. Some of that potential for the body to disappear hasn't been realized, and of course the constraints of physical bodies, when we're talking especially about issues of accessibility, still apply in a digital world, depending on your physical limitations or potentials; the Internet is shaped in that way, too. So I think a lot of that kind of utopic potential for the body to disappear just is not realistic, and is perhaps not even a thing we want.

I'm also really fascinated by the way in which new identities are formed by the digital, and the way in which those identities are highly gendered. Think about the categories of people that didn't exist before, like the brogrammer, and the proliferation of those images. Those are highly gendered images of who is creating the digital world. And it's not just that one; all of those images are highly gendered. So I don't think we have escaped the body in any way.

Noble: I want to add to that. I think we could also ask to what degree artificial intelligence promotes an erasure of our humanity, or we could call it the body, or a human experience. Those of us who study algorithms critically talk about the ways in which human beings who now engage with algorithms in everyday life are more likely to trust the algorithm than to trust previous forms of knowledge. Tarleton Gillespie writes about how algorithms have become so fundamental to the human experience that they've started to replace common sense, credentialed experts, the scientific method, and even the word of God. This is what he says. And I think this kind of disappearance of the body, brought about, we could argue, by our increasing reliance upon artificial intelligence, is something that's really important and worth examining and thinking about into the future.

So what would it mean… And you hear people already, in such common everyday ways, talk about this. I heard it reported that the provost of a big university once said, "What do we need the library for when we have Google?" So you have there an interesting notion of human knowledge and its complexities, and even of the process… You know if you've ever had to go to the library to actually do research…which I'm not even going to ask for a show of hands on, because I already know the number is low. So what does it mean, again, that we trust an algorithm to give us a decision, or give us knowledge, or to have already curated or done the hard thinking for us? And there are people writing in controversial ways about this, like Nicholas Carr, who talks about the neurological effects of how our brains are changed by this kind of instant gratification: instant, always-on, uncritical…our loss of the ability to think.

And so I think this could be a very important aspect of rethinking what it means to think about embodiment, that we trust our technologies to embody a level of our humanity that's better than our own humanity. And I think that's incredibly complicated.

Mina: And I think there is this opportunity, instead of erasing the body, to make the body even more visible and more nuanced. I was just reading an article about how the Tumblr culture of tags and of talking about gender and gender identity has helped foster a significantly more nuanced language around what gender and gender identity look like: moving beyond male/female binaries into genderqueer identities, LGBTI identities, things like that.

And similarly we could talk about this with race and class, and the decision to flatten race into these big categories. Instead, we could look at the nuances of what people's racial and ethnic identities look like. It seems like there's an opportunity to make the body more visible, and more visible in its specificity, and that rather than erasing the body it's about bringing out our humanity.

Of course there are risks to this, especially if you are highlighting a marginalized identity; that can create an unsafe space for these communities. But at the same time, making one's body visible, making it visible to other people, becomes an important way for marginalized communities especially to find identities that may not be represented in mainstream media. And I think the trans and LGBTI communities in particular have a lot of great studies on how being visible to each other is itself a powerful act. So I think making our bodies more visible seems like a critical act of justice.

Further Reference

There was also a <a href="http://opentranscripts.org/transcript/biased-data-panel-qa/">Q&A session</a>.

Biased Data: A Panel Discussion on Intersectionality and Internet Ethics at the Processing Foundation web site.
