Earlier this summer, my sister and I were wandering Best Buy, and she was very excited to show me a fridge that lights up when you knock on it. I was deeply alarmed by this (and Lauren still thinks I’m being ridiculous) and went on to ramble for a good while about the perils of inserting automated gestural interfaces into household technologies like fridges. We then saw Amazon’s Alexa, which naturally alarmed me even more for its ability to infiltrate and surveil all of your household activities. Technology is designed to interact with us and to be interacted with, and the more we automate its processes, the more humanized the technology becomes, so that it fits smoothly into our everyday lives. And as we design technology to be humanized artificial intelligence, we embed very real human bias into that artificial intelligence.
Our technology tends to become a subconscious, habitual part of our daily lives. In fact, it’s designed to do this. We rarely think about holding our phones or using a computer for work, because this technology has very effectively become a natural extension of our thoughts and actions. Reaching for a phone or computer to communicate, write, consume, and do so many other things has become such a natural reaction that this technology has snuck its way into our habits. We no longer consciously choose to use our phones or computers, and as such they have shaped the way our habits are formed. This manifests in things as small as holding our bodies differently to accommodate a phone, or assuming that everyone around us has similar access to a phone or a computer and similar knowledge of how to use them. Personally, I think a particularly dangerous manifestation is the way that our view of technology as something inherently impartial bleeds into our habits.
Despite technology becoming such an omnipresent part of our culture and interactions, we don’t look at technology the same way we look at other people, which is logical enough. We see technology as a series of computations with a thinking capability different from our own. We recognize that when we talk to Siri on our iPhones, we can’t use the same social cues we use with our friends. The danger is that the key difference between our perception of technology and our perception of other humans is that we view technology as inherently impartial. We give technology a lot of assumed power when we allow ourselves to label judgement calls made by our technology unbiased and somehow more correct than judgements made by humans. We are cognizant of technology’s inherent inability to feel and perceive emotions the way we can, but we view this as something of an advantage for the technology. In doing so, we not only discredit the ability of emotional decisions to be powerful and correct in their own right, but we also discredit the very real biases in technology.
Though technology can feel like a black-boxed overlord beyond the reach of human influence, this is far from the truth. Technology can, indeed, function as a black-boxed overlord, but only as a result of the power we’ve given its decision making. Particularly in our present moment, with artificial intelligence a constantly growing phenomenon, we are infatuated with creating brains in computers that we think can somehow make decisions less biased than our own. They cannot. These machine brains are dripping with very real, very human bias. Everything that went into creating them, at every step, is human. From the human-curated data to the human-written code, from the human-designed models to the human-decided implementations, artificial intelligence can fundamentally never exist free of human input and human prejudice. And we see this at every step of the process.
Artificial intelligence is always based on some sort of data, whether images, numbers, or anything else. This data can be biased in countless ways: it can exclude certain groups of people, the methods used to collect it can be biased against certain groups, it can be collected in an environment that made a specific result more prevalent than it is in the rest of the world, and so on. Our tendency to say that numbers and data do not lie, that they are somehow inherently correct, feeds the technological bias long before the technology is even born. Similarly, technology development is a hugely biased process and a hugely biased industry. The tech world is a white, cis-male-dominated field, and the incredible amount of privilege its members hold matters enormously. The lack of diversity in tech means that technology is designed for a very specific type of person, furthering the racism, sexism, homophobia, transphobia, xenophobia, and more that we see outside of our technology. And this problem is so rarely addressed in tech because, in the same way that technology is viewed as objective and unbiased, white, cishet, and male are all seen as neutral traits. Anyone not white, cishet, or male is seen as the other. And, of course, this shows up in our technology in the same way that this bias has become an internalized habit in our everyday interactions.
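To make the data point concrete, here is a toy sketch of the mechanism, with entirely invented numbers and a deliberately simple “model” (a midpoint threshold, not any real system): when one group dominates the training data, the learned rule fits that group’s patterns and quietly fails on the underrepresented group, even though nothing in the code mentions groups at all.

```python
# Toy illustration of data bias, assuming an invented one-feature dataset:
# a threshold "classifier" fit mostly to group A's data works for group A
# and misclassifies group B, whose positive cases cluster differently.

def fit_threshold(samples):
    """Learn a cutoff as the midpoint between the two class means."""
    pos = [x for x, label in samples if label == 1]
    neg = [x for x, label in samples if label == 0]
    return (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2

def accuracy(threshold, samples):
    """Fraction of samples the cutoff classifies correctly."""
    return sum((x >= threshold) == bool(label) for x, label in samples) / len(samples)

# Group A dominates the training set; its positives cluster high.
group_a = [(0.2, 0), (0.3, 0), (0.8, 1), (0.9, 1)] * 25   # 100 samples
# Group B is barely represented, and its positives cluster lower.
group_b = [(0.1, 0), (0.45, 1)]                            # 2 samples

threshold = fit_threshold(group_a + group_b)

print(f"cutoff:  {threshold:.2f}")
print(f"group A: {accuracy(threshold, group_a):.0%} accurate")
print(f"group B: {accuracy(threshold, group_b):.0%} accurate")
```

Here group A scores perfectly while group B does no better than a coin flip, and the gap came entirely from who was and wasn’t in the data, long before any “decision” was made by the machine.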
All this to say: because we have spent so many years internalizing it, prejudice and bias do not always present as huge, glaring events. Bias starts in the details, and it is in every detail. I believe it is important to be incredibly wary of anything and everything we are unafraid to label neutral or unbiased, because all that label means is that the thing matches what the current political climate has normalized, and that is an eternally shifting and molding idea. The fact that technology has become so integrated into our habits, and will only continue to become more integrated in more ways, means that technology is biased in the same way our other interactions are. For technology to fit so smoothly into our lives, it must not jar with our perceptions of what is neutral. The bias is everywhere, right down to the things we come to view as mundane. It’s in sensors that don’t recognize black skin, in artificial intelligence trained on data sets that classify gender by specific features deeply rooted in a false binary, in artificial intelligence trained to see Muslims as inherently suspicious. It is everywhere.
Prejudice neither begins nor ends in Nazi rallies and presidential elections; it is always the result of a long history of bias built up from the details. So change, too, neither begins nor ends there. Change in the mundane, in the details we are not even conscious of, is an important way of fighting prejudice. We must address the ways our omnipresent technology feeds this bias, because otherwise its reach and its ability to influence us will only continue to grow, and continue to harm. Being actively aware of the ways our environment conditions us to normalize bias is work, but it is also an important method of resistance.