
Technological Racism


I was very much into the discourse of technology good vs. technology bad. I devoted most of my academic life to the aesthetics of technology. One of my favourite books is Do Androids Dream of Electric Sheep? My favourite genre is sci-fi, always. I used to watch sci-fi films and try to analyse how hopeful the writers were based on the aesthetics of the film, independent of the plot. Blade Runner, bad. Alien 1, good. Alien 3, bad bad bad. I, Robot, good!


So, technology and I have a history. Not always a good history. Sometimes I think that technology is my toxic lover. I always think of her when I’m researching something else, wishing I could come back to her, but when I do she drains me. She makes me think that I can unveil some deep knowledge of the world through her, but so far that hasn’t happened (unless you count capitalism, which is hardly a hidden truth). And like in a toxic relationship, my fascination with her is such that I keep hoping that something will change. There is also a bit of gaslighting there, because most of the time I think it’s my fault, perhaps because I’m not intelligent enough to understand her.


In my non-self-deprecating days, I am more inclined to think of technology as an impressionist painting. From afar it looks so fascinating and complex that you can’t help but come closer to see how the magic happens. And all you see up close is splotches of paint with no connection between them. What I mean by this is that our technologies don’t have a mind of their own: they are inextricably tied to us users, to society as a whole, to capitalism, and to a million other things. So it is impossible to approach the subject as a whole, and when you do come closer, it loses its fascinating power. Because up close it’s not robots taking over; it’s mostly us humans being assholes. And that story, that story we know all too well.


Lately, I’ve been focusing on racism in technology. I think most of us are more enlightened about the issues of racism today than we were a year ago. If you are white, privileged, live in a city and are reading this (damn you, algorithm that only takes us to the like-minded corners of the internet), it probably means that you and your circle are not racist, or at least you don’t mean to be. Something many of us found out over the past year is how embedded racism is in our society, and not only in the southern states of America, and not only in small towns. It’s not a cliché, it’s a fact. Sometimes it happens as a sin of omission. Now, if you are Catholic, omission is only a sin if it happens deliberately and freely. You might not mean it, but you still need to fix it.


And this omission is flagrant in technology; it’s like they can’t even fathom that some humans are not white. Silicon Valley and its bro culture don’t help. They are so out of touch with the needs of anyone different from them. Remember when NASA sent the first American woman, Sally Ride, to space and the engineers asked her if 100 tampons would be enough for 7 days? This is something similar.


Lost in COMPAS


In 2016, the technology and society journalist Julia Angwin and her colleagues at ProPublica published a very controversial article. She had been investigating the COMPAS algorithm, used in some US states to help make risk assessments of criminal defendants, for example when they are seeking parole. COMPAS analyses information about the defendant, such as their criminal record, age, education level and their answers to an interview questionnaire, to determine whether the person is likely to reoffend.


When COMPAS processes the data, it assigns a score related to the probability that the defendant will be arrested again for another offence. What Julia discovered is that the false positive rate for black defendants was 45%, compared to a mere 23% for white defendants. Black people were being placed in a higher risk category than they should have been, even when they ended up not committing another crime. The creators of the algorithm claimed that their model was equally well calibrated for white and black defendants.


Let’s keep this in mind; this was not a Twitter scrap. These were proper articles with research, statistical analyses and well-thought-out arguments. When the creators of the algorithm said that their numbers were right, they had the data to back it up. Also, the issue was already well known in the field back then. Nowhere near “is Jeffree Star dating Kanye West?” levels of attention, but still.


Using COMPAS, there is a 60% probability that a black defendant will be classified as high risk, against 34.8% for white defendants. But there is also a bigger margin of error for the black population than for the white one. That means that, if you are black, you could be given a longer sentence than you deserve, or be denied parole even though you were not gonna reoffend. On the other hand, if you are white, you are more likely to get a false negative: to be released early even if you are likely to commit a crime again. That’s worrisome.
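If you’re curious where percentages like these come from, here is a minimal sketch in Python of how false positive and false negative rates are computed per group. The numbers are invented for illustration, roughly echoing the published figures; they are not the actual COMPAS records.

```python
# Toy illustration of per-group error rates, in the style of the ProPublica
# analysis. The records below are made up; they are not real COMPAS data.

def error_rates(records):
    """records: list of (predicted_high_risk, actually_reoffended) pairs."""
    fp = sum(1 for pred, actual in records if pred and not actual)
    tn = sum(1 for pred, actual in records if not pred and not actual)
    fn = sum(1 for pred, actual in records if not pred and actual)
    tp = sum(1 for pred, actual in records if pred and actual)
    return {
        # labelled high risk but did not reoffend
        "false_positive_rate": fp / (fp + tn),
        # labelled low risk but did reoffend
        "false_negative_rate": fn / (fn + tp),
    }

# Two hypothetical groups of 200 defendants each.
group_a = ([(True, False)] * 45 + [(False, False)] * 55 +
           [(True, True)] * 72 + [(False, True)] * 28)
group_b = ([(True, False)] * 23 + [(False, False)] * 77 +
           [(True, True)] * 52 + [(False, True)] * 48)

print("group A:", error_rates(group_a))  # FPR 0.45, FNR 0.28
print("group B:", error_rates(group_b))  # FPR 0.23, FNR 0.48
```

In this made-up example both groups have almost the same overall accuracy, yet the errors point in opposite directions: one group is over-flagged, the other under-flagged.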


The mathematician David Sumpter crunched the numbers too. Then he called for backup to go over the numbers again. To save us all the hard maths, what he found is that if you calibrate the algorithm to avoid bias, unfairness rears its ugly head somewhere else. He said it is “like those whack-a-mole games at the fairground where the mole keeps popping up in different places”. If you try to fix the bias of the algorithm, false positives and negatives appear anyway, and there is always some group that is discriminated against. The algorithm doesn’t understand the nuances of social justice; it just crunches the numbers. We are the ones who, without meaning to, introduce bias into our technologies.
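To get a feel for Sumpter’s whack-a-mole, here is a small simulation, again with invented numbers rather than real COMPAS data. Two groups are given a perfectly calibrated risk score, but because their underlying reoffending rates differ, the false positive rates still come out unequal.

```python
# Sketch of the calibration-vs-equal-error-rates tension, on synthetic data.
import numpy as np

rng = np.random.default_rng(0)

def simulate_group(base_rate, n=200_000):
    """Simulate a group whose risk score is calibrated by construction:
    someone with score s reoffends with probability exactly s."""
    scores = np.clip(rng.normal(base_rate, 0.15, size=n), 0.01, 0.99)
    reoffends = rng.random(n) < scores
    return scores, reoffends

def false_positive_rate(scores, reoffends, threshold=0.5):
    high_risk = scores >= threshold
    did_not_reoffend = ~reoffends
    return (high_risk & did_not_reoffend).sum() / did_not_reoffend.sum()

# Two hypothetical groups that differ only in their underlying base rate.
scores_a, reoff_a = simulate_group(base_rate=0.45)
scores_b, reoff_b = simulate_group(base_rate=0.30)

print("FPR, higher base-rate group:", round(false_positive_rate(scores_a, reoff_a), 3))
print("FPR, lower base-rate group: ", round(false_positive_rate(scores_b, reoff_b), 3))
```

Both scores satisfy one reasonable definition of fairness (calibration), yet the group with the higher base rate ends up with a clearly higher false positive rate: push the mole down in one hole and it pops up in another.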


Digital Redlining


I think the COMPAS example might be a bit intricate, but it’s worth mentioning because the confusing percentages and statistics show how puzzling biased technology can be. With the next examples, you’ll see that bias is not a possibility: it’s a fact.


Redlining is a prominent example of institutionalised racism. In the US during the Great Depression, the federal government created the Home Owners’ Loan Corporation to provide home loans at low interest rates. They created colour-coded maps of neighbourhoods: red areas were undesirable for a loan and green areas were low-risk. Other institutions and businesses started to use the maps. In California, Security First National Bank redlined neighbourhoods in central L.A. with notes that read “concentrations of Japanese and negroes”. In the US, if you can’t get a house in a decent neighbourhood, you can’t get access to the good school districts, so your children won’t have access to the best universities and, ultimately, to better-paid jobs. And so it goes.


Redlining was outlawed in 1968, but its consequences are still palpable today. Software is now being used to make decisions about insurance, loans and employment, and as we’ve seen with COMPAS, not even mathematical models are free from bias. Even if companies are not publicly redlining, that doesn’t mean their systems promote diversity. Redlining also appears in unexpected, and sneaky, ways. Like Pokémon Go.

Following government guidelines on social distancing since 2016

In 2016 it seemed that everyone was playing Pokémon Go. I was trotting through Edinburgh looking for PokéStops with a tablet, because my phone was not fancy enough to support the app. Meanwhile, in the US, entire minority neighbourhoods were being redlined. Users noticed that black neighbourhoods had fewer PokéStops and Gyms than white neighbourhoods, and created the Twitter hashtag #mypokehood to gather information about their locations. The areas in the US with fewer PokéStops included African-American areas of Detroit, Miami and Chicago, and, in New York, boroughs with large Hispanic and black populations such as Brooklyn and Queens.


Why is that? Because Pokémon Go uses the same base map as another augmented-reality game, Ingress. Ingress locations were submitted by its users to create portals or mark battlegrounds. As the typical player was mostly white, young and English-speaking, the locations Pokémon Go inherited overlook diversity.


Unrecognisable



Technologies are hostile to non-white people even in seemingly trivial matters, like water taps and soap dispensers. In this video, we can see that the automatic soap dispenser completely ignores the black hand, while the white guy gets his soap as soon as he approaches.

Get some soap! Not you tho

It’s not just that one racist tap. Nikon digital cameras also kept asking whether someone had blinked when photographing people with Asian features. The camera interpreted their eyes as closed because the developers didn’t take different eye shapes into account.

!!??

Like the guy in the soap dispenser video, we can laugh it off and think that those occurrences are harmless mistakes, oversights. But of course, it’s not just things like water taps. As facial recognition slowly seeps into our lives, it becomes ever more imperative to revise the criteria under which that technology is made. The computer scientist and digital activist Joy Buolamwini noticed the flaws in facial recognition as a graduate student at MIT, when she realised that dark-skinned faces were not being detected. In 2018 she published a study of the performance of three leading recognition systems, from Microsoft, IBM and Megvii. Buolamwini found that facial recognition identifies white men correctly 99% of the time but has an error rate of up to 35% for dark-skinned women. She even tried it on well-known black women such as Michelle Obama, Oprah and Serena Williams.


To teach software facial recognition, you need to create a training set with examples of faces. You feed the system different images along with the label “this is a face” or “this is not a face”, and over time the system starts recognising them. However, if you don’t show the computer a diverse set of faces, it will naturally fail to recognise whatever deviates from the norm it has learned.
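As a toy illustration of that last point, here is a sketch in Python: a “face vs not-a-face” classifier trained on synthetic points where one group supplies 95% of the face examples. Everything here is invented (the clusters stand in for image features) and it is not how any real system is built, but the pattern of the result is the same.

```python
# Toy demo: a classifier trained mostly on one group's "faces" misses the
# under-represented group far more often. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

def cluster(centre, n):
    """Sample n two-dimensional points around a centre (stand-ins for image features)."""
    return rng.normal(centre, 1.0, size=(n, 2))

# Training set: non-faces, plus faces that are 95% group A and 5% group B.
non_faces = cluster([0.0, 0.0], 1000)
faces_a = cluster([4.0, 4.0], 950)   # heavily represented
faces_b = cluster([2.0, 2.0], 50)    # barely represented

X_train = np.vstack([non_faces, faces_a, faces_b])
y_train = np.array([0] * 1000 + [1] * 1000)  # 1 means "this is a face"

model = LogisticRegression().fit(X_train, y_train)

# Evaluate on fresh, equally sized samples of faces from each group.
test_a = cluster([4.0, 4.0], 5000)
test_b = cluster([2.0, 2.0], 5000)
print("fraction of group A faces detected:", model.predict(test_a).mean())
print("fraction of group B faces detected:", model.predict(test_b).mean())
```

Group A faces are detected almost every time, while a much larger share of group B faces are missed, simply because the model barely saw them during training. Balance the training set and the gap shrinks.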


This is especially concerning because facial recognition is already being used by law enforcement. The Washington County Sheriff’s Office in Oregon uses Amazon Rekognition to match mug shots from its jail against photos and videos sent in by eyewitnesses or captured by retail stores’ surveillance cameras, to see if the person caught on camera is whoever they are looking for. The airline JetBlue has begun using facial recognition in the US to board certain flights. Although the programme aims to catch people who overstay their visas, it is likely that face recognition will eventually become too convenient to avoid. You can opt out and have your documents scanned instead, but when facial recognition is everywhere at the airport, are you gonna do that at every step of the way? At baggage drop, security control and the boarding gate? At the end of the day, when it comes to technology, comfort always wins.



Code warriors


Joy Buolamwini

So, if facial recognition will eventually become ubiquitous, it becomes (even more) imperative to put systems in place that exclude discriminatory practices. In 2016, Buolamwini created the Algorithmic Justice League, an organisation that raises public awareness about the impacts of AI. What Buolamwini says is that we need to strive for inclusive coding, and for that to happen, companies must make some changes.


Firstly, organisations should have a diverse workforce, with teams from different backgrounds able to check each other’s blind spots. Secondly, when developing systems, teams need to factor in fairness the same way they factor in efficiency. It couldn’t be put more beautifully than Buolamwini puts it:

“Why we code matters. We’ve used tools of computational creation to unlock immense wealth. We now have the opportunity to unlock even greater equality if we make social change a priority and not an afterthought.”

This way, racist technologies will not have to be corrected; they will never be implemented to begin with. That is the real goal.


References


  • Akhtar, Allana. “Is Pokémon Go racist? How the app may be redlining communities of color”, USA Today, Aug 9, 2016. https://eu.usatoday.com/story/tech/news/2016/08/09/pokemon-go-racist-app-redlining-communities-color-racist-pokestops-gyms/87732734/

  • Bogado, Aura. “Gotta catch ‘em all? It’s a lot easier if you’re white”, Grist, Jul 19, 2016. https://grist.org/justice/gotta-catch-em-all-its-a-lot-easier-if-youre-white/

  • Harwell, Drew. “Facial-recognition use by federal agencies draws lawmakers’ anger”, The Washington Post, July 9, 2019. https://www.washingtonpost.com/technology/2019/07/09/facial-recognition-use-by-federal-agencies-draws-lawmakers-anger/

  • Larson, Jeff; Mattu, Surya; Kirchner, Lauren and Angwin, Julia. “How We Analyzed the COMPAS Recidivism Algorithm”, ProPublica, May 23, 2016. https://www.propublica.org/article/how-we-analyzed-the-compas-recidivism-algorithm

  • Lohr, Steve. “Facial Recognition Is Accurate, if You’re a White Guy”, The New York Times, Feb. 9, 2018. https://www.nytimes.com/2018/02/09/technology/facial-recognition-race-artificial-intelligence.html

  • Sharp, Gwen. “Nikon Camera Says Asians Are Always Blinking”, The Society Pages, May 29, 2009. https://thesocietypages.org/socimages/2009/05/29/nikon-camera-says-asians-are-always-blinking/

  • Smith, Gary. “High-tech redlining: AI is quietly upgrading institutional racism”, Fast Company, Nov. 20, 2018. https://www.fastcompany.com/90269688/high-tech-redlining-ai-is-quietly-upgrading-institutional-racism

  • Sumpter, David. Outnumbered: From Facebook and Google to Fake News and Filter-bubbles – the Algorithms that control our lives. London: Bloomsbury Sigma, 2018.
