Algorithms Don’t Think About Race. So Tech Giants Need To.

Recently, during a presentation on subject access for gender and sexuality to an audience of library professionals — all of whom (including myself) happened to be white — I demonstrated how the terms used to describe major North American racial categories vary across three databases.

One of the participants spoke up. “I find those offensive,” she said. “Why do we need to think about race? I don’t think it’s relevant!” I was a bit taken aback by her response, as both the previous speaker and I had addressed the theme of intersectionality — how we’re not just men, women, trans, white, black, Indigenous, able-bodied, straight, gay and so on, but combinations of all of these factors. Several other audience members joined me in pointing out that many people are directly and negatively affected by how society responds to their race, so yes, it is an important factor that we need to discuss. Indeed, argued one participant, the very absence of diverse faces in the room was likely a symptom of structural racism.

“I just don’t get that!” she countered. “I don’t think about race, I just don’t. It’s not important!” The conversation continued for a few minutes and I managed to steer the presentation back to the slides I’d prepared, but the woman — still clearly agitated — got up and left shortly afterwards.

Her response isn’t unusual: open discussion of race is often met with a hostile reaction from whites who counter that merely raising the issue is itself racist. I realized afterwards that what I should have articulated more clearly is that the intention of such a discussion is not to accuse anyone of being a racist, but to acknowledge that because we are all socialized within a structurally racialized system, we are all affected by race whether we say we think about it or not. Since I had discussed it earlier in the session, I should also have referred back to Sanford Berman’s work on Library of Congress Subject Headings to show how structural racism in the real world can be reproduced in the language of library catalogues, databases and search engines.

This latter phenomenon has gained increasing attention over the past year as journalists and activists have documented apparent bias in mainstream search engines. In mid-2016, a viral YouTube video demonstrated how a search for “three black teenagers” returned primarily pictures of criminal suspects, while the same search for white teenagers showed happy, well-dressed young people. In a related story from later that year, British journalist Carole Cadwalladr typed the beginnings of racially oriented questions into Google and got appalling results from the site’s auto-complete feature:

[Google] offered me a choice of potential questions it thought I might want to ask: “are jews a race?”, “are jews white?”, “are jews christians?”, and finally, “are jews evil?” Are Jews evil? It’s not a question I’ve ever thought of asking. I hadn’t gone looking for it. But there it was. I press enter. A page of results appears. This was Google’s question. And this was Google’s answer: Jews are evil. Because there, on my screen, was the proof: an entire page of results, nine out of 10 of which “confirm” this.

I decided to try some related searches myself, and was equally disgusted with the results:

[Screenshots: Google and Bing suggestions and results for Black Lives Matter-related searches]

At the same time, I was working on updating my University Library’s research guide on Race, Racialization and Racism, and decided to link to some recent video content regarding the Black Lives Matter movement. Heading over to YouTube, I typed in “Black Lives Matter” and was soon shocked at what I saw: page after page after virulent page of videos — most of them featuring white speakers — that were blatantly anti-BLM, calling it “hateful,” the “new KKK,” “racist” and a “terrorist organization.” (I am deliberately not providing links to these videos.)

The prominence of such content in Google’s and YouTube’s search results stems from algorithms predicated on popularity and the needs of advertisers, not relevance, accuracy or reasonableness. Users may well take such results to be authoritative and “the truth,” but as media and cinema studies scholar Safiya Umoja Noble writes,

[i]t is dominant narratives about the objectivity and popularity of web search results that make misogynist or racist search results appear to be natural. Not only do they seem “normal” due to the technological blind spots of users who are unable to see the commercial interests operating in the background of search (deliberately obfuscated from their view), they also seem completely unavoidable because of the perceived “popularity” of sites as the factor that lifts websites to the top of the results’ pile. Furthermore, general belief in myths of digital democracy emblematized in Google and its search results means that users of Google give consent to the algorithms’ legitimacy through their continued use of the product, despite its ineffective inclusion of websites that are decontextualized from social meaning, and Google’s wholesale abandonment of responsibility for its search results.

The potentially lethal consequences of this kind of abandonment were made starkly clear after the trial of Dylann Roof, who was convicted and sentenced to death for murdering nine people in the hope of launching a race war, when it was revealed that his goal had been set in motion by his immersion in racist Internet articles:

Roof’s radicalization began, as he later wrote in an online manifesto, when he typed the words “black on White crime” into Google and found what he described as “pages upon pages of these brutal black on White murders.” The first web pages he found were produced by the Council of Conservative Citizens, a crudely racist group that once called black people a “retrograde species of humanity.” Roof wrote that he has “never been the same since that day.” As he delved deeper, because of the way Google’s search algorithm worked, he was immersed in hate materials. Google says its algorithm takes into account how trustworthy, reputable or authoritative a source is. In Roof’s case, it clearly did not.

Facebook, too, has run into trouble for its reliance on algorithms, which leaves users facing ubiquitous “fake news” originating on the far right, a problem compounded by its decision to eschew a tag for Black Lives Matter:

While Facebook has attempted to profess that algorithms are somehow neutral, many people have pointed out that an algorithm also represents an editorial decision—the instructions that coders pour into it are just as subject to human values and bias as other choices.

In much the same way that accusing individuals of racism misses the larger point, we need to recognize that tech giants such as Google and Facebook aren’t deliberately, consciously racist. However, by basing their operations on supposedly “neutral” algorithms that don’t account for structural racism in the broader society, they can’t help but occasionally produce racialized results — with sometimes deadly consequences.
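To make that point concrete, here is a minimal, hypothetical sketch in Python of two rankers over the same data; the results, the trust_score field and the weights are all invented for illustration, and no real search engine works this simply. Sorting purely by engagement looks “neutral,” but choosing engagement as the only signal is itself an editorial decision, just as much as choosing to weight source trustworthiness.

```python
# Hypothetical sketch: the "instructions coders pour into" a ranker are editorial choices.
# All data, fields and weights below are invented for illustration only.

from dataclasses import dataclass

@dataclass
class Result:
    title: str
    clicks: int          # engagement signal ("popularity")
    trust_score: float   # invented 0-1 judgment of source reliability

# Invented example results for a single query.
results = [
    Result("Sensational conspiracy post", clicks=90_000, trust_score=0.10),
    Result("Reported news article",       clicks=12_000, trust_score=0.90),
    Result("Academic explainer",          clicks=3_000,  trust_score=0.95),
]

def rank_by_popularity(items):
    """The 'neutral'-looking ranking: sort purely by engagement."""
    return sorted(items, key=lambda r: r.clicks, reverse=True)

def rank_with_trust(items, trust_weight=0.7):
    """Same data, different editorial choice: blend engagement with trust.
    Picking the weight (here 0.7) is itself a value judgment."""
    max_clicks = max(r.clicks for r in items)
    def score(r):
        popularity = r.clicks / max_clicks
        return (1 - trust_weight) * popularity + trust_weight * r.trust_score
    return sorted(items, key=score, reverse=True)

if __name__ == "__main__":
    print([r.title for r in rank_by_popularity(results)])  # conspiracy post first
    print([r.title for r in rank_with_trust(results)])     # conspiracy post last
```

Run as written, the first ranker surfaces the sensational post while the second demotes it. The data never changes; only the values encoded in the scoring function do.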

To address this, more curation is required on the part of tech companies. Search engines should not be auto-suggesting racist search queries, negatively portraying racial groups in image results, front-loading blatantly racist videos in response to a general query, or immersing users in racist content without balancing results from anti-racist websites. Just as claiming one doesn’t think about race is in fact a decision to think about race in a certain way, so too is the claim that an algorithm is neutral.
