Implicit and Unconscious Bias in the Digital World
In my ongoing search for savvy resources about building equity, inclusion, and diversity into online training courses, I stumbled across some fascinating studies on implicit bias and how it shapes the apps and online services we use, as well as the artificial intelligence (AI) that fuels them.
For context, there are two terms commonly used in these discussions. Implicit bias includes attitudes or stereotypes that affect our understanding, actions and decisions in an unconscious manner. Unconscious bias refers to the backgrounds, experiences and stereotypes that shape our decision-making, including the quick judgments we make about people and contexts without realizing what’s actually at play, or at least what’s under the surface. What I found most compelling in these studies is the notion that many of our deeply rooted associations do not necessarily align with our declared beliefs or even reflect stances we would explicitly endorse. Wow. (To dive right in and see where your biases might be lurking, I encourage you to take Harvard’s Implicit Association Test (IAT) – and then spend the week questioning everything you think you thought.)
Reinforcing What We Hope to Erase?
AI-powered systems use historical data to make judgments, just like our brains do. (Well, not always quite like our brains, but tech is getting close.) Fun fact: historical data is often chock full o’ bias, and training on that data encodes the information (and, thus, all that bias) into the programs it feeds. One study cautions that AI has the potential to reinforce existing biases because, unlike humans, algorithms may be unequipped to consciously counteract learned biases. Examples of sexist and racist bias appear in algorithms that make language and facial associations. What we’ve already seen pop up across platforms are linguistic connections between traditionally gendered roles – matching “female” and “woman” to “homemaker” and arts and humanities careers, and “male” and “man” to science, tech and construction professions. Even Google Translate has shown signs of sexism when translating from a gender-neutral language like Turkish, automatically suggesting pronouns like “he” for stereotypically male jobs and “she” for stereotypically female ones.
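To make those “language associations” concrete, here’s how researchers typically probe a word-embedding model: measure the cosine similarity between word vectors and see which words cluster together. The four-dimensional vectors below are toy, hand-made values (real embeddings are learned from billions of words, bias included), so treat this as a minimal sketch of the measurement, not of any real model:

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity: +1.0 = same direction, -1.0 = opposite."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy 4-dimensional "embeddings" -- hand-made, illustrative values only.
# Real models learn these vectors from huge text corpora, bias included.
vec = {
    "woman":     np.array([ 0.9, 0.1, 0.3, 0.0]),
    "man":       np.array([-0.9, 0.1, 0.3, 0.0]),
    "homemaker": np.array([ 0.8, 0.2, 0.1, 0.1]),
    "engineer":  np.array([-0.7, 0.3, 0.2, 0.1]),
}

for job in ("homemaker", "engineer"):
    print(f"{job:>9}  woman: {cosine(vec['woman'], vec[job]):+.2f}"
          f"  man: {cosine(vec['man'], vec[job]):+.2f}")
```

With these toy numbers, “homemaker” lands near “woman” and “engineer” lands near “man” – the same pattern the studies found in embeddings trained on real web text.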
It gets worse, at least linguistically. AI systems have been shown to more readily associate typical European (read: white) names with positive words, while African American names are more often associated with negative words. (Everyone can see this is problematic, correct? Again, take the Harvard IAT and have your mind blown even more.)
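The measurement behind that finding is an embedding association test, modeled on the IAT: score each name by how much more similar it is to a set of “pleasant” words than to a set of “unpleasant” ones. Here’s a minimal sketch of the scoring, again with hand-made toy vectors and placeholder names rather than data from the actual study:

```python
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hand-made toy vectors; a real test uses embeddings trained on web text.
emb = {
    "name_a": np.array([0.9, 0.2]),  # stands in for a European American name
    "name_b": np.array([0.2, 0.9]),  # stands in for an African American name
    "pleasant":   [np.array([1.0, 0.1]), np.array([0.8, 0.2])],
    "unpleasant": [np.array([0.1, 1.0]), np.array([0.2, 0.8])],
}

def association(name):
    """Mean similarity to the pleasant words minus mean similarity to the
    unpleasant ones. Positive = leans pleasant; negative = leans unpleasant."""
    pleasant = np.mean([cosine(emb[name], v) for v in emb["pleasant"]])
    unpleasant = np.mean([cosine(emb[name], v) for v in emb["unpleasant"]])
    return pleasant - unpleasant

for name in ("name_a", "name_b"):
    print(name, f"{association(name):+.2f}")
```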
There’s Work to Be Done
The good news is that we humans can shift far, far away from our learned biases, lose those crappy stereotypes and attitudes, and avoid making them worse, which, unfortunately, is what some programs and apps might actually be doing. Tech challenge: Can we program algorithms…to consciously counteract learned biases? We’re going to have to if we want to hack away at, rather than reinforce, systemic sexism and racism; one idea from the research literature is sketched below. (Is it just me or does there seem to be an opportunity for implicit/unconscious bias eradication training for all AI programmers across the land?)
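For the record, researchers have proposed exactly this kind of counteraction for word embeddings: estimate a “gender direction” in the vector space and project it out of words that should be gender-neutral (that’s the gist of published “hard debiasing” work). Here’s a toy sketch, reusing the hand-made vectors from earlier; a real pipeline would estimate the direction from many word pairs and decide carefully which words to neutralize:

```python
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def neutralize(v, direction):
    """Remove the component of v that lies along a bias direction."""
    d = direction / np.linalg.norm(direction)
    return v - np.dot(v, d) * d

# Toy vectors (same illustrative values as the earlier sketch).
woman    = np.array([ 0.9, 0.1, 0.3, 0.0])
man      = np.array([-0.9, 0.1, 0.3, 0.0])
engineer = np.array([-0.7, 0.3, 0.2, 0.1])

gender_direction = woman - man  # a crude, one-pair estimate of the bias axis
fixed = neutralize(engineer, gender_direction)

print("before  woman:", round(cosine(woman, engineer), 2),
      " man:", round(cosine(man, engineer), 2))
print("after   woman:", round(cosine(woman, fixed), 2),
      " man:", round(cosine(man, fixed), 2))
# After neutralizing, "engineer" is equally similar to "woman" and "man".
```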
It’ll be interesting to see where such studies and awareness actually lead the tech world. For those of us building our own platforms for learning, we can always do better by constantly considering the race, gender, sexual orientation, age and ability of our audiences and choosing language and imagery that represent that rich diversity. And when we uncover and understand our own implicit biases, we’ll be better able to address equity and inclusion opportunities in all aspects of our professional and personal lives.