A common test to evaluate people's implicit bias has been 'oversold,' U of T researcher says

Ulrich Schimmack's research raises questions about the accuracy of the Implicit Association Test, which is used to test people for implicit bias (photo by Drew Lesiuczok)

For decades, the psychology community has been using the Implicit Association Test (IAT) to determine if someone harbours hidden prejudices.

The tool was developed at Harvard University and is supposed to reveal hidden truths about test-takers, including whether they suffer from implicit bias that results in unintentional racism. Freely available to anyone, the test has been used in countless academic studies and has been employed by companies and institutions in their implicit bias training sessions.

But Ulrich Schimmack, a psychology professor at the University of Toronto Mississauga, says his research suggests the test can’t delve into our unconscious and reveal our hidden prejudices.

“[It’s] basically a roulette wheel that comes up with a random number,” he says.

He adds the test became popular after it was published some 20 years ago because psychologists had been fascinated with the unconscious but had no scientific method to measure it. The promise of the IAT was that it would “finally provide a scientific window to the unconscious.”

Schimmack, who studies happiness and well-being, was curious whether he could use the same method in his research. If he could, he would no longer have to rely solely on self-reports, in which it’s more difficult for people to admit their lives aren’t ideal. He and his team undertook a study to see if the measure was valid.

“We found our happiness IAT was giving us random results. It wasn’t useful at all to measure a person’s happiness,” Schimmack says. “Then you wonder, if it’s not working for this, when does it work and what does it really measure?”

He published those first findings 10 years ago and continued his research without using the IAT. More recently, though, he noticed another decade had passed and the test was still being widely used and taken at face value, so he took a closer look at the tool.

Those who take an IAT are required to categorize concepts as quickly as possible. In the racial bias test, participants are asked to link people of European descent with words related to “bad” concepts and people of African American descent with words related to “good” concepts, and vice versa. Based on the speed and accuracy of responses across the two pairings, a score is calculated and a person’s implicit bias is inferred.
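To make the scoring idea concrete, here is a minimal sketch in Python of how a difference-in-latency score of this kind can be computed: the gap between average response times in the two pairing conditions, scaled by the pooled standard deviation. The data and function names are hypothetical, and this is only an illustration of the general approach, not the actual scoring procedure used by Project Implicit or in Schimmack’s research.

from statistics import mean, stdev

def iat_style_score(compatible_rts, incompatible_rts):
    # Difference in mean response latencies between the two pairing blocks,
    # divided by the pooled standard deviation of all latencies.
    latency_gap = mean(incompatible_rts) - mean(compatible_rts)
    pooled_sd = stdev(compatible_rts + incompatible_rts)
    return latency_gap / pooled_sd

# Hypothetical reaction times in milliseconds for the two pairings.
block_a = [620, 580, 640, 600, 590]   # one pairing of categories and words
block_b = [710, 690, 760, 720, 700]   # the reversed pairing
print(round(iat_style_score(block_a, block_b), 2))

As the next paragraph notes, anything unrelated to attitudes that shifts these raw latencies shifts the resulting score as well.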

Schimmack says many things can influence how quickly people complete the task, introducing sources of variation that have nothing to do with attitudes. The scores can also diverge from a person’s actual behaviour.

“Twenty years of research produced very little evidence that the IAT test predicts any real-world behaviour,” Schimmack says. “On top of that, some of the articles that claim it does, on close inspection, fail to show that.”

For example, one study claimed the IAT could predict that some people would not vote for former U.S. President Barack Obama due to anti-Black implicit bias. But Schimmack says that, on closer inspection, the claim did not hold up.

Schimmack says his colleague also found that, in simulations, IAT scores could not predict whether a police officer was more likely to shoot a Black suspect or a white suspect.

Others have also criticized the authors of the IAT for making bold claims without demonstrating that their measurement predicts any real-world behaviour of discrimination, Schimmack notes.

Racial bias is a reality, Schimmack says, but the problem is that too many discussions of the issue are based on research findings that rely on flawed IAT measures. For example, some argue implicit bias training is not useful because it doesn’t change IAT scores. But if IAT scores are not valid in the first place, they cannot serve as an effective measure of whether the training worked.

“(IAT) shouldn’t be used as a criterion to evaluate whether implicit bias training was successful. It shouldn’t be used in a training context,” Schimmack concludes, adding that it’s questionable, if not unethical, for a test to claim to provide scientific feedback if that’s not the case.

“For the most part it has been oversold and it offers very little,” Schimmack says.
