Has Research on Implicit Bias Produced Anything Important with Any Scientific Certainty?
A Conversation with Paul Bloom
Paul Bloom is a professor of psychology at the University of Toronto, after having spent 21 years as a named chair at Yale. I know that, for many of you, “Yale” brings up reasons to be sneeringly dismissive, some of them good (Halloween costumes, anyone? The demonization of White people?). But none of that applies to Paul. Over at his Substack, you will find, among other things, his personal compendium of credible findings in psychology, of which I present a few here:
Babies have a surprisingly sophisticated understanding of the physical and social world before their first birthday.
Our conscious experience of the world is sharply limited; if our attention is elsewhere, we often fail to see what’s right in front of our noses. Check out this classic demo.
Perception is a complex inferential process; what you see is influenced by your unconscious expectations of how the world works.
All sorts of psychological traits are heritable to a strong degree—not just the obvious ones such as intelligence, but also surprising ones like how religious you are.
Many sex differences are culture-specific, but others, such as differences in desire for sexual variety, are universal, showing up everywhere in the world.
We overestimate the likelihood of infrequent but conspicuous events, such as plane crashes and shark attacks.
There is more, but this is not a list compiled by the type of ideologically infused lunatic you often see me doing battle with here. You can find out more about Paul here.
Some months ago, when Paul posted on his Substack his analysis of the strengths and weaknesses of work on implicit bias, it caught my attention. My take was that he got the weaknesses that he identified right, missed some important ones, and overstated the strengths. And I said so to him directly, by commenting on his Substack post. He graciously replied. Contrary to much of what passes for academic discourse these days, this exchange was, I thought, respectful and even edifying. Therefore, I asked Paul’s permission to post the whole exchange here, which he graciously granted.
I start off with a synopsis of Paul’s post. At the end of my synopsis, I add a line like this one:

----------

to separate his main post from our exchange. Then, following that line, is our exchange in full, with some light editing for fluency but none of the substance changed. I hope you will enjoy it and maybe even get something out of it.
Synopsis of:
In Paul’s post, he starts off by declaring critics of implicit bias who call it nothing but “woke nonsense” to be wrong and also that “some social psychologists are weirdly dogmatic about it” and “endorse claims…that go way beyond what the studies show” including “smearing their critics as racist.”
So this is, in some sense, “balanced.” But the issue (taken up in the commentary exchange below) is not whether he sees strengths and weaknesses in both implicit bias research and its critics, but whether the particular strengths he identifies are actually strengths and whether the weaknesses he identifies fully capture the weaknesses of implicit bias research.
Paul then summarizes 90+ years of data showing, e.g., that studies of racial stereotypes found that, whereas 75-84% of those surveyed in 1933 believed Black people are superstitious and lazy, these numbers fell to 1-2% by 2000, and that the number of people saying they would vote for a qualified Black candidate for president of the U.S. rose from under 40% in the 1950s to about 95% by 2000.
I am pretty sure that the combination of massively declining prejudice according to surveys with the stubborn endurance of many gaps, disparities, and inequalities was one of the inspirations for implicit bias research. The psychology of that inspiration goes something like this: If people are saying they are way less bigoted than people used to say, wtf is going on? Maybe it’s unconscious! Maybe people are just as bigoted as they always were but are either lying or, worse, do not even know how bigoted they are! Enter implicit bias research!
Paul then goes on to define implicit bias this way:
They are the associations that we have with human groups, often with some sort of positive or negative feelings. Some examples are associating elderly people with incompetence, men with violence, women with nurturance, and Jews with finance.
I have quoted it here because, as you shall see from our exchange, my first objection to Paul’s post is that I think this definition is unjustified, so please keep his definition in mind.
Enter the IAT
Paul then introduces and explains the Implicit Association Test. I suspect few of my readers need this introduction, but, if you do, feel free to go back to Paul’s Substack essay providing an explanation. It is clear and succinct.
What is the main finding of IAT studies?
Paul states this (another point of disagreement discussed later):
These studies have been done with millions of people and have found that people have negative associations on the basis of race, age, sexual orientation, body weight, disability, and other factors.
He then goes on to discuss a slew of findings he considers credible and interesting and argues that implicit bias makes a difference in the real world. But he also discusses some claims about implicit bias that he thinks require walking back:
IAT scores are not useful for figuring out whether a person is racist because they are too noisy (measured with too much random error).
IAT scores do not mean everyone is racist. Paul reviewed research showing that one gets “bias” on IAT scores merely from knowing that some group (real or made up for the experiment) is oppressed. Negative associations with a group because one understands that they are oppressed is not, he argues, bigotry.
Paul then wraps up with an extensive consideration of what people should do about their implicit biases. Feel free to return to his Substack essay for this. Of course, whether his suggestions actually have anything to do with implicit bias depends heavily on the assumption that implicit bias is scientifically sufficiently well-established to warrant doing something about — which was basically the core of our differences in the exchange that follows below.[1]
Comment Thread Between Lee Jussim and Paul Bloom
October 20, 2023
Lee Jussim:
Hi Paul. I reject the reasonableness of defining an “association” as a bias. Indeed, defining “bias” is pretty tricky. Sometimes, it is mere preference. Other times, it is deviation from a normative statistical model. Yet other times, it is some sort of systematic error. Yet other times, it is two people judging the same stimuli differently. Associations are none of these things. Associations may predict some of these things some of the time, but: 1. That is an empirical question each time; 2. If A (associations) predicts B (preferences, behavior, judgments), then A and B must be different things. Put differently, I associate ham with cheese. To refer to this as any sort of bias (not that you did, but it is an association, which you did state is a bias) strikes me as unjustified. Then you get the nasty downstream effects, whereby A is amply demonstrated (association) and then simply presumed to constitute some sort of nasty bias (e.g., racial prejudice or stereotypes). Very motte (there is an association!) and bailey (look how racist they all are!).
It then goes downhill from there. Even when associations predict some outcome, it does not necessarily constitute the bailey. Say IAT scores predict discrimination, r=.2 or so. This can occur because high racism IAT scores correspond to racist behavior, 0 corresponds to egalitarian behavior and negative scores to anti-White behavior. Or it can occur because high “racism” IAT scores correspond to egalitarian behavior and ones near zero to anti-White behavior. For a real example, see Blanton et al. (2009) in the Journal of Applied Psychology.
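The zero-point argument here can be illustrated with a small simulation (a hypothetical sketch, not from the exchange; the variable names, slope, and numbers are mine): a Pearson correlation is unchanged by shifting either variable, so an r of .2 between IAT scores and behavior is equally consistent with a world where a score of 0 marks egalitarian behavior and a world where some score well above 0 does.

```python
import random

random.seed(0)
n = 1000
# Hypothetical IAT D-scores with the typical positive mean.
iat = [random.gauss(0.4, 0.4) for _ in range(n)]
noise = [random.gauss(0.0, 1.0) for _ in range(n)]

def corr(xs, ys):
    """Pearson correlation, computed from scratch."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# World A: a score of 0 corresponds to neutral (egalitarian) behavior.
behavior_a = [0.5 * x + e for x, e in zip(iat, noise)]
# World B: same slope, but neutral behavior sits at a score of 0.5, so
# most "positive" scores correspond to roughly egalitarian behavior.
behavior_b = [0.5 * (x - 0.5) + e for x, e in zip(iat, noise)]

# r is blind to where the behavioral zero point lies on the IAT scale:
# the two correlations are identical (up to floating-point rounding).
print(abs(corr(iat, behavior_a) - corr(iat, behavior_b)) < 1e-9)  # True
```

Because r only measures covariation around the means, the same correlation shows up in both worlds; mapping a given score to “biased” versus “egalitarian” behavior requires separate calibration evidence, which is the point of the Blanton et al. example.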
Paul Bloom:
Hi, Lee.
I’m having a tough time connecting your comments with my post. I say that implicit biases are associations we have with groups, often with some sort of positive or negative feelings. So, no, you’re wrong -- for me, the association between ham and cheese isn’t a bias.
I don’t say or imply that biases are nasty or racist (though I think some are). I mean, I have a section called “Does implicit bias mean that we are all, in general, racist?” and then write “My answer is: No”. Hard to be more clear than that. And since I argue against the predictive power of IATs and say that they don’t predict discrimination, I don’t get the point of your last paragraph at all.
I’m very interested in your views on this topic, and would love your feedback, but I worry that you wrote your comment without actually reading my post.
Lee Jussim:
I read the whole thing carefully, Paul. I was responding to this, which strikes me as the definitional glue holding together much of the post. You wrote:
“What are implicit biases? They are the associations that we have with human groups, often with some sort of positive or negative feelings. Some examples are associating elderly people with incompetence, men with violence, women with nurturance, and Jews with finance.”
So your answer to the question, “What are implicit biases” is associations with human groups. My comment addresses that head-on and says, “not so fast.”
You then continue:
“These studies have been done with millions of people and have found that people have negative associations on the basis of race, age, sexual orientation, body weight, disability, and other factors.”
Well, maybe. They show different reaction times to various double paired-then-inverted-double-pairs of concepts. But the entire point of my comment (and, really Blanton et al.’s paper) is that 0 does not necessarily correspond to egalitarian beliefs, judgments or behaviors. Indeed, they found that IAT scores much higher than 0 (typically interpreted as “negative associations”) correspond to egalitarian-ness. If scores much higher than 0 typically or even intermittently correspond to egalitarian-ness, then it is not justified to interpret the many studies showing IAT>0 as “negative associations” in any meaningful way with respect to any sort of bias.
I decided to comment because, though I do think you did a good job here capturing some of the limitations and interpretive problems with IAT research, the post understates known limitations to the IAT that undercut justifications for interpreting findings as bias (as defined in any of the ways listed in my first comment).
Paul Bloom:
Hi Lee. You say that IAT scores don’t reliably connect to how “egalitarian” people are. Are you sure we disagree about this? Maybe we’re using the word in different ways, but I certainly wouldn’t say that the IAT measures how egalitarian we are!
But, yes, I do think the IAT can detect (roughly) “negative associations”. If you are quicker to connect terms like “bad” to the elderly (vs. positive terms like “good”), say, this suggests that we have a negative association with elderly people. I guess we disagree about this?
Lee Jussim:
It is a reaction time difference, not a bias. Whether any particular score, including a faster response to elderly-bad/young-good than to elderly-good/young-bad reflects a “negative association” in any sense other than “speed-of-reaction-time” is precisely what is at issue. And yes, I am saying that a faster response on the IAT to elderly-bad/young-good than to elderly-good/young-bad does not necessarily mean “negative association” in any conventional meaning of the word “negative.”
Relatedly, you did define implicit bias as an association involving human groups, so that would exclude my ham and cheese example (though, cognitively, I do not see why it should, but it’s your definition, so I’ll go with it). I associate the French with wine, pro baseball players with athleticism, and rural dwellers in the U.S. with Trump support. I do not see how any of these would constitute bias for any conventional meaning of the term bias. If one agrees that some associations are not bias, then bias cannot be defined as associations of concepts with human groups. It remains possible that some associations of concepts with human groups are some sort of bias, but one would then need to define bias as something other than mere associations.
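For readers who want to see what “a reaction time difference” means concretely, here is a minimal sketch of IAT-style scoring (a simplification in the spirit of the Greenwald et al., 2003, D-score; the real algorithm adds error penalties, latency trimming, and practice blocks, and the function name and latencies below are mine):

```python
import statistics

def iat_d_score(congruent_ms, incongruent_ms):
    """Difference of mean latencies between the two critical blocks,
    divided by the pooled SD of all latencies. A positive score just
    means slower responses in the "incongruent" pairing; whether that
    reflects a negative association is the interpretive question."""
    pooled_sd = statistics.stdev(congruent_ms + incongruent_ms)
    return (statistics.mean(incongruent_ms)
            - statistics.mean(congruent_ms)) / pooled_sd

# Hypothetical latencies (ms): slower on elderly-good/young-bad trials.
score = iat_d_score([612, 648, 700, 655], [801, 844, 890, 835])
print(round(score, 2))  # positive, i.e., a nonzero "D-score"
```

Note that nothing in the computation itself identifies a zero point or licenses the label “negative association”; that label is an interpretation layered on top of a latency difference.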
Paul Bloom:
My definition of “implicit bias” doesn’t include ham and cheese because nobody would think of ham and cheese as a bias. I’m not sure why you’re confused about this definitional choice—shouldn’t a good definition match the usual meaning of a term? And I agree that associating French with wine, etc. isn’t a bias either. But I did also mention valence – “often with some sort of positive or negative feelings”. If I took out “often” (which I probably should), would we be on the same page?
If we get a faster response to elderly-bad/young-good than to elderly-good/young-bad, there has to be a reason for this. I think it’s because we associate elderly with bad and young with good. You disagree, which is fine, but what’s your alternative?
Lee Jussim:
No, that’s not really the point. The point is that the IAT is so filled with measurement artifacts that a simple interpretation of “reaction time difference” as “association” is not necessarily justified. See Fiedler et al., 2006; Blanton et al., 2015 (Towards a meaningful metric...); Sherman’s various papers on The Quad Model; Machery’s work on IAT anomalies or Gawronski et al., 2022, Psychological Inquiry, titled “Reflections on the difference between implicit bias and bias on implicit measures.” Great exchange there. Quoting Gawronski et al.: “Our rejection of BIM [Bias on implicit measures] as an indicator of unconscious biases raises the question of whether implicit measures still have any value for research on social biases. Some commentators seemed rather skeptical about that, noting that the research program on BIM has lost considerable momentum over the last years—partly due to unresolved debates about the predictive validity of BIM and meta-analytic evidence questioning the presumed causal role of BIM in discriminatory behavior.” And also: “Expanding on the debate about the meaning of the term implicit, we discourage using the term implicit in reference to bias. Use of the term implicit is just too flexible and inconsistent to ensure conceptual precision.”
Or one stop shop: https://osf.io/74whk/, >40 sources critical of the IAT in particular or the concept of implicit bias in general.
Paul Bloom:
Those are useful citations and quotes, Lee, on many topics. And I agree with a lot of them, such as about the lack of predictive validity of the IAT.
But I was hoping you could answer my question. You said that I was wrong when I said that a faster response to elderly-bad/young-good is because we associate elderly with bad and young with good. You repeat this here, saying that my conclusion about associations is “not justified.” OK, good, so how do YOU explain the effect? (I’m not playing gotcha – I’m genuinely interested in what the best alternative is, and I know you’ve given these issues a lot of thought.)
Lee Jussim:
“How do you explain the effect?” is exactly the right question, Paul. The answer is ... no one knows, because the effect may reflect an association, but it may also reflect so many other artifacts and irrelevancies that its interpretation is not knowable with anything approaching scientific certainty. Accordingly, Corneille (in a commentary on the Gawronski et al article) recommends abandoning the concept entirely.
Another commentary referred to work with the IAT as a “degenerating” line of research. The idea is that there is a dawning recognition that there are so many unknowns about what the IAT captures that much of the work now focuses not on “implicit bias” (whatever that means) but on figuring out what in tarnation the IAT actually does measure.
“…the implicit measurement approach to implicit bias has suffered from significant paradigm degeneration (Lakatos, 1970). To maintain itself, auxiliary assumptions such as multiple moderators in conjunction lead to respectable predictive validity correlations (Kurdi et al., 2019), social desirability bias on laboratory behavioral measures (Tierney et al., 2020), the cumulative consequences of minute discriminatory biases (Greenwald et al., 2015; Hardy et al., 2022), mismatched and suboptimal behavioral outcomes in studies examining causality (Gawronski et al., this issue), and aggregate-level crowd biases (Payne et al., 2017) must be invoked. Some or even all these defenses may hold empirically. And yet this heavily modified theoretical structure would still represent a major retreat from earlier models in which pervasive individual-level implicit prejudices and stereotypes constitute major causal contributors to societal inequities. Thus, we believe that Gawronski et al. (this issue) underestimate the seriousness of the empirical challenges to the “bias on implicit measures” (BIM) paradigm, as well as the need for major reforms including (but not limited to) those they advocate.”
Source: “Avoiding bias in the search for implicit bias”: https://www.tandfonline.com/doi/epdf/10.1080/1047840X.2022.2106762?needAccess=true&role=button
I note, however, that Gawronski et al.’s reply to the commentaries acknowledged even more limitations to the IAT than did their target article, so they were reasonable and responsive to this sort of criticism.
With “misinformation” in the air the last few years, I like to turn the academic spotlight typically used for some variation of “look how idiotic people are” (and lord knows, people can be deluded fools) on academics themselves. This is from a talk on “Academic Misinformation”, and it starts out by pointing out that “misinformation” is (as far as I can tell) not very different from “falsehoods” or “inaccuracies” -- so it’s mostly old wine in a new bottle:
Different Types of Falsehoods
1. Known to be Factually Untrue. “It is brighter at night than during the day.”
2. Unjustified by the evidence. “There was once life on Mars.”
3. Misleading by presentation of incomplete evidence, when the complete picture shows something different. Almost anything about which there are (social) scientific controversies.
Most claims about the IAT and implicit bias do not fall under 1. They do fall under 2 and 3.
Solution? Let’s wait for another 30 years or so before making bold claims about implicit bias, including (especially?) claims based on the IAT.
(I then continued with this second comment. It may seem tangential in that it is not on implicit bias; instead, it briefly reviews how, after 75 years of research in a certain area of perception involving unconscious processes, basic issues are still unresolved. I hold this up as a model for how to consider implicit bias research—LJ).
Historical Digression with Perhaps Some Parallel
1940s: New Look in Perception! People’s motivations influence basic perception!
1950s: Withering criticisms revealing artifacts, biases, and unruled out (and likely) alternatives. (F. Allport, Prentice, and others).
1980s: Automaticity! Social Priming!
2000s: Social priming shown to be filled with p-hacked and unreplicable studies.
2000s: People’s motivations do influence perception after all! We have better methods! (Balcetis, Cole, Banerjee).
2016: Firestone & Scholl. Rinse & repeat (see 1950s withering criticisms, updated and more sophisticated but essentially applied in the same way).
BUT:
2021: Cole & Balcetis (in Advances in Experimental Social Psychology), tl;dr, paraphrased conclusion not actual quote: “we took the criticisms very seriously, have performed slews of studies that address and account for them, and there is now clear evidence for motivation influencing basic perception.”
I don’t know what I make of all this, and I look forward to this work being openly criticized by skeptics. But at least it’s a step forward.
So, if you start in 1947 and end up in 2021, that is 74 years and the evidence is definitely better, but we won’t know how definitive it is for a while yet.
That is, to me, an excellent model for how to treat most claims based on the IAT or about “implicit bias.”
Paul Bloom:
Thanks for the thoughts, Lee. If I do a deeper dive into this topic, these references and your arguments are going to be very useful.
Footnotes
1. Paul’s suggestions for dealing with implicit bias. Whether they do anything about implicit bias is a different question than whether they are good or bad ideas on their merits. I quite liked several of his suggestions, but my liking of his suggestions had nothing to do with implicit bias.
2. I would think of ham and cheese as a bias.
What a refreshing example of how two professionals with differing viewpoints should interact with each other.