Question: Is There a Publication Bias Against "Non-Significant" Racism Findings?
While there is much to discuss in Jussim's 5 Posts of the year, I want to address a specific issue that I sensed in these posts: are psychology journals averse to publishing articles that demonstrate a lack of racist attitudes? I ask because I have direct experience with this.
My undergraduate honors student, an Asian American, wanted to explore anti-Asian hate in the U.S. following his own distressing experience of reading about such incidents at the start of the pandemic (e.g., the massage-business killings in Georgia). For his thesis, he conducted a replication of a study by Cantone et al. (2019) titled "Sounding guilty: How accent bias affects juror judgments of culpability." DOI: 10.1080/15377938.2019.1623963
The original study used a mock jury paradigm to investigate how race and accent influence judgments of wrongdoing; it found a significant "racist effect," where Black and Hispanic defendants were judged as more culpable than white defendants.
My student adapted this methodology by substituting Asian defendants for the Hispanic ones (keeping Black) and manipulating their photos and accents. This yielded a 2 × 3 design: accent (present, absent) × race (Asian, Black, White). Contrary to the original findings, the results showed no significant effects of race or accent across most groups, with one notable exception: participants judged defendants with a "white redneck" accent as more culpable than the other groups.
We concluded that the Boston University (BU) undergraduate sample did not exhibit detectable racist attitudes in this context. In our discussion, we explored several potential reasons for this, including social desirability bias and high political awareness. Perhaps most importantly, we noted the specific demographics of the study: Massachusetts is a deeply liberal Blue State, and Boston University has a large Asian student population. We titled our manuscript: “Race and Accent Bias: A Mock Jury Investigation in a Blue State.”
The BU Psychology Department awarded this study Best Psychology Honors Thesis for the year. Following graduation, my student spent a post-baccalaureate year running an additional 150 participants to ensure we met the high sample size requirements of modern psychological research, bringing the total sample size to 212.
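As a rough sanity check on that sample size, here is a minimal power sketch for a 2 × 3 between-subjects design with N = 212. This is not from the manuscript: the "medium" effect size (Cohen's f = 0.25) is a conventional benchmark assumed for illustration, not a value taken from Cantone et al., and the noncentral-F calculation is just one standard way to do it.

```python
# Post-hoc power sketch for the omnibus test in a 2 x 3 between-subjects
# design (accent: present/absent x race: Asian/Black/White = 6 cells),
# using the noncentral F distribution.
from scipy.stats import f as f_dist, ncf

n_total = 212                 # final sample size reported above
k_groups = 6                  # 2 accent levels x 3 race categories
cohens_f = 0.25               # ASSUMED "medium" effect size (Cohen's f)
alpha = 0.05

df1 = k_groups - 1            # between-groups degrees of freedom
df2 = n_total - k_groups      # within-groups degrees of freedom
nc = n_total * cohens_f ** 2  # noncentrality parameter

f_crit = f_dist.ppf(1 - alpha, df1, df2)
power = 1 - ncf.cdf(f_crit, df1, df2, nc)
print(f"Power to detect f = {cohens_f} with N = {n_total}: {power:.2f}")
```

Under these assumptions the design lands near the conventional 0.80 power target, which supports the point that the null results are unlikely to be a simple matter of an underpowered sample.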
We submitted the manuscript to the same journal that published the original Cantone study. We were careful not to frame our work as a "failure to replicate," but rather as a nuanced look at how different populations yield different results. Given that the original study was conducted in Nebraska, we wanted to highlight that these specific racist attitudes—at least when measured via mock jury paradigms—may not be uniform across every region of the U.S.
The reviewers were harsh and the paper was rejected outright. The criticisms focused on our methodology and write-up. Yet we had followed the Cantone et al. method almost exactly.
I agree the reviews pointed out shortcomings in our write-up; considerable rewriting was needed, and the manuscript retained some elements that made it feel like an undergraduate honors thesis. But I would expect a revise-and-resubmit when one has followed a procedure previously published in that journal and found a noteworthy result relevant to social life. The outright rejection and the rather mean tone of the reviews left me wondering: was our work rejected because it failed to confirm the dominant academic narrative that racism is always present? That is, do journals not want to publish "no detectable racism here"? Also, we were clear that we **had expected to find racism**.
[BTW, my student is doing fine, pursuing a master's at Columbia University in a different area.]
1. Catherine, you have been one of my fav commenters. And I say that even though sometimes we have disagreed, sometimes sharply. But your arguments have always been evidence-based and tightly logical.
2. I say that because this sounds like two potential guest posts here. This is a standing offer. GP1: A writeup of the study, blog style (there are several like this here already, including Who Agrees with Hitler) as models. GP2: A second essay on the treatment by reviewers.
3. There is ample evidence that it is hard to publish work that finds no bias. The single best example I know is this: Zigerell found a slew of NSF-funded studies finding no racial bias that were never published (whether the researchers did not try, or tried and failed, is unknown): https://journals.sagepub.com/doi/pdf/10.1177/2053168017753862
Abstract: "This study reports results from a new analysis of 17 survey experiment studies that permitted assessment of racial discrimination, drawn from the archives of the Time-sharing Experiments for the Social Sciences. For White participants (n = 10,435), pooled results did not detect a net discrimination for or against White targets, but, for Black participants (n = 2,781), pooled results indicated the presence of a small-to-moderate net discrimination in favor of Black targets; inferences were the same for the subset of studies that had a political candidate target and the subset of studies that had a worker or job applicant target. These results have implications for understanding racial discrimination in the United States, and, given that some of the studies have never been fully reported on in a journal or academic book, the results also suggest the need for preregistration to reduce or eliminate publication bias in racial discrimination studies."
4. It is possible, but it's hard. This paper, now in press, finds no discrimination. But it is extraordinarily thorough, pre-registered (possibly a registered report; I am not sure about that), and also an adversarial collaboration on a very controversial set of issues. It is in my queue for a stand-alone Unsafe Science post: https://osf.io/download/5ns3a/
5. In 1987, this was my job talk study. It found pro-Black bias in a simulated hiring situation. BUT, and this is relevant to your study, we found HUGE accent effects and also HUGE social-class effects, and we argued (correctly, I think) that if you manipulate all of that orthogonally, our results hold, but in the real world those factors are not orthogonal, and they help explain disproportionately negative evaluations of Black applicants. Regardless of whether that is "right" or not, and not just from this one paper, I think you can publish that sort of work if you give it a social justice twist: https://sites.rutgers.edu/lee-jussim/wp-content/uploads/sites/135/2019/05/Jussim-et-al-1987-JPSP-Three-Theories.pdf
Thanks for sharing all these wonderful articles. I do have to say this... I love Fiddler on the Roof, and the song containing the debate over whether it was a horse or a mule is one of my favorites. The fact that this created a massive controversy did, in its own way, answer the question. Clearly, if academia were the party in question, it would have to be a horse, as only in academia could a discussion of innocent, hard-working mules reveal so many horses' a..ses!
I used to blog for Psychology Today as well, and I stopped for the same reason you did. I was tired of their politics and censorship.
I love your stuff and I always look forward to your next post. Happy 2026, my friend!
Actually, there is another possibility, NOT mutually exclusive with the above. Do the revisions to improve the writing and submit it to JOIBS.
Agree with this option, and thanks for suggesting Journal of Open Inquiry in the Behavioral Sciences (JOIBS).