10 Comments
Dr Lawrence Patihis, PhD:

Excellent work by your team, Lee. Well done to Nathan Honeycutt for leading it.

Mforti:

Take this comment with a grain of salt, as I am not an academic. Do participants know upfront that they are participating in a study, and if so, does this affect the results? For example, the response rates seem low: 23% for M-R and 10% for yours. So could there be any self-selection happening, i.e., are people with an ideological bent / activists more likely to respond than non-activists? And related to this, if responders know the goal of the study, do they intentionally bend their responses to affect the results? This could also be accomplished by carefully selecting whom to send it to.

Lee Jussim:

In order:

1. Yes, they knew they were in a study

2. Yes, it could affect the results, but the best way to get a bead on this is by comparing it to studies where people did not know they were in a study (or, in fact, just went about their jobs and someone studied what happened later). As it turns out, biases favoring women have been pretty common for a while in academia. As far back as the early 2000s, there were biases favoring women in many science fields. For example, in biology, 26% of bio faculty job applicants were women, but 34% of those offered jobs were women. More data showing similar patterns in other fields can be found in a 2010 report by the National Academy of Sciences. The table from which the data above come appears on p. 7. It's a book, but the Google Books preview shows it:

https://books.google.com/books?hl=en&lr=&id=riF1AgAAQBAJ&oi=fnd&pg=PR1&ots=WA1hEW-SJU&sig=WmzFmH2KolaXRnPFSPAgJRHjwRQ#v=onepage&q&f=false

3. Are activists more likely to respond? It's possible, but I doubt it. In both their and our studies, the demographic distribution of the samples corresponded closely with the demographics of the professoriate nationally. That stops short of being an ironclad guarantee, but it strongly points against systematic self-selection biases.

4. Did respondents know the goal of the study? Probably sometimes, but not very often. Each evaluated only one applicant, so it was not apparent that we were studying sex bias. We did, however, probe for suspicions about the purpose of the study with open-ended questions. Coders then rated the responses for suspicion, and we used those ratings as predictors of bias. Suspicion was statistically unrelated to bias in 10 of 12 tests and weakly related in the remaining 2, and, most important, the pattern of favoring women occurred regardless of whether respondents were high or low in suspicion. If there was anything going on there, it was pretty weak.

Mforti:

Thanks. Interesting that you note the presence of biases favouring women as far back as the early 2000s. It was about 2003 that I started noticing this in the corporate world; by 2006 it was in full swing, and it has gained significant momentum since then. Only in the last couple of years have I noticed pushback.

MB:

Is it possible that the results from 2012 did not replicate today because there might actually have been a bias in favor of men back then, and now there is one in favor of women?

Lee Jussim:

It's possible. Pages 54-55 of our paper, which is linked in the post, discuss six possible explanations for the differences in results, and that is one of them. We do not state this in the paper because there is no place in a scientific paper for arguments from the gut. But this is a comment on a blog post. My gut says M-R was probably a false positive, but there is no way to know for sure.

Edgy Ideas:

So annoying, thanks Lee! I hope that Williams & Ceci still stands up better than M&R.

I know you looked at both in the past.

Often, one would see only M&R quoted in articles, to support a particular point of view.

When are we going to talk about the Citation Crisis in Soc Psych?

Those who, either intentionally or inadvertently, keep citing shoddy or non-replicating work more often than replicating studies that do not produce the "correct" result...

PharmHand:

Nice!!

Matt Burgess:

This looks like painstaking work, Lee, but this type of replication (of any widely cited study that influences policy) is also a great service to the profession.

Lee Jussim:

Thanks. It was indeed. Hell, it's been over 2 weeks since my prior post, because it took me about 3x as long just to write this one as a regular post. I kept wanting to put in more stuff and kept realizing it would be too long, which is how I ended up deciding to make it a series.
