Excellent article. It brings to light, for me, what a perfectly legitimate school of thought can become when it acquires dominance in the social discourse for purposes other than the specifically scientific.
It is not the first time -- popular psychology and psychological fads have been deleterious for decades. But it seems to be the first time that (and not in psychology alone) something goes out into "popular" acceptance, gets trimmed down and stultified, becomes a craze, makes some people a lot of money (or power, or influence, or what have you), and then COMES BACK into the halls of scholarship crowned as dogma.
Interesting times, an old Chinese friend of mine would say.
“The question is rather whether such beliefs cause unfair or illegal behavior toward others based on their group membership. In fact they do. Kteily et al. (2011) found that SDO predicted negative outgroup affect among White college students four years later. Nonwhites were not tested. In short, beliefs like Social Dominance Orientation are more prevalent in socially dominant groups and hence contribute to the maintenance of social dominance.”
Making a claim of causality—“In fact, they do.”—from test results is dicey. Making the claim with the specific statement that a control group wasn’t tested totally invalidates a conclusion of causality. In this case, “Nonwhites were not tested.” Pretty explicit. That aspect of the “study” also highlights the observers’ biases.
You know, every time I see scare quotes in a context like this, I cringe. The scare quotes imply, without explicitly declaring it, that the study in question is not a /real/ study, not scientifically sound. And that does nothing to serve the purpose of questioning the validity of the study's own conclusions. A study can be perfectly sound and arrive at incomplete conclusions, or at conclusions that can be used for unsavoury purposes (The Bell Curve is a textbook example of this). Dismissing science on these grounds is exactly what the notorious Nature manifesto stands for.
Now, I am no expert on social psychology, but I have a solid academic education from an institution and era (Oxford, the 70s) still uninfected by today's insanity. I have a degree in history and one in philosophy, and am pretty well grounded in epistemology because of personal interest. I can read a scientific paper in the social sciences and understand it, if not in the finer nuances. And (bless the age of the internet, for a change) Kteily et al. (2011) is available online, so I went and read it.
Certainly Jussim would do this better than I, but I will try: you seem to draw the wrong conclusions from the study quoted, and I suspect you have not read it.
The "In fact they do" statement is completely legitimate, within the caveats of any scientific claim (until disproof). The authors did not /test/ anything themselves. They studied the results of SDO tests taken by undergraduates at UCLA, who were then retested several years later.
There was no need of a control group for the purpose of the study, whose explicitly stated aim was to "...more comprehensively explore the causal status of SDO as an antecedent to intergroup prejudice and discrimination. The ideological asymmetry hypothesis within social dominance theory argues that the relationship between SDO and intergroup attitudes and behaviors will typically be stronger among dominant rather than subordinate groups (e.g., Fang et al., 1998, Peña & Sidanius, 2002). Thus, we used only White participants..."
And the study shows that in fact, high SDO values corresponded to negative outgroup affect among the individuals tested. You cannot dismiss the study on that account.
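To make the design concrete (for readers who, like me, are not statisticians): a longitudinal claim of this kind is typically backed by a regression in which the later measure of negative outgroup affect is predicted from the earlier SDO score, ideally controlling for the earlier measure of the same affect. Here is a minimal sketch in Python with entirely invented data and effect sizes -- it is not the authors' analysis, only the general shape of one.

```python
# Illustrative sketch only: invented data, not Kteily et al.'s analysis.
# Time-1 SDO is used to predict Time-2 negative outgroup affect while
# controlling for Time-1 affect, which is what makes "predicted ...
# years later" stronger than a one-shot cross-sectional correlation.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 400                                              # hypothetical sample size

sdo_t1    = rng.normal(0, 1, n)                      # SDO score at Time 1
affect_t1 = 0.3 * sdo_t1 + rng.normal(0, 1, n)       # negative outgroup affect, Time 1
affect_t2 = 0.4 * sdo_t1 + 0.5 * affect_t1 + rng.normal(0, 1, n)  # affect, Time 2

# Regress Time-2 affect on Time-1 SDO, controlling for Time-1 affect.
X = sm.add_constant(np.column_stack([sdo_t1, affect_t1]))
fit = sm.OLS(affect_t2, X).fit()
print(fit.params)  # a positive, reliable SDO coefficient = "SDO predicted affect later"
```

Controlling for the earlier affect measure is what lets such a design speak to change over time rather than to a static association.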
What the study does not show evidence for, at least to my eyes, is the second part of the theory that the authors set out to validate: namely, "that the relationship between SDO and intergroup attitudes and behaviors will typically be stronger among dominant rather than subordinate groups".
For that, they would have needed a similar study of subordinate groups under the same conditions, showing that in subordinate groups high SDO values correspond to negative outgroup affect much more weakly, or not at all.
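If it helps, the comparison I have in mind would normally be run as a moderation test: pool a dominant-group sample and a subordinate-group sample and ask whether the SDO slope differs between them. Another sketch, again with invented data and an assumed difference in slopes, just to show the shape of the test (not anything the authors actually ran):

```python
# Illustrative moderation test with invented data.
# With group coded 1 = dominant, 0 = subordinate, a positive SDO-by-group
# interaction would mean the SDO -> negative-affect slope is stronger in
# the dominant group -- which is what the asymmetry hypothesis predicts.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 300                                   # hypothetical size of each sample

def simulate(slope):
    sdo = rng.normal(0, 1, n)
    affect = slope * sdo + rng.normal(0, 1, n)
    return sdo, affect

sdo_dom, aff_dom = simulate(slope=0.5)    # dominant group: strong link (assumed)
sdo_sub, aff_sub = simulate(slope=0.1)    # subordinate group: weak link (assumed)

sdo    = np.concatenate([sdo_dom, sdo_sub])
affect = np.concatenate([aff_dom, aff_sub])
group  = np.concatenate([np.ones(n), np.zeros(n)])   # 1 = dominant, 0 = subordinate

X = sm.add_constant(np.column_stack([sdo, group, sdo * group]))
fit = sm.OLS(affect, X).fit()
print(fit.params, fit.pvalues)            # last coefficient = SDO x group interaction
```

Without the subordinate-group half of that comparison, the asymmetry claim is simply untested.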
And to consider it an established fact (within the limits of science), the results should be replicated in a substantial number of other studies.
That the social sciences unfortunately suffer from a paucity of scientific rigour in comparison with the hard sciences (due to a number of factors, among them the nature of their subjects and the history of their disciplines) is not a reason to dismiss them out of hand.
Thanks for the thoughtful response. Sorry to have made you cringe.
As to the excuse for the study I mentioned: in the absence of a control group, there's no science to be had. There wasn't one, and the way it was avoided, namely by excluding nonwhite participants, shows that the experimenters only wanted to cherry-pick something like data, but not really data, to attempt to make a point. They made a point, but not a valid one. What they did show was that they went in with a bias so strong that their conclusion could not be validated using the standard, accepted scientific method.
I, too, have a solid academic background: a bachelor's from MIT in 1969, in organic chemistry, and an MD from Tulane in 1973. I know what real science looks like. And science this ain't.
I acknowledge your critique but disagree with the conclusion. Not all scientific research uses control groups, although they are most common in medicine and, frequently, in the social sciences (there are, from what I have seen, many methods of falsifying or verifying hypotheses and many controls that can be applied, and the use of data collected by others is quite common, at least in the biological sciences). Besides, a study does not need to satisfy a claim of completeness to be valid as a study.
As a layman in psychology, I am satisfied that the study quoted is accepted as valid -- in the respect that it shows a correlation between SDO and negative outgroup affect (and truly, it seems rather obvious that beliefs like “Superior groups should dominate inferior groups” or “We should not push for group equality” would result in negative outgroup affect) -- sorry for the obscenely long parenthetical -- accepted as valid by Professor Krueger, and accepted as such by peer review eleven years ago, at a time when today's biases were not so strong. That is enough for me.
What I see in it is that it finds a correlation between SDO and negative outgroup affect in a socially dominant group (in this case a sample of White university students). This is what Krueger says: the correlation is there (and I doubt that Kteily et al. (2011) is the only study in existence to support such a finding).
But what is not there is evidence that the same correlation does NOT exist, or is MUCH weaker, in socially subordinate groups.
Scientific research can be faulty and limited without being worthless or not-science.
Judging some research not to be science because of this is a position that, brought to its logical conclusion, would lead us to say that no human or social science can be called science.
That's the position of the latest guest of Professor Jussim, from what I understand of his satire. But it is a position I find deeply unhelpful, as it leads to the dismissal of the human and social sciences altogether. Which is not, in my opinion, a desirable outcome.
There’s a lot to unpack here, but let me start with this:
“Scientific research can be faulty and limited without being worthless or not-science.”
The ultimate validity of scientific research rests on its consistency with reality. The measures of that consistency are 1) reproducibility and 2) the ability to make correct and accurate predictions. First, to the consistency-with-reality issue. If someone brings a prejudice to the table, he can look at a study and dismiss it as invalid because it doesn’t comport with what he believes a priori to be correct. Everyone knows that Maldonians are genetically incapable of telling the truth; therefore any research that shows a Maldonian telling the truth must perforce be incorrect. This is not science; it’s misinformed bias. Black swans occur. When they do, they upset the status quo of science. Einstein and, before him, Newton were such swans.
Once a belief system begins, bogus research can support it for a while. When I was an undergrad, a number of papers came out about something called “polywater.” Polywater was the answer to why Russian winter wheat didn’t freeze and snap off when the Siberian winters got cold enough to freeze all other water-containing things solid and brittle. Like Kurt Vonnegut’s ice-nine [Cat’s Cradle], polywater was supposedly yet another phase of water, just like ice, liquid, and steam. It just didn’t freeze at the expected temperature.
I tried reading a number of the papers written on it, but I couldn’t follow the logic. It seemed that minuscule differences in behavior were the keys to understanding the properties of this stuff. There had been a flurry of papers describing new effects. Great, subtle science? Nope: suddenly the papers just…stopped. Nothing more was written about polywater. Gone. Guess what? It likely hadn’t been a hoax, but it wasn’t a real effect. It couldn’t be reproduced. And it certainly didn’t allow accurate predictions.
Around 1982 or ‘83 a diagnostic technique called IVDSA (intravenous digital subtraction angiography) came along. It was the first radiographic use of digital imaging. You would start an IV, inject contrast material, and then scan the region of interest—usually the neck—with the device. Everyone had to have one. It was revolutionary. The big meetings following the general release of the first commercial products saw scientific paper after paper of excellent results, tricks and tips for getting them, and rosy predictions that catheter angiography would go the way of the dinosaurs.
Most of us who tried this—and plenty of us did—couldn’t reproduce the results. Meetings the following year saw paper after paper by the same authors saying that they couldn’t reproduce their earlier results and that the procedure was useless. Most of us already knew this from our own experience. And suddenly IVDSA was as dead as a doornail.
What went wrong in these two cases? The polywater experiments lacked meaningful controls. The IVDSA studies cherry-picked good images but ignored the much more numerous bad ones as outliers. In each case, what got reported were therefore anecdotes. Anecdote is not the singular of data. Neither of these was science, even though the polywater literature had been peer reviewed. I’m not sure whether the IVDSA literature was, but much of it had been published.
Correlations do not prove causality. They are, at best, circumstantial evidence. A sine qua non to demonstrate that a correlated variable causes a hypothesized effect would be a proof that the two aren’t epiphenomenal, i.e., that they aren’t both effects of the same cause. None of the studies cited do this. They are not science. They are opinions. Science doesn’t care about anyone’s feelings and opinions. It only cares about what is objectively true regardless of the consequences.
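To make the epiphenomenal point concrete, here is a toy simulation in Python (invented numbers, nothing to do with the studies under discussion): when two variables are both driven by a common cause, they correlate even though neither causes the other, and the correlation disappears once the common cause is held constant.

```python
# Toy illustration of two "epiphenomenal" variables: x and y are both
# effects of a common cause z, so they correlate even though neither
# causes the other. Partialling out z makes the correlation vanish.
import numpy as np

rng = np.random.default_rng(42)
n = 10_000

z = rng.normal(0, 1, n)                  # the common cause
x = 0.8 * z + rng.normal(0, 1, n)        # effect 1 of z
y = 0.8 * z + rng.normal(0, 1, n)        # effect 2 of z (x does not cause y)

print(np.corrcoef(x, y)[0, 1])           # sizeable raw correlation (about 0.4)

# Partial correlation: correlate the residuals of x and y after removing z.
def residual(v, z):
    slope = np.cov(v, z, ddof=0)[0, 1] / np.var(z)
    return v - slope * z

print(np.corrcoef(residual(x, z), residual(y, z))[0, 1])   # near zero
```

Ruling out that sort of common cause is the sine qua non I’m talking about.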
I cannot overstate how painful this can be. Few things are as demoralizing as seeing a beautiful, well thought out theory beaten to death by a gang of unruly facts. I know this first-hand: it’s happened to any number of great ideas of my own. They fell to scientific method. Which I still swear by. And occasionally at.
I completely get what you say. That's the way of STEM sciences (with exceptions at the S), and I am well ready to swear by the strictest method as well, for those.
But theories fall to facts and evidence, over a process of testing. And having now read more about the subject that started our conversation, I have not found any sign that the claimed causal link between high-SDO beliefs and "unfair or illegal behaviour toward others based on their group membership" has been disproved by evidence from other research (please let me know if you find any; I'd be glad to see it). Nor have I found much credible evidence to support the belief that this happens only with members of dominant groups, or that it is due exclusively to the attitudes identified by SDO tests and not also to other social, economic and historical factors that may even be the main cause of such attitudes (which would indicate a need to address those factors instead of focusing on the immorality of the attitudes).
And the problem is also this -- although it is good to require more and more scientific rigour from the human sciences, they will never completely satisfy the positivist requirements of a strict scientific method. Neither, for that matter, do paleontology, astronomy, or evolutionary biology, yet these are still considered science. Hell, Darwin built his theory on observations that made sense, but without any strict falsification protocol, because the things he studied could not be repeated in an experiment: the experiment would take millions of years. He guessed, on the basis of patterns that he recognised through logic and record-keeping, yet the theory of evolution still holds, not greatly changed; and the reason Lamarck's theory has less credit is a matter of logic and circumstantial evidence, since we cannot experiment with evolution. Nor do the human sciences allow for true experimentation, which would be impossibly unethical.
From the point of view of a strict positivist scientific method, the human sciences are not sciences, period. And much research in a number of "harder" sciences would fail to qualify as well. What is needed to make a discipline a science is still very much debated in the scholarly world, whether we like it or not. For example, in the UK we classify the human sciences with the humanities, but that does not remove the problem. There is no unanimous consensus about what characteristics a discipline must display to be called a science... the word itself is rife with multiple meanings, as it simply denotes "knowledge".
Fact is, these are all human endeavours. They suffer both from human fallibility and from human bias -- at every level -- complicated by the human tendency, equally beneficial and detrimental according to circumstance, to create immense theoretical constructions to explain the world. It seems to be a result of the evolutionary path of Homo sapiens to desire, desperately, to control its environment as much as possible in order to feel safe: and a great tool for this is the predictive ability of the sciences (when it works). For most of our research is not done per se, for the sake of knowing something, but in order to direct action towards a better state of things... in other words, to direct policies.
Which is where a large part of the ailments of the sciences comes from, together with a great many positive incentives. And it is the reason why it is particularly essential that research be kept in an environment of free circulation of ideas and debate.
But I digress. My only point, Joe Horton, is that if we stand on your very strict positivist definition of scientific method, no research in the human sciences deserves the name of science, even when it delivers good results. And a lot of other scientific research does not either: take climate science, for example, setting aside the political debates that arise because its findings necessitate policies. It works on models and the observation of patterns; it cannot run or reproduce experiments; and yet it makes predictions, many of which turn out to be true. Is it a science? It uses scientific methodology and logic, and fulfils most of the requirements. But all of them? I doubt it.
I will not reject findings from a study just because it does not satisfy a purist concept of scientific method, if the findings make sense in the other respects. I will want to question them further. One study is not the end of all research on a subject, but it is a contribution to it.
Anyway, to end. You quoted Vonnegut. Anybody who quotes Vonnegut is my friend.
Another good one. I didn't know this. Thanks.
Thanks for posting this. I appreciate it when Unsafe Science provides me with meaningful content.