How Institutional Review Boards can be (and are) Weaponized Against Academic Freedom
Jessica Hehman & Catherine Salmon
This is a guest post by Jessica Hehman and Catherine Salmon.
Jessica Hehman is a psychology professor at the University of Redlands in southern California. Her scholarship has addressed ageism, sex differences, sibling conflict and cooperation; the relationship between prenatal testosterone exposure (2D:4D), tomboyism, and temperament; theory of mind across the life span; and factors that influence perceptions of sexual harassment and sexual images. She also has experience on both sides of the IRB: as a researcher proposing her own studies and as a reviewer of research proposed by others. Jessica served on the University of Redlands' IRB for six years and was chair of the IRB for three of those years.
Catherine Salmon is also a professor at the University of Redlands and is the director of the Human-Animal Studies program. She is the co-author (with Donald Symons) of Warrior Lovers: Erotic fiction, evolution and female sexuality and The Secret Power of Middle Children (with Katrin Schumann). Her primary research interests include parental investment/sibling conflict, male and female sexuality (particularly as expressed in pornography and other erotic genres), and human-animal interactions. She is a founding member of SOIBS, the Society for Open Inquiry in Behavioral Science, and she chaired her university's IRB for ten years. If you're interested in hearing about some of her work (some in collaboration with Jessica Hehman), check out her visit to Chris Williamson's Modern Wisdom podcast.
As previous chairs of our institution's Institutional Review Board (IRB), we are acutely aware of the mission of the IRB: to protect the welfare of participants in human subjects research. We teach courses with significant components on research ethics and emphasize the past ethical failures that led to the oversight and regulations governing the research process at universities today. However, recent changes at our own institution, as well as concerns we have been hearing from colleagues at a variety of other universities and colleges, have highlighted ways that IRBs can be (and in some cases already have been) used to infringe on the academic freedom of faculty and their students.
Today, IRBs typically require that researchers complete some form of ethics training before conducting research with human subjects, to ensure they know how to apply the three principles of the Belmont Report (respect for persons, beneficence, and justice) in their research. Ideally, then, an IRB provides an objective, neutral, multiple-perspective review of a study's ethics. An effective IRB is one that does not approve research that is scientifically unjustified, that violates participants' rights, or that poses an unreasonable risk to participants. IRBs should not, however, prevent ethically sound research from being conducted, even when that research investigates a controversial question. We will now discuss occasions when IRBs make decisions based on reasons other than the protection of human subjects.
Ways in which IRBs can potentially infringe on academic freedom
There are a number of ways institutional review boards can infringe, and in the past have infringed, on the academic freedom of researchers. These include: stipulating the types of questions asked or requiring changes to questions, stipulating methodology or changes in methodology, delaying the review process, and rejecting proposals outright.
For example, Scott Atran is an American-French cultural anthropologist, Emeritus Director of Research in Anthropology at the Centre national de la recherche scientifique in Paris, research professor at the University of Michigan, and cofounder of the Centre for the Resolution of Intractable Conflict at Oxford University. He is well known for his studies and publications on terrorism, violence, and religion, and has done fieldwork with terrorists and Islamic fundamentalists (Atran, 2003, 2020, 2021). Yet he has written of his difficulty getting the IRB at the University of Michigan to approve studies (Atran, 2007). In one case, the proposal involved interviewing failed suicide bombers in an effort to understand the motivations behind their actions. His summary of the university IRB's reasoning is as follows:
“You can't interview failed suicide bombers or their sponsors who are prisoners, because prisoners cannot, in principle, freely give informed consent. Thus, the IRB stipulated that parole boards must be kept abreast of everything and lawyers had to come along into the jail cells of the Bali bombers to verify that questions weren't potentially prejudicial to the prisoners. But it's nigh impossible to do serious anthropology or psychology with a lawyer present, and there are no parole boards in Indonesian military prisons. Nor is there any reasonable likelihood that this sort of research will worsen the condition of convicted mass killers. Even if the IRBs conditions could be met, UM's told me that it considers research with prisoners like these to be "in principle" incompatible with the rights of human subjects and therefore "never" likely to be approved. This, despite the fact that the prisoners and the organizations that originally sponsored their actions are more than willing to give their consent and eager to get their ideas out.”
In this sort of case, valuable research (with consenting participants and gatekeepers) is being blocked by IRB rejection.
There are also cases where research is approved but with stipulations that change the methodology. One IRB was reported to have restricted a white researcher interested in career aspirations to interviewing only white participants, on the grounds that the topic could be upsetting to black participants (Sieber et al., 2002). That would fundamentally change the goal of a study asking whether certain factors (including race or ethnicity as well as gender) influence career aspirations. It also raises the question of whether the same methodological demand would have been made of a black researcher interested in the same topic. Would they have been told to interview only black participants?
In other cases, the IRB tactic is simply to force long delays in the application and review process (multiple rounds of review, requests for revisions, slow responses from the chair or board) to discourage the applicant, in the hope that they will give up on their topic, particularly one with political or ideological implications. Graduate students are especially likely to do so: they have limited time in which to conduct and defend their research, and may turn to a less sensitive, and perhaps less important, topic. Similar problems can hurt early-career professors, who need a certain number of publications pre-tenure and find their research productivity dramatically slowed. Better to switch to something a little more boring that fails to threaten or offend and get the project approved. A recent example comes from the State University of New York system, where a pre-tenure professor wanted to conduct a study on digit ratio (2D:4D). The chair of the IRB called the study awful and deceptive, claiming to know that the measures were being used to approximate penis size, and rejected it. The study had nothing to do with penis size, but the untenured faculty member was too afraid to challenge the IRB chair and gave up on that line of research.
Why do IRBs infringe on academic freedom?
Evidence to date suggests a number of reasons behind the infringement of academic freedom by institutional review boards. Some may be well-intentioned, while others are explicitly malicious or oppressive. We will examine the most common here: protecting participants (which can become overzealous), protecting the university (its reputation or legal exposure), ideology (on the part of individual members and/or the administrators who appoint and influence them), and internal social control.
IRB committees often seem to overestimate the risk of certain types of studies, particularly those that ask about sensitive topics such as sexual behavior or illegal drug use (Schrag, 2011), despite the fact that such studies are required to include consent forms informing participants of the nature of the questions they will be answering. This happens when reviewers project their own discomfort with a topic onto participants who may feel quite differently (Klein, 1999), and it persists despite studies indicating that participants are not harmed by taking part in research on sensitive topics, even ones touching on traumatic experiences (Fendrich et al., 2007; Sieber, 2008; Yeater et al., 2012). IRBs that object that merely being asked a question may be distressing are engaging in extreme paternalism: the participants are informed adults who can end their participation at any time or simply refuse to participate.
Another reason, beyond generalizing their own feelings to others, is that members of ethics review boards are often unfamiliar with the existing literature or the methodology of the disciplines they oversee. Chemists and art history professors, for example, reviewing behavioral science research in the absence of any IRB member familiar with such research all but guarantees a level of unfamiliarity that will impede the process. Committee members may end up assessing proposals “based on poor understanding of the methods involved, an overestimation of risks, and a reliance on personal experience rather than scholarly research” (Schrag, 2011). One of the case studies we discuss fits into this category.
Another of our case studies focuses on how administrators can influence IRBs in order to protect a university and its brand, but the protection of other organizations, including those that provide financial support, can also lead IRBs astray. For example, William Woods has conducted research on HIV prevention (Binson et al., 2001), but a University of California campus IRB blocked a funded study examining San Francisco bathhouses' compliance with public health regulations. Data would have been collected by entering the facilities and observing HIV prevention practices. The IRB interfered by demanding consent from the bathhouse managers (who would presumably grant it only if they already followed the regulations, or would change their procedures before allowing entry), compromising the integrity of the data collection (Katz, 2007). In a number of cases, IRBs may prioritize protecting the host institution's reputation over academic freedom or even the protection of research participants, restricting research that might embarrass the university. Where published research has already affected a university's reputation, ethics review can be used as internal social control to prevent future embarrassing results (Hedgecoe, 2016). Note the following statement from an IRB chair:
“Private universities are largely corporations. The respect for academic freedom within these corporations is desirable, but not indispensable…Institutional Review Boards can forestall the public image problems and protect the institution’s reputation by weeding out politically sensitive studies before they are approved.” (Moss, 2007)
Anyone who conducts research that could be considered politically sensitive should be concerned about possible IRB overreach, whether that research concerns sexual preferences or behavior, questions where race is a possible factor, questions that are critical of specific institutions or professions, or political leanings.
From an ideological standpoint, one danger of IRB censorship arises when those opposed to a study's arguments or conclusions attempt to use the IRB to attack the researcher and intimidate them into silence through their own institution. One of the case studies discussed below falls into this category: Mike Bailey was attacked through his IRB by transgender professors at other universities who disagreed with his popular press book on male to female transgenderism. Personal vendettas and professional jealousy, as well as ideology, can also drive IRB oppression of academic pursuits. Melvin Guyer experienced this, also at the University of Michigan, when he and Elizabeth Loftus began looking into Corwin's account of Jane Doe published in the journal Child Maltreatment. He approached the administrator of the IRB to confirm that what he was doing was not research but journalistic criticism, and as such exempt. The administrator and the IRB chair agreed. A month later, however, he received a letter stating that his project was not exempt, that it was rejected, and that the IRB was recommending he be reprimanded. Several protests and exchanges later, he received a new letter from a new IRB chair notifying him that his work was exempt because it was not human subjects research and that there was no basis for a reprimand. This came a full year after the initial conversation with the IRB administrator. Guyer later learned from Loftus and her lawyer that a neurobiologist on his IRB had played a crucial role via a confidential memo, one clearly motivated by something other than concern for subjects and filled with malicious personal insinuations. His own university never disclosed the memo to him, but had sent it to Loftus' university for use against her (Tavris, 2002). A clear problem exists with the confidential deliberations of IRBs. How could Guyer address accusations he was never shown? Why was he not given the opportunity to do so?
This case was an example of both a personal attack from within an IRB as well as opposition to a challenge to an ideological narrative.
Three Case Studies
These three examples highlight, with a bit more detail, the different ways in which IRBs can be weaponized and the types of motivations that can be behind such misuse of the IRB review mandate.
The first concerns the role of administrators or other university officials and the influence they can exert on the IRB process, particularly when they perceive that a research project's results could reflect on the university in a way that affects its brand or image. Ron Roberts' research on the involvement of college students in sex work illustrates this well. A number of studies have documented the economic reality of student life and the fact that some students have embraced the sex industry, selling sexual services through exotic dance, pornography, escort work, and other forms of prostitution (Betzer et al., 2015; Chapman, 2001; Roberts, 2018; Roberts et al., 2007, 2010). However, some university officials seem concerned with the optics of their students being perceived to engage in such work to pay the bills or avoid student debt. Following the publication of a pilot study (Roberts et al., 2007) that drew media attention, Dr. Roberts received an email from his university's publicity office acknowledging the importance of the research question but raising concerns about the media attention, since the participants were students at that university (Roberts, 2010). It clearly raised a red flag for administrators in terms of the university's image. Roberts has detailed examples of objections to research arriving via the IRB process under the “excuse” of ethics. For example, a proposal to interview student lap dancers was approved by the ethics committee only on the condition that no students from that institution be invited to participate. Guarantees of anonymity, typical in most sex-related survey or interview studies, were apparently not sufficient (Roberts, 2010).
A university involved in a study identifying a link between students' participation in sex work and their being in debt (Roberts et al., 2000) refused IRB approval for a subsequent study of sex work among students, on the grounds that asking questions about people's sexual behavior and possible early adverse sexual experiences was too sensitive, despite ample documentation that asking people questions about sexual behavior does not produce adverse effects (Yeater et al., 2012). In another case, roadblocks were thrown up in the IRB process for a student such that, even if the hurdles could be surmounted, there would not have been sufficient time to complete the thesis project (Roberts et al., 2007). It seems clear, as Roberts and colleagues articulated (2007), that “increasing ethical constraints on research are being vigorously pursued, not in order to protect the welfare of human beings who participate in research but to prevent socially important but ‘uncomfortable’ research from being conducted.” In the case of university students participating in sex work, the problem seems to be largely the discomfort of university administrators with the optics of their own students doing such work, rather than any concern for the rights of the research participants.
The second concerns how activists or members of the public (including academics outside one's research area) can try to cancel researchers whose findings displease them by manipulating IRBs into sanctioning or condemning their target. To illustrate this type of attack, one need look no further than the campaign against Northwestern sex researcher Mike Bailey, who in 2003 published a popular science book titled The Man Who Would Be Queen: The Science of Gender-bending and Transsexualism (Bailey, 2003). The attacks came from individuals, mostly male to female transsexuals, who objected to his discussion of two typologies of male to female transsexuals: those who are autogynephilic, natal males whose primary erotic object is themselves as a woman, and those who are androphilic, whose primary erotic objects are men. For further discussion of the motivation behind the opposition to the two-type taxonomy, we recommend Alice Dreger's 2015 book Galileo's Middle Finger: Heretics, Activists, and the Search for Justice in Science, as well as Bailey's own article on the controversies he has faced over his career, which came out of the 2018 FIRE faculty conference (Bailey, 2019). For those interested in the background of the concept of autogynephilia, we recommend the work of Ray Blanchard on the topic (Blanchard, 1989, 1993, 2005). But our focus here is on the way certain individuals attacked Bailey personally rather than presenting evidence that challenged the ideas in the book, and in particular the way they tried, ultimately unsuccessfully, to get Bailey's IRB to go after him for what they claimed were ethical violations. The claim we will focus on here is that he conducted scientific research without ethical oversight, based on the anecdotal stories reported in the book.
The people behind this campaign filed formal charges with the IRB at Northwestern, which took them seriously and launched an investigation lasting over two years that concluded, in the end, that Bailey had not violated federal research oversight regulations. His book did not fit the federal definition of original research, and the original research of Bailey's that appeared in the book had gone through the IRB approval process. In addition, the anecdotes about real individuals used in the book to illustrate research findings do not fit the federal definition of research (Bailey, 2019); a number of them had been told publicly by those same individuals when giving talks in Dr. Bailey's Human Sexuality class. In this case, the attempt to punish Bailey through his IRB for publishing his book failed: he had in fact sought approval for his own research reviewed in the book, and the interviews, by the IRB's own findings, were not under its purview. However, IRB and ethics complaints put universities at risk, financially and reputationally. The federal government can freeze research funding for an entire university if there are violations, and no university wants to deal with that. As a result, IRB oversight continues its mission creep as part of the administrative bloat at universities, and it can be an effective way to attack researchers with currently unfashionable ideas, whether those ideas are correct or not, causing IRBs to serve as instruments of censorship (Hamburger, 2005).
The third example begins in the fall of 2019 and shows how the ideology or moral views of IRB members can impose their own censorship on researchers. The researcher, submitting a proposal to his IRB at an east coast college, was extending research we had already done. At the time the proposal was submitted, our paper was under review; it has since been published (Salmon & Hehman, 2022). The initial study was approved by our IRB, which holds Federalwide Assurance (FWA) status. The researcher was simply adding eye-tracking technology to see what exactly people looked at when viewing images of male and female faces that had semen on them. His IRB rejected the protocol, stating that seeing the images would harm the college's undergraduate students. Interestingly, the IRB chair was new: a biologist who had never done any human subjects research. In fact, the only member of the IRB who had done any human subjects research was from the education department and had not conducted any research since graduate school.
To clarify why his proposal was rejected, given that he had previously had eye-tracking proposals involving nude images approved, the professor attended one of the IRB's meetings. He was told the board was concerned the images would cause harm; when he asked the chair to display the images at the meeting, the chair refused, saying they disturbed him and caused him discomfort. It was unclear whether any of the other IRB members had seen the images. The researcher reminded them that his previous work had involved images of nudes with no adverse events, and asked how they knew these images would cause harm. They said they did not know, they just felt they would, and so rejected the proposal. The decision was not based on or justified by any actual evidence that the protocol would cause harm, and no adverse events had been reported when we conducted the study it extended.
To address the IRB's concerns, the researcher decided to run a pilot study for the IRB's edification (pilot studies conducted for administrative purposes and not intended for publication are, under FWA regulations, not subject to IRB review), examining whether simply looking at sexually explicit images would have any adverse effect on viewers. The survey was online, with an informed consent form specifying that the images would be sexually explicit. The researcher collected data on campus and presented the results to the IRB at its next meeting. He was then contacted by the Provost, who issued a letter of reprimand: the IRB had decided the pilot study was a breach of protocol (despite agreeing that it was a pilot intended only for an administrative function and thus not subject to review). The Provost conceded that point but insisted on the reprimand, being more concerned that the university logo had appeared on a survey containing sexually explicit images. So we see optics again as the administrator's main concern, which is not surprising. The IRB chair and other board members, however, were clearly influenced by their own ideological or moral views and their level of comfort with sexuality in rejecting the proposal. They then retaliated against the faculty researcher for a) daring to question their decision and b) providing evidence that challenged it, engaging in internal social control.
Conclusions and Suggestions
Back in 1985, Stephen Ceci and colleagues published a study in American Psychologist suggesting that IRBs were rejecting proposals on socially sensitive subjects twice as often as those without such content, and that their justifications for rejection were highly variable. They concluded that the primary reason for rejecting sensitive proposals was the potential political implications of the findings, and that IRB deliberations reflected the sociopolitical ideologies of their members, behavior incongruent with the federal regulations (DHHS) governing IRB deliberations, which caution against exactly such actions. The case studies reviewed here suggest that little has changed in the past thirty-odd years and that some complaints about IRB process and overreach are certainly justified. In addition, it seems likely that ideological concerns, not only about possible findings but about asking certain types of questions, continue to infringe on open inquiry in ways that should concern us all. IRBs should not be the arbiters of what topics may be studied.
How can we solve the problem of IRB overreach? Well, some folks would like to see the whole machinery terminated, at least for most if not all social science research (leaving medical IRBs alone). The problem with tossing it all out is that a) federal regulations require IRB oversight for federally funded human subjects research, and b) some research probably does need an expert but uninvolved eye to make sure participants are protected. But certainly most survey or observational research could be considered exempt. Another approach is to require that IRB members have genuine expertise in the disciplines and methodologies they review, something administrators should keep in mind when appointing members. And finally, IRB training should emphasize the protection of participants but also caution against letting one's own personal views or ideology influence how projects are assessed. People often ignore their own biases; they need to be reminded to check their baggage at the IRB door.
“Today, many of the IRBs originally established to protect subjects have instituted so many byzantine restrictions and rules that even good scientists cannot do their work. Some have become fiefdoms of power – free to make decisions based on caprice, personal vendettas, or self-interest, and free to strangle research that might prove too provocative, controversial, or politically sensitive.” (Carol Tavris, 2002)
References
Atran, S. (2003). Genesis of suicide terrorism. Science, 299(5612), 1534-1539.
Atran, S. (2007). Research Police – How a University IRB Thwarts Understanding of Terrorism. Institutional Review Blog. Retrieved September 18, 2022 from http://www.institutionalreviewblog.com/2007/05/scott-atran-research-police-how.html
Atran, S. (2020). Measures of devotion to ISIS and other fighting and radicalized groups. Current Opinion in Psychology, 35, 103-107.
Atran, S. (2021). Psychology of transnational terrorism and extreme political conflict. Annual Review of Psychology, 72(1), 471-501.
Bailey, J. M. (2019). Academic Freedom and Sexual Hysteria: Three Controversies. 2018 FIRE Faculty Conference Proceedings, 53-74.
Betzer, F., Köhler, S., & Schlemm, L. (2015). Sex work among students of higher education: A survey-based, cross-sectional study. Archives of Sexual Behavior, 44(3), 525-528.
Binson, D., Woods, W. J., Pollack, L., Paul, J., Stall, R., & Catania, J. A. (2001). Differential HIV risk in bathhouses and public cruising areas. American Journal of Public Health, 91(9), 1482-1486.
Blanchard, R. (1989). The concept of autogynephilia and the typology of male gender dysphoria. Journal of Nervous and Mental Disease, 177(10), 616–623.
Blanchard, R. (1993). Varieties of autogynephilia and their relationship to gender dysphoria. Archives of Sexual Behavior, 22(3), 241-251.
Blanchard, R. (2005). Early history of the concept of autogynephilia. Archives of Sexual Behavior, 34(4), 439-446.
Blass, T. (2002, March). The man who shocked the world. Psychology Today, 35, 68-74.
Borenstein, J. (2008). The expanding purview: Institutional review boards and the review of human subjects research. Accountability in Research, 15(3), 188-204.
Cave, E., & Holm, S. (2003). Milgram and Tuskegee—Paradigm research projects in bioethics. Health Care Analysis, 11(1), 27-40.
Ceci, S. J., Peters, D., & Plotkin, J. (1985). Human subjects review, personal values, and the regulation of social science research. American Psychologist, 40(9), 994-1002.
Chapman, M. (2001). Red light finds its way onto campus. The Times Higher Education Supplement, 1486, 20.
Dreger, A. (2016). Galileo's middle finger: Heretics, activists, and one scholar's search for justice. Penguin Books.
Fendrich, M., Lippert, A. M., & Johnson, T. P. (2007). Respondent reactions to sensitive questions. Journal of Empirical Research on Human Research Ethics, 2(3), 31-37.
Grady, C. (2015). Institutional review boards: Purposes and challenges. CHEST, 148(5), 1148-1155.
Gray, F. D. (1998). The Tuskegee syphilis study: The real story and beyond. NewSouth Books.
Gunsalus, C. K., Bruner, E. M., Burbules, N. C., Dash, L., Finkin, M., Goldberg, J. P., Greenough, W. T., Miller, G. A., Pratt, M. G., Iriye, M., & Aronson, D. (2007). The Illinois white paper: Improving the system for protecting human subjects: Counteracting IRB “mission creep”. Qualitative Inquiry, 13(5), 617-649.
Hamburger, P. (2005). The new censorship: Institutional review boards (University of Chicago, Public Law Working Paper No. 95). Chicago, IL: University of Chicago.
Haney, C., Banks, C., & Zimbardo, P. (1973). Interpersonal dynamics in a simulated prison. International Journal of Criminology and Penology, 1, 69-97.
Haney, C., & Zimbardo, P. (1998). The past and future of US prison policy: Twenty-five years after the Stanford Prison Experiment. American Psychologist, 53(7), 709-724.
Hedgecoe, A. (2016). Reputational risk, academic freedom and research ethics review. Sociology, 50(3), 486-501.
Heller, J. (1972, July 26). Syphilis victims in U.S. study went untreated for 40 years. The New York Times. https://www.nytimes.com/1972/07/26/archives/syphilis-victims-in-us-study-went-untreated-for-40-years-syphilis.html
Jones, J. H. (1993). Bad blood: The Tuskegee syphilis experiment (New and expanded edition). Free Press.
Jonsen, A. R. (2005). On the origins and future of the Belmont Report. In J. F. Childress, E. M. Meslin, & H. T. Shapiro (Eds.), Belmont revisited: Ethical principles for research with human subjects (pp. 3-11). Georgetown University Press.
Katz, J. (2007). Toward a natural history of ethical censorship. Law & Society Review, 41(4), 797-810.
Klein, M. (1999). Censorship and the fear of sexuality. In J. Elias, V. D. Elias, V. L. Bullough, G. Brewer, J. J. Douglas & W. Jarvis (Eds.). Porn 101: Eroticism, Pornography, and the First Amendment (pp. 145-155). New York: Prometheus Books.
Koocher, G. P. (1977). Bathroom behavior and human dignity. Journal of Personality and Social Psychology, 35(2), 120-121.
Milgram, S. (1963). Behavioral study of obedience. Journal of Abnormal and Social Psychology, 67(4), 371-378.
Middlemist, D. R., Knowles, E. S., & Matter, C. F. (1976). Personal space invasions in the lavatory: Suggestive evidence for arousal. Journal of Personality and Social Psychology, 33(5), 541-546.
Middlemist, D. R., Knowles, E. S., & Matter, C. F. (1977). What to do and what to report: A reply to Koocher. Journal of Personality and Social Psychology, 35(2), 122-125.
Miller, A. G., Collins, B. E., & Brief, D. E. (1995). Perspectives on obedience to authority: The legacy of the Milgram experiments. Journal of Social Issues, 51(3), 1-19.
Moss, J. (2007). If institutional review boards were declared unconstitutional, they would have to be reinvented. Northwestern University Law Review, 101, 801-808.
Perry, G. (2013). Behind the shock machine: The untold story of the notorious Milgram psychology experiments. New Press.
Reverby, S. M. (2009). Examining Tuskegee: The infamous syphilis study and its legacy. University of North Carolina Press.
Roberts, R. (2018). Capitalism on campus: Sex work, academic freedom and the market. John Hunt Publishing.
Roberts, R., Bergström, S., & La Rooy, D. (2007). Sex work and students: An exploratory study. Journal of Further and Higher Education, 31(4), 323-334.
Roberts, R., Sanders, T., Myers, E., & Smith, D. (2010). Participation in sex work: Students' views. Sex Education, 10(2), 145-156.
Salmon, C. A., & Hehman, J. A. (2022). Perceptions of sexual images: Factors influencing responses to the ubiquitous external ejaculation. Archives of Sexual Behavior, 51(2), 1271-1280.
Sieber, J. E. (2008). Protecting the vulnerable: who are they? Journal of Empirical Research on Human Research Ethics, 3(1), 1-2.
Tavris, C. (2002). The high cost of skepticism. Skeptical Inquirer, 26(4), 41-44.
Tierney, W. G., & Corwin, Z. B. (2007). The tensions between academic freedom and institutional review boards. Qualitative Inquiry, 13(3), 388-398.
US Code of Federal Regulations. Title 45CFR, part 46. https://www.hhs.gov/ohrp/regulations-and-policy/regulations/45-cfr-46/index.html. Accessed August 19, 2022.
Yeater, E., Miller, G., Rinehart, J., & Nason, E. (2012). Trauma and sex surveys meet minimal risk standards: Implications for institutional review boards. Psychological Science, 23(7), 780-787.
Zimbardo, P. G. (1973). On the ethics of intervention in human psychological research: With special reference to the Stanford prison experiment. Cognition, 2(2), 243-256.
Important article. Something that concerns me about IRB protocols at my university, and I imagine elsewhere, is the handling of collected data.
The ideal proposal says it will be in a locked cabinet in a locked office to which only the PI has access, securely encrypted on any computers, and destroyed after 5 to 7 years. The idea that there could be value in making data accessible to checking, follow up, future study, challenge by other researchers, or even history is completely absent. The operative assumption is that all data should be handled like personal confidential medical records.
As an academic and former IRB member, I can say that the flip side of this problem is just as bad. My institution’s IRB would reflexively approve garbage research (I would be outvoted) as long as the proposal pointed toward an approved outcome. One study invited first-year students to visit the university library and search for signs of white supremacy. Of course, there were no objective criteria for these assessments. Attempts to point out glaring methodological flaws were pointless when the committee (as many are) is drawn from across the institution. Those familiar with science, much less human subjects research, were often outnumbered.