Last Friday, I attended a talk by Sarah Babb of Boston College. In her talk, titled “Beyond the Horror Stories: Non-Experimental Social Researchers’ Encounters with Institutional Review Boards (IRB),” Babb revealed findings that included misconceptions about federal guidelines for human subjects research. Contrary to what some IRBs demand from principal investigators (PIs) undertaking qualitative research, the federal guidelines do not require:
– signed consent from a low-risk population
– an institutional research permission slip
To repeat, these two requirements are “not in federal regulations at all.”
Babb noted that at larger institutions, IRBs often involve nonprofessionals – that is, people without the relevant professional expertise – in decision-making about proposals. Moreover, qualitative research doesn’t fit well into the one-size-fits-all medical template often used to vet research proposals. Compounding these challenges is the lack of accountability in terms of IRBs’ responsibilities to PIs: only 20% of the IRBs that Babb examined had an appeals procedure that would allow PIs to contest decisions.
Not surprisingly, this talk evoked spirited discussion of the myriad problems encountered by researchers going through the IRB process at their institutions, as well as the unintended consequences of a review process ostensibly intended to protect human subjects. The audience noted the following unintended and undesired consequences: (1) normalized deviance,* (2) a chilling effect upon the types of research undertaken, and (3) mission creep, in which IRBs critique the suitability or worth of the research design rather than evaluating risk to human subjects. In particular, senior researchers worried that tenure-track faculty and graduate students face great uncertainty about whether their project proposals will successfully navigate the IRB process in a timely fashion.
Audience members asked whether the sociologists’ professional association, the American Sociological Association (ASA), had taken an official position on IRB guidelines. None present were aware of any such activities (if you know of anything brewing from this or other associations, do write it in the comments). Attendees noted that because a tenured faculty member may be more able to surmount IRB issues on his or her own (or may not need to go through the IRB process at all, given the type of research conducted), fashioning IRB standards that are more appropriate for a wider variety of research methods is a collective action problem.
I opined that these identified problems should be considered a commons issue. Those with more power should consider it a professional responsibility to help budding researchers – undergraduate students, graduate students, junior faculty – go through an IRB process that is appropriate to their research methods and questions, especially if researchers hope to have future generations of audiences and colleagues. Unfortunately, dark humor may not be sufficient to get the point across: when a psychology colleague sent his IRB a proposal to reproduce the Stanley Milgram experiment on April Fools’ Day, an IRB staffer called to inquire whether the proposal was serious.
* One of my past posts discussing the IRB draws a steady stream of traffic from those searching for the answer to one of the quiz questions on the online Collaborative Institutional Training Initiative (CITI), a certification program mandatory for researchers and students at some institutions.