agreements and disagreements with rob warren
Rob Warren, of the University of Minnesota, wrote some very engaging and insightful comments about his time as the editor of Sociology of Education. Jeff Guhin covered this last week. Here, I’ll add my own comments. First, a strong nod of agreement:
First, a large percentage of papers had fundamental research design flaws. Basic methodological problems—of the sort that ought to earn a graduate student a B- in their first-year research methods course—were fairly common. (More surprising to me, by the way, was how frequently reviewers seemed not to notice such problems.) I’m not talking here about trivial errors or minor weaknesses in research designs; no research is perfect. I’m talking about problems that undermined the author’s basic conclusions. Some of these problems were fixable, but many were not.
Yes. Professor Warren is correct. Once you are an editor, or simply an older scholar who has read a lot of journal submissions, you quickly realize that there are a lot of papers that really, really flub research methods 101. For example, a lot of papers rely on convenience samples, which lead to biased results. Warren has more on this issue.
Now, let me get to where I think Warren is incorrect:
Second, and more surprising to me: Most papers simply lacked a soul—a compelling and well-articulated reason to exist. The world (including the world of education) faces an extraordinary number of problems, challenges, dilemmas, and even mysteries. Yet most papers failed to make a good case for why they were necessary. Many analyses were not well motivated or informed by existing theory, evidence, or debates. Many authors took for granted that readers would see the importance of their chosen topic, and failed to connect their work to related issues, ideas, or discussions. Over and over again, I kept asking myself (and reviewers also often asked): So what?
Until about five years ago, I thought this way too. Since then, I’ve mellowed and come to a more open-minded view. Why? In the past, I rejected a fair number of papers on “framing” grounds. Later, I would see them published in other journals, often with high impact. Also, in my own career, leading journals have rejected my work on “framing” grounds, and when it got published in another leading journal, the work got cited. The framing wasn’t that bad. Lesson? A lot of complaints about framing are actually arbitrary. Instead, let the work get published and let the wider community decide, not the editor and a few peer reviewers.
The evidence on the reliability of the peer review process suggests that there is a lot of randomness in the process. If some of these “soul-less” papers had been resubmitted a few months later, some of them would have been accepted with enthusiastic reviews. Here’s a 2006 review of the literature on journal reliability and here’s the classic 1982 article showing that a lot of journal acceptance is indeed random. Ironically, Peters and Ceci (1982) note that “serious methodological flaws” are a common reason for rejecting papers – that had already been accepted!
This brings me to Warren’s third point – a complaint about people who submit poorly developed papers. He suggests that there are job pressures and a lack of training. On the training point, there is nothing to back up his assertion. Most social science programs have a fairly standard sequence of quantitative methods courses. The basic issues regarding causation v. description, identification, and assessment of instrument quality are all pretty easy to learn. Every year, the ICPSR offers all kinds of training. Training we have, in spades.
On the jobs point, I would like to blame people like Professor Warren and his colleagues on hiring and promotion committees (which includes me!!). The job market for the better positions in sociology (R1 jobs and competitive liberal arts schools) has essentially evolved into a contest over who gets into the top journals while in graduate school, plus graduate program reputation.
I’d suggest we simply think about the incentives here. Junior scholars live in a world where a lot of weight is placed on a very small number of journals. They also live in a world where journal acceptance is random. They also live in a world where journals routinely lose papers, reject after multiple R&R rounds, and take years (!) to respond (see my journal horror stories post). How would any rational person respond to this environment? Answer: just send out a lot of stuff until something hits. There is no incentive to develop a paper well if it will be randomly rejected after sitting at the journal for 16 months.
This is why I openly praise and encourage reforms of the journal system. I praise “platform” publishing like PLoS One. I praise “up or down” curated publishing, like Sociological Science. I praise Socius, the open access ASA journal. I praise SocArXiv for creating an open pre-print portal. I praise editors who speed up the review process and I praise multiple-submission practices. The basic issues that Professor Warren discusses are real. But the problem isn’t training or stressed out junior scholars. The problem is the archaic journal system. Let’s make it better.