Archive for the ‘academia’ Category
Rejection is never fun, but I don’t think it’s unfair. Vague and conflicting reviews aren’t fun either, but that’s life. What I do think is unfair is when editors are negligent and incompetent, which leads to an enormous waste of time and, in some cases, can end careers. In this post, I’d like to share a few of my personal horror stories with journals so you know what often happens in the review process. Here are a few of my “favorites”:
- The Journal of Mathematical Sociology (previous editor, not current) once waited 24 months to send me a rejection based on a one-sentence review.
- Social Networks (also a previous editor) lost a manuscript twice, resulting in the paper being tied up in review for almost two years before it got a review.*
- The American Journal of Public Health never reviewed a paper that was submitted. After a few months, I was asked to review my own paper! Then, after I complained, they still never obtained reviews. About a year later, after my co-author and I complained again, the paper had still not been reviewed or even desk rejected. Technically, I suppose, it might still be under review!
On top of this, some folks are plain dishonest. For example, a previous editor of Sociology of Education rejected a paper of mine after the R&R (which is fair) and said that they don’t do 2nd R&Rs, but then asked me to review a paper on its 2nd R&R. A book editor, after rejecting my manuscript, told me face to face that they simply didn’t accept books by first-time authors. That press, like most others, actually publishes first-time book authors, including friends of mine.
I am under no pretense that we can eliminate incompetence and dishonesty. But there are simple reforms that can lessen the cost of poor editing. For example, I am an advocate of multiple submission for journals – you could submit to as many journals at once as you like. That way, if you get an incompetent editor, you can simply take your business elsewhere rather than waiting for a response or dealing with chaotic and contradictory reviews. I also think Sam Lucas is onto something when he suggests that we should not allow reviewers to write open-ended and vague reviews. Bottom line: the journal system allows people to do all kinds of bad things, but simple reforms can reduce the risk to authors.
* The paper was rejected on “frustrating” grounds – a computer simulation was sent to experimentalists who wanted to see an experiment. Not unfair, but frustrating and a waste of time. If the journal doesn’t accept computational papers, it should have been desk rejected.
Samuel R. Lucas is professor of sociology at the University of California, Berkeley. He works on education, social mobility, and research methods. This guest post proposes a reform of the journal review process.
Ongoing discussion about the journal publication process is laudable. I support many of the changes that have been suggested, such as the proposal to move to triple-blind review, and implemented, such as the rise of new journals that reject “dictatorial revi”–oops, I mean “developmental review.” I suggest, however, that part of the problem is that reviewers are encouraged to weigh in on anything–literally anything! I’ve reviewed papers and later received others’ reviews only to find a reviewer ignored almost all of the paper, weighing in on such issues as punctuation and choice of abbreviations for some technical terms. Although such off-point reviews are rare, they indicate that reviewers perceive it as legitimate to weigh in on anything and everything. But a system allowing unlimited bases of review is part of the problem with peer review, for it shifts too much power to reviewers while at the same time providing insufficient guidance on what will be helpful in peer review. I contend that we need to dispense with our kitchen-sink reviewing system by removing from reviewer consideration two aspects of papers: framing and findings.
Framing is a matter of taste and, as there is no accounting for taste, framing offers fertile ground for years of delay. Framing is an easy way to hold a paper hostage, because most solid papers could be framed in any one of several ways, and often multiple frames are equally valid. Authors should be allowed to frame their work as they see fit, not be forced to alter the frame because a reviewer reads the paper differently than the author does. A reviewer who feels a paper should be framed differently should wait for its publication and then submit a paper noting that the original addressed Z but missed its connection to Q. Such an approach would make any worthwhile debate on framing public while freeing authors to place their ideas into the dialogue as well.
As for findings, peer review should be built on the following premise: if you accept the methods, then you accept the findings enough for the paper to enter the peer-reviewed literature. Thus, reviewers should assess whether the paper’s (statistical, experimental, qualitative) research design can answer the paper’s research question, but not the findings produced by that solid research design. Allowing reviewers to evaluate findings allows reviewers to (perhaps inadvertently) scrutinize papers differently depending on the findings. To prevent such possibilities, journals should allow authors to request a findings-embargoed review, for which the journal would remove the findings section of the paper, as well as the findings from (1) the abstract and (2) the discussion/conclusion section, before delivering the paper for review. As some reviewers may regard reading soon-to-be-published work early as a benefit of reviewing, reviewers could be sent full manuscripts if the paper is accepted for publication.
A review system in which reviewers do not review framing and findings is a design-focused review system. Once a paper passes a design-focused review, editors can conduct an in-house assessment to assure findings are accurately conveyed and the framing is coherent. The editors, unlike reviewers, see the population of submissions, and thus, unlike reviewers, are well-placed to fairly and consistently assess any other issues. Editors will be even better able to make such calls if they can make them only for papers reviewers have determined satisfy the basic criterion of having a design solid enough to answer the question the paper poses.
The current kitchen-sink review system has become increasingly time-consuming and perhaps capricious, hardly positive features for effective peer review. If findings were embargoed and reviewers were discouraged from treating their preferred frame as essential to a quality paper, review times could be chopped dramatically and revise and resubmit processes would be focused on solidifying design. As a result, design-focused review could lower our collective workload by reducing the number of taste-driven rounds of review we experience as authors and reviewers, while simultaneously reducing authors’ potentially paralyzing concern that mere matters of taste will block their research from timely publication. Design-focused review may thus make peer review work better for everyone.
Inside Graduate Admissions: Merit, Diversity and Gatekeeping by Julie Posselt is an exploration of how faculty in leading doctoral programs choose graduate students. The book is a fitting successor to Michele Lamont’s How Professors Think, a book about how professors select elite fellowship recipients (see the orgtheory discussion here). The method is the same in each book – observe and interview academics as they deliberate and meet in committees.
Posselt provides a nice overview of how admissions committees operate. The take-home points are intuitive and should resonate with any faculty member who has served on such a committee: there are disciplinary standards; people choose others like themselves; there are internal politics and department-level fit issues; people search for a hard-to-define “talent”; and diversity is paid lip service but doesn’t have much of an impact. There are also nice discussions of international students, conservatives, and students from low-status schools.
Overall, this is a really solid contribution to the ethnographic study of group deliberation and required reading for students of higher education and the disciplines. My one criticism is that Posselt gets the role of GREs wrong and comes to a conclusion that I would not have. She correctly notes that GREs are imperfect, but in some sections of the book she espouses the view that GREs are terribly flawed. Yet, in the conclusion, Posselt comes back to the view that GREs have only been “misused.”
As I’ve noted on this blog often, GREs are actually quite useful, and that is backed up by an enormous body of research. It saddens me to see that Posselt is not familiar with this literature. But there’s a deeper issue. Posselt’s ethnography reveals the importance of GRE scores. If it weren’t for GRE scores, graduate admissions committees would simply replicate themselves by choosing white, male Apple computer fanatics. You think I jest, but Posselt actually has an entire section about how professors like choosing students who mimic their personal style (she calls it “cool” homophily), which includes using a lot of Apple products. So I say this: the GREs may be flawed, but a world without them would probably be much worse.
Disclaimer: I am a proud author of an article in PLoS One and I am extremely biased.
Michael Eisen, one of the biologists who promoted PLoS One, has an interesting blog post discussing the economics of PLoS One. He wrote it in response to the discovery that PLoS One’s management gets paid *very well.* His response is a nice discussion of how PLoS One works as a business. And just for the record, I am not against anyone making a profit so long as they produce something that benefits the rest of us. PLoS One has really opened the door for a lot of scholarship to come out quickly and free to the public.
Back to Eisen:
If I weren’t involved with PLOS, and I’d stumbled upon PLOS’s Form 990 now, I’d have probably raised a storm about it. I have absolutely no complaints about Andy’s efforts to understand what he was seeing – non-profits are required to release this kind of financial information precisely so that people can scrutinize what they are doing. And I understand why Andy and others find some of the info discomforting, and share some of his concerns. But having spent the last 15 years trying to build PLOS and turn it into a stable enterprise, I have a different perspective, and I’d like to explain it.
The reality is, however, that it costs PLOS a lot more than $0 to handle a paper. We handle a lot of papers – close to 200 a day – each one different. There’s a lot of manual labor involved in making sure the submission is complete, that it passes ethical and technical checks, in finding an editor and reviewers and getting them to handle the paper in a timely and effective manner. It then costs money to turn the collection of text and figures and tables into a paper, and to publish it and maintain a series of high-volume websites. All together we have a staff of well over 100 people running our journal operations, and they need to have office space, people to manage them, an HR system, an accounting system and so on – all the things a business has to have. And for better or worse our office is in San Francisco (remember that two of the three founders were in the Bay Area, and we couldn’t have started it anywhere else), which is a very expensive place to operate. We have always aimed to keep our article processing charges (APCs) as low as possible – it pains me every time we’ve had to raise our charges, since I think we should be working to eliminate APCs, not increase them. But we have to be realistic about what publishing costs us.
I’ve argued for a long time that we should do away with selective journals, but so long as people want to publish in them, they’re going to have this weird economics. And note this is not just true of open access journals – higher impact subscription journals bring in a lot more money per published paper than low impact subscription journals, for essentially the same reason.
A recent Atlantic article by Victoria Clayton makes the case that the GRE should be ditched based on some new research. The case for the GRE rests on the following:
- The GRE does actually, if modestly, predict early graduate school grades, and you need to do well in courses to get the degree.
- Many other methods of evaluating graduate school applications are garbage. For example, nearly all research on letters of recommendation shows that they do not predict performance.
To reiterate: nobody says GRE scores are a perfect predictor. I also believe that their predictive ability is lower for some groups. But the point is not perfection. The point is that the GRE sorta, kinda works and the alternatives do not.
So what is the new evidence? Actually, the evidence is lame in some cases. For example, Clayton cites a 1997 Cornell study claiming that GREs don’t correlate with success. True, but if you actually read the research on GREs, there have been meta-analyses that compile data from multiple studies and find that the GRE does predict performance. This study compiles data from over 1,700 samples and shows that, yes, the GRE does predict performance. Sorry, it just does, test haters.
Clayton also cites a Nature column by Miller and Stassun that correctly laments the fact that standardized tests sometimes miss good students, especially minorities. As I pointed out above, no one claims the GRE makes perfect predictions – only that the correlation is there, and that is better than the alternatives that simply don’t predict performance. But at least Miller and Stassun offer a new alternative: in-depth interviews. They cite a study of 67 graduate students at Fisk and Vanderbilt selected via this method and note that their projected (not actual) completion rate is 80% – much better than the typical 50% of most grad programs.
Two comments:
- I am intrigued. If the results can be replicated elsewhere, I would be thrilled. But so far, we have one (promising) study of a single program. Let’s see more.
- I am still not about to ditch GREs, because I am not persuaded that academia is ready to implement a very intensive in-depth interview admissions system as its primary selection mechanism. The Miller and Stassun column refers to a study of physics graduate students – small numbers. What is realistic for grad programs with many applicants is that you need to screen people for interviews, and that screen will include, you guessed it, standardized tests.
Bottom line: The GRE is far from perfect, but it is usable. There is no evidence that systematically undermines that claim. Some alternatives don’t work, and the newly proposed method, in-depth interviews, will probably need to be coupled with GREs.