orgtheory.net

i rejected a famous article

I find it interesting that I almost never hear anyone take back a review. Some famous articles have been rejected at one or more journals. Theory of the Firm was rejected, as was the DiMaggio and Powell iron cage article, I believe. There’s a saying in science departments: The history of science is strewn with articles rejected by Nature and Science. Most people don’t like to admit they were wrong, but I would expect at least a few people would fess up and explain why, at the time, they thought their review of a now famous article was (or remains) justified. Any examples?

Written by fabiorojas

October 25, 2009 at 3:06 am

Posted in academia, fabio

16 Responses


  1. That’d be interesting! I would be curious to hear from the referees that rejected Akerlof’s “Market for ‘Lemons'” paper in three journals. According to Akerlof himself:

    “By June of 1967 the paper was ready and I sent it to The American Economic Review for publication. I was spending the academic year 1967-68 in India. Fairly shortly into my stay there, I received my first rejection letter from The American Economic Review. The editor explained that the Review did not publish papers on subjects of such triviality. In a case, perhaps, of life reproducing art, no referee reports were included.

    Michael Farrell, an editor of The Review of Economic Studies, had visited Berkeley in 1966-67, and had urged me to submit “Lemons” to The Review, but he had also been quite explicit in giving no guarantees. I submitted “Lemons” there, which was again rejected on the grounds that The Review did not publish papers on topics of such triviality.

    The next rejection was more interesting. I sent “Lemons” to the Journal of Political Economy, which sent me two referee reports, carefully argued as to why I was incorrect. After all, eggs of different grades were sorted and sold (I do not believe that this is just my memory confusing it with my original perception of the egg-grader model), as were other agricultural commodities. If this paper was correct, then no goods could be traded (an exaggeration of the claims of the paper). Besides — and this was the killer — if this paper was correct, economics would be different.” (http://nobelprize.org/nobel_prizes/economics/articles/akerlof/)


    pll

    October 25, 2009 at 1:07 pm

  2. It’s interesting: Did Michael Farrell believe the article was good, but feel obliged to go with the reviewers? I’d be curious to know.


    fabiorojas

    October 25, 2009 at 6:00 pm

  3. Besides — and this was the killer — if this paper was correct, economics would be different

    This is the 50-dollar-bill-on-the-sidewalk of reviews.


    Kieran

    October 25, 2009 at 6:49 pm

  4. Fabio:

    You seem to assume that the fact that a paper becomes famous means that the paper should have been accepted for publication. I, for one, have recommended rejection for at least two semi-famous papers (one has ~100 citations and the other has ~150 citations [and is part of a research program with thousands of citations], both in ~10 years) and I have no regrets on either score.


    ezrazuckerman

    October 25, 2009 at 7:30 pm

  5. E.Z.:

    Fair enough. If we’ve reached an honest conclusion about the quality of a paper, we shouldn’t cave to peer pressure. As I noted in my post, I’d like to hear people “explain why, at the time, they thought their review of a now famous article was (or remains) justified.” Well? Fess up, dude. Without giving away names (unless you want to!), what, in broad terms, was the basis of your judgment? What do other people see in it? Inquiring minds want to know!

    F Ro


    fabiorojas

    October 25, 2009 at 7:44 pm

  6. EZ:

    Maybe the deeper question is this: do reviewers have papal infallibility? Do they ever make mistakes? I’d probably agree that reviewers are right on the average, but we do make mistakes. Sometimes big ones. Are we big enough to admit it?

    F Ro


    fabiorojas

    October 25, 2009 at 7:47 pm

  7. Papal infallibility! Ha! I think there was a discussion on scatterplot about how a small percentage of scholars seem to account for a large percentage of reviews, thus creating de facto gatekeepers. That being said, there seem to be at least two issues here.

    1) Famous papers that were rejected by at least one journal — you would expect that some highly divisive subfields would produce papers that are famous but are rejected by generalist journals, especially if given to reviewers on the opposite side of the divide.

    2) Cutting-edge papers that are acknowledged ex post as important papers but are marginal enough at the time of publication as to be divisive or misunderstood.

    I would think, especially in the case of purely theoretical papers, that these papers are often used as the foundation of a theoretical research program but are insufficiently empirical to be published in a major/generalist journal.


    Trey

    October 25, 2009 at 8:00 pm

  8. Yes, reviewers are infallible. Authors, on the other hand, seem to do nothing but fail. Or is it the other way around? Depends on which role I’m playing… Reminds me of how I curse drivers when I’m a pedestrian, but as a driver… (Of course, editors suck regardless!)

    To answer your question more seriously, I actually see value in each of those papers. And with regard to one of those papers in particular, I now have a better appreciation for what others see in it. However: (a) I still believe that the paper and the associated line of research are way way oversold; (b) the author ignored the first-round reviewers’ [I came in on the second round] very productive suggestions for improving the paper; and (c) I have since engaged the paper more deeply (by trying to implement the model in it) and discovered that it is even more flawed than I first thought. So… definitely no regrets.

    And no, I don’t think I’m at liberty to discuss specifics. I’ll send you the two reviews by email though and let you offer any thoughts you have on them if you don’t give away specifics.

    Ezra


    ezrazuckerman

    October 25, 2009 at 8:15 pm

  9. Are the results of (c) publishable?


    Thomas

    October 25, 2009 at 9:43 pm

  10. Thomas: In general, a refutation or re-analysis is publishable if it shows something important. The Coleman/Katz stuff, I believe, was re-analyzed numerous times in journals.


    fabiorojas

    October 25, 2009 at 9:44 pm

  11. I think your remark is more general than mine. Any paper that “shows something is important” is presumably publishable. My question goes to the narrower case: an already important but “flawed” paper. Is the discovery of the flaw publishable (simply by virtue of the original paper already being “important”)? Obviously, we’d have to be talking about a “significant” flaw.


    Thomas

    October 25, 2009 at 10:02 pm

  12. that “is” before “important” is a (published) flaw in my comment


    Thomas

    October 25, 2009 at 10:03 pm

  13. I haven’t been in the business long enough to recommend rejecting a famous paper, but the editor of a good journal recently accepted a paper that I recommended they reject. In that case, like Ezra, I didn’t think that the author addressed the major concerns raised in the first round and actually introduced additional problems in the analysis on the second round. Even with my reservations, however, I respect the editor’s prerogative and think that it’s healthy for editors to occasionally side with one reviewer over another (rather than just rejecting every paper that doesn’t have reviewer consensus). Reviewers often shy away from controversial ideas (perhaps the market for lemons paper is an example of this) and so unless editors are willing to take positions that counter a reviewer’s recommendation, those ideas may never make it into the most influential journals. So even though the editor didn’t follow my recommendation, I’m okay with his decision to accept the paper.


    brayden

    October 25, 2009 at 10:11 pm

  14. Thomas: There are many many published papers with serious flaws. Where to begin (and end)?

    Brayden: In principle, I agree. However, the editor’s “prerogative” cuts both ways. Authors are pissed when arbitrary editorial decisions go against them. Reviewers who have done a careful job and found fatal flaws in a paper are right to be just as pissed when editors dismiss their review as just one opinion that might be weighed against others. Of course, if the editor “takes a (coherent, reasonable) position,” neither the author nor the reviewer should be too aggrieved. Not always the case, I’m afraid.


    ezrazuckerman

    October 26, 2009 at 12:55 am

  15. As cones of silence descended over Ann Arbor and Cambridge, encrypted messages were exchanged via semaphore. And Fabio agreed that Ezra had made some good points.

    On a more serious level, I’d say Ezra articulated well-known criticisms of the papers and the research traditions they belong to. If you buy the criticisms, then the negative evaluations stand up.

    Thomas: Ok, I see where you are coming from. Here is what I might consider publishable refutations:

    – redoing the data and showing grave error (often reported in a short comment on a paper)

    – redoing the data in a new way (deserves a new paper)

    – showing an implication that was not obvious, but raises serious questions (depends – paper or comment)

    – plugging the paper into a new framework and showing it doesn’t work (new paper)


    fabiorojas

    October 26, 2009 at 1:41 am

  16. The conversation here seems to be conflating a few issues. The question as posed seems to be about rejections of any sort. pll’s comments about Akerlof are mostly about editors and decisions about impact or relevance in the context of a high status journal. Most of the back and forth with EZ seems to be about more concrete errors (methodological or otherwise) in the papers.

    Top journals reject most papers entirely based on relevance or impact. I’ve heard that in Science or Nature this is well over 90%. Of course, it is sometimes hard to distinguish the revolutionary radical departures from the purely irrelevant explorations, and I’ve assumed that that is why so many research streams have their seminal paper published in a lower status venue.

    PLoS ONE is not a social science journal but I think it is on to something really interesting in this regard. It is an open access journal that publishes absolutely everything that is determined (by peer review, of course) to be solidly executed science. By completely taking impact decisions out of the mix, it publishes upwards of 70% of what is submitted.

    This sort of hints at what I think is the most useful piece of Fabio’s sextuple blind review suggestion from a few years back.


    Benjamin Mako Hill

    November 14, 2009 at 10:56 pm


