
some limits of good journals

My view is that academia works, but there’s a lot of noise. Quality correlates with reputation and rewards, at least enough of the time. I found more evidence for this view in a recent article in Economica by Andrew Oswald. He compared the citation patterns of two leading econ journals, two mid-tier journals, and two not-so-great journals. The key findings:

  • The average article in a top journal does get more citations than the average article in a lower-ranked journal. Once again, for the umpteenth time, the rankings aren’t totally baseless.
  • However, the most cited articles in the lowest-prestige journals are cited more often than the four least-cited articles in leading journals combined. In other words, the best article in a low-ranking journal does better than the entire tail end of a top journal.
  • Lead articles also tend to do better, suggesting that editors know what is actually good.

Implications: there is significant error in the journal system at the level of individual articles. A significant portion of top-journal articles simply don’t get cited, and the cream of the crop from lower journals is probably as good as the average top-journal article. At the same time, this doesn’t undermine the value of journal rankings as aggregate measures of quality. If a department publishes most of its work in low-tier journals, that’s probably a sign that its work isn’t having a measurable impact; only the very best articles in that batch are highly regarded, not the typical one. Bottom line: journal rankings are good for judging departments, but use them with care for individuals.

Bonus round: My guess about the non-cited elite articles is that they are (a) in areas that are no longer popular or are only of passing interest, (b) highly technical, so they impress reviewers but average scholars won’t care, or (c) approved because of ties with reviewers/editors or the author’s reputation.

Written by fabiorojas

November 17, 2008 at 5:16 am

Posted in academia, fabio

5 Responses


  1. I think intuitively all of this makes sense, but I don’t know if the analysis has really demonstrated anything, since some unknown portion of the effect (but almost certainly not all of it) could just be cumulative advantage. I know that personally I’m fairly likely to browse the top journals, but I only come across articles in lesser journals if I’m actively doing a lit review. And when I browse, I’m more likely to read the lead article just because I often get distracted before I work through an issue. There are all sorts of reasons why I’m more likely to read and cite articles from top journals that aren’t directly related to quality. In this respect the most surprising finding is that the citation distributions of low- and high-prestige journals overlap so much.
    Still, it’s interesting research and I can imagine ways to improve it further. For instance, many journals report the time between an article’s initial submission and its final acceptance. This is probably a better proxy for reviewer enthusiasm than “lead article,” since a long review time indicates multiple and/or demanding R&Rs. Ideally you could get the cooperation of the journal and see if the peer reviewers actually know what they’re talking about when they check the box by “Publish this important contribution; it is likely to reorient thinking or be widely cited.”


    gabrielrossman

    November 17, 2008 at 5:52 am

  2. I’m not sure (c) explains low impact. I would think (though I’m not sure) that many invited pieces, and pieces otherwise published on the strength of the author’s reputation, are cited for the same reason: the strength of the author’s reputation.


    Thomas Basbøll

    November 17, 2008 at 6:02 am

  3. Sometimes an elite article can go without many citations for a long period of time and then suddenly experience a boom in citations. I expect this is the case for extremely novel articles that take a while to find their audience. Stewart Macaulay’s “Noncontractual Relations in Business” from ASR (1963) is an example of this. No one knew what to make of it when it was first published, although some editor had the good sense to publish it in a top journal. A couple of decades later the article was foundational for the new economic sociology.


    brayden

    November 17, 2008 at 2:55 pm

  4. One of the cite-less articles in the 1981 volume of The American Economic Review is “The Private and Social Utility of Extortion” by John A. C. Conybeare. It was, for me, one of the most memorable articles in that volume of the AER (I stopped reading the AER on a regular basis a few years later). So why wasn’t it cited? The hypotheses listed don’t work too well. In this case, the problem seems to be an excess of originality or, perhaps, a mismatch between the author’s approach and the way economists usually think about problems; John is one of a very small number of folks who have published articles in the AER, APSR, and AJS. But it is also the case that, having made this argument, the author moved on to other things. Repetition matters, especially when proposing something newish.

    I think the whole process of journal acceptance, reception, and citation is fairly arbitrary and can probably be understood better by sociologists than economists.


    Fred Thompson

    November 22, 2008 at 6:16 pm

  5. […] a comment » My question: how should academia approach small, third tier journals?* In a previous post, I linked to research showing that these journals do actually publish some highly respected […]


