orgtheory.net

measuring impact

Brayden

Via Peter Klein and the RePEc blog: in this article, a team of biologists takes on Thomson’s venerable citation impact factor – the score most scientists use to gauge the impact/prestige of academic journals. Impact factors are flawed, the authors argue, because they rest on imprecise data and offer a skewed measure of impact. More importantly, the impact factor is not very transparent: it can’t be reproduced by third parties. Even when Thomson handed its data to a third party, as it did with the team who wrote the article, the published impact factors could not be reproduced exactly. The problem is akin to study findings that can’t be replicated, either because the data are unavailable or because the data were flawed to begin with. This leads the authors to a very strong conclusion:

It became clear that Thomson Scientific could not or (for some as yet unexplained reason) would not sell us the data used to calculate their published impact factor. If an author is unable to produce original data to verify a figure in one of our papers, we revoke the acceptance of the paper. We hope this account will convince some scientists and funding organizations to revoke their acceptance of impact factors as an accurate representation of the quality—or impact—of a paper published in a given journal.
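
To be concrete about the figure in dispute: the standard two-year impact factor for a journal in year Y is, roughly, the ratio sketched below. This is the usual textbook definition; the precise rules for which “citable items” count in the denominator are Thomson’s own, and that opacity is a large part of why the published numbers are hard to reproduce.

\[
\mathrm{IF}_{Y} \;=\; \frac{\text{citations received in year } Y \text{ to items published in years } Y-1 \text{ and } Y-2}{\text{number of citable items published in years } Y-1 \text{ and } Y-2}
\]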

Journal impact factors are a pretty big deal these days. I’m not sure how they’re used in sociology departments, but many business schools decide which journals count as A publications based on impact factors. It’s a rough estimate of quality, but when a field is as fractured as organizational studies, impact factors can settle a lot of debate. Given their importance, a discussion of the validity of these measures seems warranted.


Written by brayden king

January 3, 2008 at 5:22 pm

4 Responses


  1. My impression is that impact factors are not given much attention in sociology, especially because the impact factors of journals in soc are strongly influenced by the fecundity of the fields to which the journal connects (i.e., journals connected to health seem to do well disproportionate to the prestige or selectivity with which they are regarded in soc).


    jeremy

    January 3, 2008 at 6:04 pm

  2. This comment is a bit off-point, but close enough… There’s been an insightful discussion on the BPS division listservs about measuring a scholar’s impact — e.g., author ordering and the importance of journal outlet (mostly the former point, though the outlet-impact question is of course a perennial one).

    In my mind the best heuristic for assessing a scholar’s impact was offered by Will Mitchell and Jay Barney in their emails: READ THE PAPERS! I agree. There’s of course plenty to be said for top journals, but there are also some tremendous, very daring pieces in lesser-“impactful” outlets as well — e.g., R&S, BBS, ICC come to mind. And one’s overall program of research (and its potential impact and direction) becomes evident if one looks at the full portfolio of work.

    Beyond journal outlet, the whole question of scholar impact is interesting. I had a colleague at Emory going up for tenure (with a half dozen strong A publications, plenty to pass the hurdle), and he was being asked about impact (read: citations) at that stage… the publications of course had scarcely even hit the journals (one took 5+ years from first submission before it was actually published), many were forthcoming, and thus there were relatively few citations to his work. Incidentally, some business schools, given pub lag times and the problems of measuring impact, have recently moved to an 8-yr tenure clock — I heard Minnesota and NYU, for example (correct me if I am wrong).


    tf

    January 3, 2008 at 8:41 pm

  3. Brayden – Dan Klein at GMU econ also has a serious beef with Thomson impact factors. His issue is somewhat different. He points out that Thomson has no clear standard for what counts as an academic journal and that they systematically exclude certain specialties. Klein’s a libertarian economist and he (rightfully) argues that there is no a priori reason to exclude something like the Review of Austrian Economics while admitting Dissent. I wouldn’t be surprised if Thomson’s list of legitimate journals would wither if we looked at other specialties.


    Fabio Rojas

    January 3, 2008 at 9:11 pm

  4. [...] probably not even reproducible by third parties (we’ve covered some of this ground before; here, here, and here), and all the rest, but I still wonder whether this signals some permanent [...]



