Via Peter Klein and the RePEc blog: in this article, a team of biologists takes on Thomson’s venerable citation impact factor – the score most scientists use to gauge the impact and prestige of academic journals. The impact factors are flawed, the authors argue, because they rest on imprecise data and are a skewed measure of impact. But more importantly, the impact factor is not very transparent: impact factors can’t be reproduced by third parties. Even when Thomson handed its data to a third party, as it did with the team who wrote the article, the impact factors could not be reproduced exactly. The problem is akin to study findings that can’t be replicated because the data are either unavailable or flawed to begin with. This leads the authors to a very strong conclusion:
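For readers unfamiliar with the measure, the published impact factor is a simple two-year ratio: citations received in a given year to a journal’s articles from the previous two years, divided by the number of “citable items” the journal published in those years. A minimal sketch, with purely hypothetical numbers (the reproducibility dispute turns on exactly which counts go into this ratio):

```python
def impact_factor(citations_to_prior_two_years: int,
                  citable_items_prior_two_years: int) -> float:
    """Standard two-year impact factor: citations in year Y to items
    published in years Y-1 and Y-2, divided by the number of citable
    items published in those two years."""
    return citations_to_prior_two_years / citable_items_prior_two_years

# Hypothetical journal: 1200 citations this year to articles from the
# previous two years, which contained 400 citable items.
print(impact_factor(1200, 400))  # -> 3.0
```

The arithmetic is trivial; the authors’ complaint is that without Thomson’s underlying counts, no outsider can verify the inputs to it.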
It became clear that Thomson Scientific could not or (for some as yet unexplained reason) would not sell us the data used to calculate their published impact factor. If an author is unable to produce original data to verify a figure in one of our papers, we revoke the acceptance of the paper. We hope this account will convince some scientists and funding organizations to revoke their acceptance of impact factors as an accurate representation of the quality—or impact—of a paper published in a given journal.
Journal impact factors are a pretty big deal these days. I’m not sure how they’re used in sociology departments, but many business schools decide which journals count as A publications based on impact factors. It’s a rough estimate of quality, but when a field is as fractured as organizational studies, impact factors can settle a lot of debate. Given their importance, a discussion of the validity of these measures seems needed.