orgtheory.net

journal impact factors: what are they good for?

The ISI journal impact factors for 2012 were released last month. Apparently 66 journals were banned from the list for trying to manipulate their impact factors (through self-citations and “citation stacking”).

There’s a heated debate going on about impact factors: their meaning, use and misuse, and so on. Science has an editorial discussing impact factor distortions. One academic association, the American Society for Cell Biology, has put together a declaration (with 8500+ signers so far), the San Francisco Declaration on Research Assessment (DORA), highlighting the problems caused by the abuse of journal impact factors and related measures. Problems with impact factors have in turn led to alternative metrics; for example, see altmetrics.

I don’t really have problems with impact factors per se. They are one measure, among many, that might be used to gauge journal quality. Yes, I think some journals are indeed better than others. But using impact factors to assess individual researchers can quickly lead to problems. And it is important to recognize that impact factors treat articles within a journal as homogeneous, even though within-journal citations are radically skewed. Thus a few highly cited pieces essentially prop up the vast majority of articles in any given journal. Article-level citations might be a better measure, though also highly imperfect. If you want to assess research quality, read the article itself.
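
To make the skewness point concrete, here is a minimal sketch in Python (the citation counts are hypothetical, invented purely for illustration): the two-year impact factor is just a mean, namely citations received this year by articles published in the two prior years, divided by the number of citable articles from those years, so a handful of blockbuster pieces can carry a journal whose median article is barely cited.

```python
# A minimal sketch using hypothetical citation counts (not real data).
# The two-year impact factor is a mean: citations this year to articles
# published in the two prior years, divided by the number of citable
# articles from those years. With a skewed distribution, that mean says
# little about the typical article.

import statistics

# Hypothetical journal: 50 citable articles over two years, a few
# blockbusters plus a long tail of rarely cited pieces.
citations = [250, 120, 60] + [5] * 17 + [1] * 20 + [0] * 10

impact_factor = sum(citations) / len(citations)   # journal-level mean
typical_article = statistics.median(citations)    # the median article

print(f"Impact factor (mean):   {impact_factor:.1f}")    # -> 10.7
print(f"Median article (cites): {typical_article:.1f}")  # -> 1.0
```

Here a journal with an impact factor near 11 has a median article cited exactly once, which is the sense in which a few highly cited pieces prop up the rest.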

On the whole, article effects trump journal effects (as Joel Baum’s article also points out, see here). After all, we all have a few favorite articles published in some obscure journal no one has ever heard of. Just do interesting work and send it to journals that you read. OK, that’s a bit glib. I know that all kinds of big issues hang in the balance when trying to assess and categorize research: tenure and promotion, resource flows, etc. Assessment and categorization are inevitable.

A focus on impact factors and related metrics can quickly lead to tiresome discussions about which journal is best, whether that one is better than this one, what the “A” journals are, etc. Boring. I presented at a few universities in the UK a few years ago (the UK had just gone through its Research Assessment Exercise), and many of my interactions with young scholars devolved into discussing which journal is an “A” versus an “A-” versus a “B.” Our lunch conversations weren’t about ideas, which was disappointing, though also quite understandable, since young scholars of course want to succeed in their careers.

Hopefully enlightened departments and schools will avoid the above traps and focus on the research itself. The problems with impact factors are well known by now, and hopefully these types of metrics are used sparingly in any form of evaluation, and only as one imprecise data point among many others.

[Thanks to Joel Baum (U of Toronto) for sending me some of the above links.]

Written by teppo

July 9, 2013 at 9:26 pm

Posted in uncategorized

7 Responses

  1. Not to perpetuate the problem (I agree that there is huge within-journal variation in article quality), but has the question of journal quality been approached through AllOurIdeas yet? It could get tricky in terms of which journals to include within the same set, but it may be more straightforward for certain fields than for others.

    Eszter Hargittai

    July 9, 2013 at 10:42 pm

  2. Several years ago, I double-checked the impact factor of Research in Organizational Behavior. This was fairly easy to do, as there is only one “issue” per year with only 10-15 articles per “issue.” ISI will give you the underlying math used to determine the impact factor. ROB was not published in 2007 or 2005, but ISI gave the journal credit for articles supposedly published in those years.

    As many already know, the journal Human Resource Management (published in the US by Wiley) had an inflated citation rate for many years as ISI was attributing any citation that included the term “human resources” to the journal.

    I would suggest that a strong “attack” on the use of citation rates as a measure of journal quality could be achieved with a random audit of the impact factors of a subset of journals. I don’t put trust in the metric, for the reasons you raise and for its underlying unreliability.

    Tim Gardner

    July 9, 2013 at 10:42 pm

  3. Eszter: yes, we did do that with management/orgs journals – https://orgtheory.wordpress.com/2011/01/15/management-journal-rankings-crowdsourced/

    Some of the results were posted on the OMT website, though we haven’t done anything else with the data – http://omtweb.org/omt-blog/53-main/330-crowdsourcing-management-journal-rankings-

    I believe someone ranked sociology journals as well (perhaps Kieran; I know he did soc departments), but I can’t remember (and couldn’t find any links/results for soc journals). We also used allourideas to rank law schools; that ranking went viral and produced interesting results (given some natural experiments, like UC Irvine’s relatively new school, and law reviews).

    Tim: yes, it would be worthwhile to create more transparency around these things, and I’m guessing all the automation will facilitate that type of thing (the ability to easily audit, and the emergence of all kinds of new measures). I do remember hearing about the HRM “scandal”. Interestingly, though, reputations seem to be sticky, and that journal is, for example, listed in the FT’s list of 45 journals that determine research rank (lots of politicking and lobbying has undoubtedly gone into that journal set, what is in and what is out; I observed from the sidelines how one journal managed to get itself onto the list, which of course has all kinds of consequences). These things do matter, and they lead to some weird/interesting dynamics in the long run. In the UK there will be another major research assessment next year, so all these issues will rise to the surface there again (I believe something similar is done in Australia). I don’t know, I find these types of things a bit silly. I suppose some of this categorization and assessment is necessary and helpful, though it has so many long-run/unintended consequences.

    teppo

    July 10, 2013 at 2:53 am

  4. Here is a link to the soc journals: http://www.allourideas.org/socjournals/results

    BamaSoc

    July 10, 2013 at 3:25 am

  5. Impact factor is very important on the supplier side. The higher a journal’s impact factor, the more the journal’s publisher will charge libraries for it.

    Eric S

    July 10, 2013 at 1:31 pm

  6. Thanks for those pointers! This was all very timely, as I was discussing communication journals with colleagues yesterday and wondered whether anyone has done this for journals in the communication/media studies fields. I’ll have to look into that.

    Eszter Hargittai

    July 10, 2013 at 5:31 pm

  7. […] ISI journal impact factors for 2012 were recently released, as I found out the other day via Orgtheory, since I don’t pay any attention to things like journal impact factors. I continued not […]

Comments are closed.