
what’s up with impact factors?

Usually when someone starts throwing citation impact data at me, my eyelids get heavy and I want to crawl into a corner for a nap. As Teppo wrote a couple of years ago, “A focus on impact factors and related metrics can quickly lead to tiresome discussions about which journal is best, is that one better than this, what are the ‘A’ journals, etc. Boring.” I couldn’t agree more. Unfortunately, I’ve heard a lot about impact factors lately. The weight given to impact factors as a metric of intellectual significance seems to have skyrocketed since I began training as a sociologist. Although my school is not one of them, I’ve heard of academic institutions using citation impact to incentivize scholars to publish in certain journals and as a measure of quality in hiring and tenure cases. And yet it has never struck me as a very interesting or useful measure of scholarly worth. I can see the case for why it should be. Discussions about scholarly merit are inherently biased by people’s previous experiences, status, in-group solidarity, personal tastes, and so on. It would be nice to have an objective indicator of a scholar’s or a journal’s intellectual significance, and impact factors pretend to be that. From a network perspective it makes sense: the more people who cite you, the more important your ideas should be.

My problem with the impact factor is that I don’t trust the measure. I’m skeptical for three reasons: gaming by editors and authors has made it less reliable, it lacks face validity, and it is unstable. Let me touch on the gaming issue first.

We’ve all heard by now about the citation ring that Sage busted earlier this year. Sixty articles in a single journal were implicated. An author gamed the review process by supplying the editor with email addresses for preferred reviewers, addresses that led the unsuspecting (and apparently very careless) editor to send review requests to accounts the author himself had created. The scheme not only made it possible for the author to sneak fraudulent findings into the journal; it also boosted the citation impact of the articles (and the journal) through promiscuous self-citing. The competition for citation impact among journals may make an overeager editor willing to overlook such illicit activities. Although this may be the worst case of citation fraud out there, it has become all too common for editors to try to game the system and boost their journal’s citation impact. One sneaky way is to request (and perhaps even require) that authors cite a certain number of articles from the journal after a paper has been conditionally accepted. Another tactic is to periodically include review articles in the journal that cite mostly other articles from the same journal. Reviews are generally cited more than empirical articles anyway, but this tactic has the added benefit of getting other authors to fixate on the articles cited in the review, boosting the impact factor as a result. It’s not clear how much academic value these review pieces have, other than to increase citation impact. I’m sure there are other gaming tactics that editors use of which I’m not aware.

The second issue is that I just don’t think citation impact has face validity. I’m speaking to a sympathetic audience when I note that sociology and management journals have fairly abysmal impact factors compared to other fields. If universities doled out resources based only on citation impact (and we might be moving in that direction), we would be near the bottom of the food chain. Okay, so maybe this status ordering really does reflect the standing of our respective fields (I can accept that), but it might also be a function of other factors, such as the structure of the field itself. Looking closely at the management rankings, I really have to scratch my head, because they just don’t make a lot of sense as a measure of actual impact. Here’s the top 10 for 2013:

  1. Academy of Management Review
  2. Academy of Management Annals
  3. Journal of Management
  4. MIS Quarterly
  5. Academy of Management Journal
  6. Personnel Psychology
  7. Journal of Operations Management
  8. Journal of Applied Psychology
  9. Organization Science
  10. Journal of Information Technology

The first thing that should strike you is that the top two journals are not outlets for empirical research but review journals. Would most scholars count them as having the greatest scholarly significance? I would hope not, because they’re not journals that report new discoveries or findings. And third is Journal of Management. I imagine one reason this journal has such a high impact factor is that it publishes a lot of review articles as well. Where is ASQ, you might ask? It’s down at #34, one spot ahead of Tourism Management and just below Human Resource Management Journal. Granted, ASQ is not a journal where everyone wants to publish, but I doubt that anyone weighing the actual impact of ASQ would place it in this position.

Sociology’s journal impact ranking makes a bit more sense, until it doesn’t.

  1. American Sociological Review
  2. American Journal of Sociology
  3. Annual Review of Sociology
  4. Annals of Tourism Research
  5. Sociological Theory
  6. Population and Development Review
  7. Sociological Methods and Research
  8. Sociology of Education
  9. Social Networks
  10. Sociology of Health and Illness

Okay, so ASR, AJS, and ARS. So far, so good, except that it’s debatable whether ARS – a review journal – is as important as a number of high-quality empirical journals. And then Annals of Tourism Research. Tourism? My jaw isn’t dropping just because it appears to be a categorical misfit; it also makes very little sense that an annals publication on such a specific topic would have greater intellectual significance than any number of other sociological journals. Another stunner is that Social Problems is ranked #26. I don’t know what to make of a ranking that puts International Political Sociology six spots ahead of Social Problems, when the latter targets a much more general sociology audience than the former. Even if you think that SP is overrated, I don’t see how you could justify putting it so low.

My final issue with the impact factor is that it is very unstable. Last year, Organization Science was #19; this year it is #9. Last year, Management Science was #49; this year it is #29. Social Problems dropped seven spots between last year and this year. Given everything we know about status, reputation, and quality (and the correlation between them), why would we ever expect intellectual quality or significance to be so volatile? We wouldn’t. These numbers must be shaped by something other than any of those things.

So basically, I think it’s time we cast a skeptical eye on the impact factor. If our universities are going to weight impact factor heavily when assessing scholarly quality and reputation, then it’s worth pointing out the problems behind the measure. In certain universities impact factors are being used in a very blunt way to determine who deserves tenure and who doesn’t. Do we want such an unreliable measure to drive decisions this important?

Moreover, as social scientists we ought to have some explanation for how impact factors are formed (actively or passively) and why they change. Why are they so volatile? What explains the relatively low impact of social science journals compared to the natural sciences? One possible determinant is the structure of the field itself. In high-consensus fields, like the natural sciences, we ought to see more stability in journal impact factors, whereas in fields that are fragmented and heterogeneous, like sociology, we ought to see more instability and lower impact. The volume of publication ought to matter as well. In some fields, especially those based in labs, scholars publish very quickly and in great quantities. In others, like economics, the rate of publication is very low. Not surprisingly, economics, despite being a high-consensus field, has a low impact factor: the median impact factor for an economics journal is .79, and the field ranks #196 in relative impact factor. Sociology is at #192. Compare them to psychology, which has a much higher rate of productivity and ranks #49 in median impact. I haven’t done any systematic investigation of this, but just by eyeing the relative impact of fields and journals, it appears that there are a lot of field-level structural characteristics that might explain the differences. Does this mean we should just throw the measure out? No, I don’t think so. But we should certainly be skeptical and use it sparingly when making important decisions, like funding, hiring, or tenure.

Written by brayden king

August 8, 2014 at 6:15 pm

15 Responses


  1. One big reason for the low impact factors of sociology/management journals is that the formula is extremely myopic, only counting citations made in year t to articles published in years t−1 and t−2. This means most sociology/management citations are right-censored for two reasons. First, we love to cite old articles. It doesn’t raise ASR’s 2014 impact factor for DiMaggio and Powell 1983 to get another 400 citations this year. Second, we have an extremely slow review cycle. If I read a brand new publication today and graft it into the references section of the manuscript just before I submit it for review tomorrow, it might still be the case that my article takes so long to work through the review process and the publication backlog that my citation never ends up benefiting the impact factor of the journal that published it.
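
    To spell out the formula being described here (this is the standard two-year definition; the notation is mine):

    \[
    \mathrm{IF}_t = \frac{C_t(t-1) + C_t(t-2)}{N_{t-1} + N_{t-2}}
    \]

    where C_t(y) is the number of year-t citations to items the journal published in year y, and N_y is the number of citable items it published in year y. A 2014 citation to a 1983 article enters no numerator at all.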

    For instance, my 2014 Sociological Theory article cites just over a hundred things. Of these, eight were published in 2012 or 2013, but only half of those recent citations were journal articles. I cited another ten things from 2010 or 2011 (also about half journal articles), but those don’t count. So despite making over a hundred citations, my article contributed only four points to any impact factor numerator. (You’re welcome, Politics & Society.)

    Given this set of facts, it’s actually somewhat miraculous that impact factors aren’t zero.


    gabrielrossman

    August 8, 2014 at 6:47 pm

  2. Great observation, Gabriel. You just gave sneaky journal editors another tool for gaming the system: tell authors to stop citing anything more than three years old!


    brayden king

    August 8, 2014 at 6:50 pm

  3. Brayden,
    They could also publish as an annual in January rather than just loading up the denominator with impact factor deadweight by being so foolish as to publish a November issue.


    gabrielrossman

    August 8, 2014 at 6:56 pm

  4. Maybe Morningstar should start rating journals. Then we could compare the median number of citations to articles they’ve published in the last year, five years, and 10 years. The long window is most important, but the shorter windows provide some indication of the success of the current team. The median would probably be more useful than the mean here, because outliers. Past performance is no guarantee of future results …
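
    A minimal sketch of that windowed-median idea (the function name and toy data below are hypothetical, and the windows are measured back from the current JCR year):

    ```python
    from statistics import median

    def windowed_median(articles, current_year, window):
        """Median citation count for articles published in the trailing
        `window` years (e.g., 1, 5, or 10), excluding the current year.
        `articles` is a list of (pub_year, citation_count) pairs."""
        recent = [cites for pub_year, cites in articles
                  if current_year - window <= pub_year < current_year]
        return median(recent) if recent else 0.0

    # Toy journal: one runaway hit among modestly cited articles.
    journal = [(2013, 2), (2013, 1), (2012, 310), (2010, 4), (2006, 9)]
    for w in (1, 5, 10):
        print(w, windowed_median(journal, 2014, w))
    # -> 1: 1.5, 5: 3.0, 10: 4 -- the outlier barely moves the median
    ```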


    uggen

    August 8, 2014 at 7:56 pm

  5. Brayden, are you using the one-year or five-year impact factors? The latter are more stable than the former.


    Randy

    August 8, 2014 at 10:31 pm

  6. Randy – great question, and I wasn’t clear about that. The unstable one I referred to is the two-year impact factor. The five-year is more stable.


    brayden king

    August 8, 2014 at 10:45 pm

  7. Brayden:
    While I agree that too much emphasis on citation impact factors may be a problem, I think you dismiss them much too readily.
    First, a big part of the problem is that the two-year citation impact is not very meaningful for management or sociology. Two years matters much more in the natural sciences, which are more paradigmatic and where the value of a paper can be established in the short run.
    But the JCR also provides alternative measures. I would recommend the five-year citation impact, which covers articles published in the last five years, and even more so the Article Influence Score, which takes into account the citation impact of the journals citing each paper.
    For 2013, the Article Influence Scores were ranked as follows:
    1 Academy of Management Annals 6.560
    2 ADMINISTRATIVE SCIENCE QUARTERLY 5.386
    3 ACADEMY OF MANAGEMENT REVIEW 5.317
    4 ACADEMY OF MANAGEMENT JOURNAL 5.239
    5 JOURNAL OF MANAGEMENT 4.134
    6 ORGANIZATION SCIENCE 3.595
    7 JOURNAL OF APPLIED PSYCHOLOGY 3.538
    7 Research in Organizational Behavior 3.538
    9 PERSONNEL PSYCHOLOGY 3.459
    10 STRATEGIC MANAGEMENT JOURNAL 3.087
    For 2012 they were as follows:
    1 ACADEMY OF MANAGEMENT REVIEW 6.229
    2 ADMINISTRATIVE SCIENCE QUARTERLY 5.593
    3 ACADEMY OF MANAGEMENT JOURNAL 5.573
    4 Academy of Management Annals 4.649
    5 JOURNAL OF MANAGEMENT 4.069
    6 JOURNAL OF APPLIED PSYCHOLOGY 3.684
    7 PERSONNEL PSYCHOLOGY 3.628
    8 ORGANIZATION SCIENCE 3.422
    9 STRATEGIC MANAGEMENT JOURNAL 3.128
    10 MIS QUARTERLY 3.071
    Note that there is both face validity and remarkable stability in these rankings compared to the two-year impact factor.
    Second, I think your assessment of AMR and the Annals ignores their importance to the field. In low-paradigm fields like management, review papers are essential to bringing order and understanding to what would otherwise be a disparate set of empirical findings.
    While I don’t believe two-year impact factors are a good measure for our field, other measures such as the ones above do provide useful data to complement more subjective assessments. And the Article Influence Score is a useful guide for where to send papers out for review.


    wocasio

    August 8, 2014 at 10:59 pm

  8. Thanks for the comment, Willie. First, I agree that the two-year impact factor, the one most often invoked, is a lousy measure of journal significance in the social sciences. As you’ve shown, the five-year impact factor has much more face validity and is more stable. It seems to root out the troublemakers who have figured out how to game the system and artificially distort short-term impact. But it’s still characterized by the “problem” of review journals being given more weight.

    So this is obviously a personal preference of mine. Not everyone would agree with me that review pieces shouldn’t be treated the same as original research. It’s not that I don’t think review pieces are valuable. I’ve written one and have benefited from the citations I get from it. I just think they are qualitatively different types of contribution. Some review pieces are theoretical development, and others are summaries of the state of the field with some emphasis on synthesis. Some pieces try to do both. Others, like the paper I wrote with Dave Whetten and Teppo Felin in the Journal of Management, are normative essays that try to establish new research practices. My issue isn’t with individual review articles – which we can assess one by one to figure out what is novel and interesting in them – but with assuming that the “impact” of review journals translates into greater importance or significance to the field. We would never have review pieces without interesting empirical findings, and ultimately their purpose is to generate more empirical research.

    Mostly, I would prefer not to worry much about impact factor at all. I wrote this blog post because irritation has been building up inside me for some time about the way we as a field glorify and constantly fret over impact factor. It keeps us from talking about questions that should be more central to our debates about ideas.


    brayden king

    August 9, 2014 at 12:31 am

  9. Brayden, you seem to be ascribing agency to impact factors. It’s not impact factors that give review journals more weight; it is the authors of papers in other peer-reviewed journals who do so. So if there is a “problem,” it is not a problem with the measure; it is a “problem” with authors, or perhaps with the field more generally. Or perhaps with journals that don’t publish reviews or theory papers.

    In my judgment, Article Influence Scores are good measures of which journals authors find useful. Because organization and management scholarship is so fragmented, review articles and theoretical pieces in the Annals, JOM, and AMR are important markers of key concepts and theories that guide our understanding of the field.

    One important thing to note is that impact factors are democratic measures, not elite ones. Some journals are more controlled by elite faculty than others, and these rarely publish theory papers or reviews. ASQ used to publish theory papers and even important review pieces. It doesn’t do that anymore. Org Science rarely publishes theory papers, although it does publish perspective papers and had one invited special issue in 2011 on review pieces (which probably had some influence in raising its two-year impact factor for 2013).

    Perhaps an implication of the data is that journals such as ASQ and Org Science should consider publishing theory and review papers that advance theory.

    By the way, I find it a bit ironic that, as one of the key bloggers on orgTHEORY.net, you state that the ultimate purpose of our scholarship is to generate more EMPIRICAL research, and seem in the process to devalue theoretical contributions.

    Empirical research without theory very rarely, if ever, gets published in any top journal. Can we truly understand empirical phenomena without theory? Should we value empirical research over theory? That is the direct implication of your statements, although I suspect you don’t really believe it.

    Should organization “theory” become more like demography or medicine, research areas with limited theory? I think not.

    If we continue to believe that theory is important for our research, then we need journals that advance theory and theoretical development. At this moment there are few outlets for this form of scholarship, and it should therefore come as little surprise that the journals that do so have such high impact on other researchers.


    wocasio

    August 9, 2014 at 12:19 pm

  10. You’re right, authors tend to give more weight to review articles, in part because they’re easier to cite when making an abstract claim than the specific empirical articles whose findings the review summarizes. Review articles are often used as shorthand in our citation patterns. I benefit from this, given that I have a well-cited review paper in ARS. I’m glad for the citations, of course, but I wouldn’t say it’s one of the three most important papers I’ve written, even though it ranks third in my citation list.

    I don’t understand why it’s so controversial to say that good theoretical work should generate more empirical research. I never said theory wasn’t important. As someone who has written theory papers, I wouldn’t say they’re not important to the development of our field. My point was that ultimately their influence is determined by how much they influence empirical scholarship. A theory piece that does its job will ultimately spawn a wave of new research on a topic or theoretical concept, and if this is why review journals get cited more, then I should back off my stance on review journals and impact factors. But my sense is that most papers in review journals don’t do theoretical development of this type. And the reason they get cited isn’t usually because they change the way people think about the field. They get cited because they’re a convenient shorthand.


    brayden king

    August 9, 2014 at 3:18 pm

  11. Shorthand citation is endemic to the field – and not only for review papers; it is true of citations to theory and empirical papers as well. Perhaps this is the underlying “problem” with impact factors and with citations overall. Yes, review papers (and review journals) probably exacerbate the phenomenon, but I don’t think they are the underlying cause.

    It should also be noted that a large majority of review papers (not always true of theory papers) are written by authors who have already undertaken important empirical work in the fields being surveyed. Writing a review paper is useful for making sense of one’s own empirical research and how it relates to other empirical and theoretical research in the field. It’s the sensemaking that gets cited, more than specific empirical findings. But that says a lot about how researchers value ideas versus specific empirical findings.

    Currently my third most highly cited work is also a review piece, though a book chapter rather than a journal article (Thornton & Ocasio, 2008, in The Handbook of Organizational Institutionalism). I also don’t think it’s my third most important work, but it helped authors understand the institutional logics perspective better and therefore helped advance the field.

    I guess my overall point is that review articles are useful to advance the field — both to synthesize knowledge and to guide future research. Sometimes they also provide important theoretical and meta-theoretical developments.

    I also believe impact factors are useful data, although two-year impact factors not so much. Like any data, they should be used carefully and supplemented with other evaluations, both quantitative and qualitative.

    And impact factors do show that authors find review journals and review articles useful. Is this useful data to have? I believe so. What you do with it, if anything, is another question.


    wocasio

    August 9, 2014 at 4:24 pm

  12. The citation timespan is also quite different between review articles and more novel contributions (getting back to the two-year window). I can easily incorporate citations to the most recent review articles because they are already aligned with how I think about the field. A really novel contribution, by contrast, is probably looking at something that doesn’t fit the data or analysis in my current papers in progress, but it may shape what I do a few years from now.


    Henri

    August 10, 2014 at 6:51 pm

  13. I have never seen AMR as a “review” journal, despite the name. The editors of AMR have made it abundantly clear in many editorials that AMR is a theory journal and that articles that merely review will not be published.


    Peter

    August 11, 2014 at 12:27 am

  14. The Article Influence Score does not count citations within the same journal, so it solves the problem of editors inflating their own journals’ counts. It also mitigates self-citation through non-influential journals: citing your own work there moves the score only a little, because citations are weighted by the influence of the citing journal. Author self-cites are potentially a serious problem in low-impact journals that publish few articles a year, because the impact factor is very sensitive to a handful of extra cites.
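
    For intuition, here is a highly simplified sketch in the spirit of the Eigenfactor computation behind the Article Influence Score (the real JCR algorithm differs in its details; the matrix, damping value, and function name are illustrative only). Self-citations are zeroed out, columns are normalized, and influence is the stationary vector of the resulting citation network:

    ```python
    import numpy as np

    def influence_scores(C, damping=0.85, iters=200):
        """Toy Eigenfactor-style scores. C[i, j] = citations from
        journal j to journal i. Not the exact JCR algorithm."""
        C = C.astype(float)
        np.fill_diagonal(C, 0.0)          # journal self-citations don't count
        col_sums = C.sum(axis=0)
        col_sums[col_sums == 0] = 1.0     # guard against empty columns
        P = C / col_sums                  # column-stochastic citation matrix
        n = C.shape[0]
        v = np.full(n, 1.0 / n)
        for _ in range(iters):            # power iteration with teleportation
            v = damping * (P @ v) + (1 - damping) / n
        return v / v.sum()

    # Journal 0 cites itself heavily, but that buys it nothing here.
    C = np.array([[50, 1, 0],
                  [ 2, 0, 3],
                  [ 1, 4, 0]])
    print(influence_scores(C))
    ```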

    This is what the top 20 of that ranking looks like. Annals of Tourism Research drops to number 51. I bet this is largely what you would have expected.

    1 ANNU REV SOCIOL
    2 AM SOCIOL REV
    3 AM J SOCIOL
    4 SOCIOL THEOR
    5 SOC NETWORKS
    6 SOCIOL EDUC
    7 SOCIOL METHOD RES
    8 SOC PROBL
    9 POLIT SOC
    10 BRIT J SOCIOL
    11 J CONSUM CULT
    12 ECON SOC
    13 GENDER SOC
    14 J MARRIAGE FAM
    15 POPUL DEV REV
    16 SOC FORCES
    17 GLOBAL NETW
    18 SOC SCI RES
    19 LAW SOC REV
    20 SOCIOLOGY

    What I am curious to know is why Social Forces is considered the third-best place to publish, after ASR and AJS, when Social Problems, also a generalist journal, is more influential (Article Influence Score 1.642 vs. 1.308). On the other hand, what I do understand, though it bothers me, is that people in the US value a publication in Social Forces more than one in BJS, which is more influential in Europe (and perhaps elsewhere).

    On another note: International Political Sociology appeals to a broader audience than Social Problems, because it is read especially in International Relations and Political Science, and it is cited by authors beyond the US. So that explains the ranking.


    Sebastian

    August 12, 2014 at 5:44 am

  15. There has been quite a lot of activity within the bibliometrics community around using bibliometrics to evaluate social science, and the consensus, as far as I am aware, is that this sort of thing works well for economics and psychology, but not sociology (in part for reasons already discussed above). See, for example:

    Diana M. Hicks, “The Four Literatures of Social Science,” in Handbook of Quantitative Science and Technology Research, ed. Henk Moed (Kluwer Academic, 2004).

    Some of the additional problems in sociology include journal coverage, since sociologists are more likely to cite and be cited by publications not indexed in WoS, and the fact that we are also partially a book discipline.

    In any case, you can’t really compare citations across disciplines, as citation counts are highly dependent on field size and have a power-law distribution (i.e., larger fields will have disproportionately more citations; think about the network aspect of it).

    And finally, there is the fact that since citations haven’t really been used as criteria within sociology before, we are generally not as good at gaming them. I published a paper in a non-sociological journal that has an impact factor larger than AJS’s or ASR’s, and eventually realized why: my paper was made available through the “online first” system back in early 2012 but only came out in print recently. Most impact factor measures won’t take this into account, so my paper has had 2+ years to collect citations but will count as being published this year.


    dlp

    August 20, 2014 at 3:41 pm

