what’s up with impact factors?
Usually when someone starts throwing citation impact data at me, my eyelids get heavy and I want to crawl into a corner for a nap. As Teppo wrote a couple of years ago, “A focus on impact factors and related metrics can quickly lead to tiresome discussions about which journal is best, is that one better than this, what are the ‘A’ journals, etc. Boring.” I couldn’t agree more. Unfortunately, I’ve heard a lot about impact factors lately. The weight given to impact factors as a metric of intellectual significance seems to have skyrocketed since I began training as a sociologist. Although my school is not one of them, I’ve heard of academic institutions using citation impact as a way to incentivize scholars to publish in certain journals and as a measure to assess quality in hiring and tenure cases. And yet it has never struck me as a very interesting or useful measure of scholarly worth. I can see the case for why it should be. Discussions about scholarly merit are inherently biased by people’s previous experiences, status, in-group solidarity, personal tastes, etc. It would be nice to have an objective indicator of a scholar’s or a journal’s intellectual significance, and impact factors pretend to be that. From a network perspective it makes sense. The more people who cite you, the more important your ideas should be.
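For concreteness, this is roughly how the standard two-year impact factor is calculated (sketched from memory of the JCR definition, so take the details as approximate):

$$
\mathrm{IF}_{2013} = \frac{\text{citations received in 2013 to items published in 2011 and 2012}}{\text{number of citable items published in 2011 and 2012}}
$$

So a journal whose 2011–2012 articles picked up 300 citations in 2013, across 150 citable items, gets an impact factor of 2.0.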
My problem with the impact factor is that I don’t trust the measure. I’m skeptical for three reasons: gaming by editors and authors has made the numbers less reliable, the rankings lack face validity, and the measure itself is unstable. Let me touch on the gaming issue first.
We’ve all heard by now about the citation ring that Sage busted earlier this year. Sixty articles in a single journal were implicated. An author secured favorable reviews by supplying the editor with email addresses for preferred reviewers, addresses that routed the review requests straight back to accounts the author himself had created, with the unsuspecting (and apparently very careless) editor none the wiser. The scheme not only made it possible for the author to sneak fraudulent findings into the journal but also boosted the citation impact of the articles (and journals) through promiscuous self-citing. The competition for citation impact among journals might make an overeager editor willing to overlook such illicit activities. Although this might be the worst case of citation fraud out there, it has become all too common for editors to try to game the system and boost their journal’s citation impact. One sneaky way is to request (and perhaps even require) that authors cite a certain number of articles from the journal once a paper has been conditionally accepted. Another tactic is to periodically publish review articles that cite mostly other articles from the journal. Reviews are generally cited more than empirical articles anyway, but this tactic has the added benefit of getting other authors to fixate on the articles cited in the review, boosting the impact factor as a result. It’s not clear how much academic value these review pieces have, other than to increase citation impact. I’m sure there are other gaming tactics that editors use of which I’m not aware.
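It’s worth doing the back-of-the-envelope arithmetic on that first tactic, with made-up numbers (not drawn from any actual journal), just to see how much leverage it gives an editor:

$$
\frac{50 \text{ accepted papers per year} \times 2 \text{ coerced citations each}}{100 \text{ citable items in the two-year window}} = +1.0 \text{ added to the impact factor}
$$

In fields where a point or two separates the top journals from the middle of the pack, a bump that size is enormous.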
The second issue is that I just don’t think citation impact has face validity. I’m speaking to a sympathetic audience when I note that sociology and management journals have fairly abysmal impact factors compared to other fields. If universities doled out resources based only on citation impact (and we might be moving in that direction), we’d be near the bottom of the food chain. But okay, maybe this status ordering really is reflective of our respective fields (I can accept that), though it might also be a function of other factors, such as the structure of the field itself. Looking closely at the management rankings, I really have to scratch my head, because they just don’t make a lot of sense as a measure of actual impact. Here’s the top 10 for 2013:
1. Academy of Management Review
2. Academy of Management Annals
3. Journal of Management
4. MIS Quarterly
5. Academy of Management Journal
6. Personnel Psychology
7. Journal of Operations Management
8. Journal of Applied Psychology
9. Organization Science
10. Journal of Information Technology
The first thing that should strike you is that the top two journals are not outlets for empirical research; they are review journals. Would most scholars count them as having the greatest scholarly significance? I would hope not, because they don’t report new discoveries or findings. And third is Journal of Management. I imagine one reason it has such a high impact factor is that it also publishes a lot of review articles. Where is ASQ, you might ask? It’s down at #34, one spot ahead of Tourism Management and just below Human Resource Management Journal. Granted, ASQ is not a journal where everyone would like to publish, but I doubt that anyone considering its actual impact would place it in this position.
Sociology’s journal impact ranking makes a bit more sense, until it doesn’t.
1. American Sociological Review
2. American Journal of Sociology
3. Annual Review of Sociology
4. Annals of Tourism Research
5. Sociological Theory
6. Population and Development Review
7. Sociological Methods and Research
8. Sociology of Education
9. Social Networks
10. Sociology of Health and Illness
Okay, so ASR, AJS, and ARS. So far, so good, except that it’s debatable whether ARS, a review journal, is as important as a number of high-quality empirical journals. And then Annals of Tourism Research. Tourism? My jaw isn’t dropping just because it appears to be a categorical misfit; it also makes very little sense that an annals publication on such a specific topic would have greater intellectual significance than any number of other sociological journals. Another stunner is that Social Problems is ranked #26. I don’t understand what to make of a ranking that puts International Political Sociology six spots ahead of Social Problems, when the latter targets a much more general sociological audience than the former. Even if you think SP is overrated, I don’t see how you could justify putting it so low.
My final issue with the impact factor is that it is remarkably unstable. Last year Organization Science was #19; this year it is #9. Last year Management Science was #49; this year it is #29. Social Problems dropped seven spots between last year and this year. Given everything we know about status, reputation, and quality (and the correlation between them), why would we ever expect intellectual quality or significance to be so volatile? We wouldn’t. These numbers must be shaped by something other than any of those things.
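Some of that volatility is probably just small-sample noise baked into the ratio itself. Here’s a toy simulation, with made-up parameters rather than any real journal’s data, of a journal whose underlying citation rate never changes; the two-year impact factor still wobbles from year to year simply because the counts are small:

```python
import math
import random

random.seed(1)

def poisson(lam):
    """Draw a Poisson-distributed count (Knuth's method) without needing numpy."""
    threshold, k, p = math.exp(-lam), 0, 1.0
    while True:
        k += 1
        p *= random.random()
        if p <= threshold:
            return k - 1

ITEMS_PER_YEAR = 50   # citable items the journal publishes each year (assumed)
TRUE_RATE = 2.0       # true mean citations per item, held constant across years

for year in range(2009, 2014):
    citable = 2 * ITEMS_PER_YEAR                      # items from the prior two years
    citations = sum(poisson(TRUE_RATE) for _ in range(citable))
    print(year, round(citations / citable, 2))        # that year's "impact factor"
```

With only a hundred citable items in the denominator, year-to-year swings of a tenth or two of a point are routine, and in a tightly packed ranking that’s enough to move a journal several spots without any change in underlying quality.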
So basically, I think it’s time we cast a skeptical eye on the impact factor. If our universities are going to weight it heavily when assessing scholarly quality and reputation, then it’s worth pointing out the problems behind the measure. At certain universities, impact factors are being used in a very blunt way to determine who deserves tenure and who doesn’t. Do we want such an unreliable measure driving decisions that important?
Moreover, as social scientists we ought to have some explanation for what is going on with impact factors: how they are formed (actively or passively) and why they change. Why are they so volatile? What explains the relatively low impact of social science journals compared to the natural sciences? One possible determinant is the structure of the field itself. In high-consensus fields, like the natural sciences, we ought to see more stability in journal impact factors, whereas in fragmented and heterogeneous fields, like sociology, we ought to see more instability and lower impact. The volume of publications ought to matter too. In some fields, especially those based in labs, scholars publish very quickly and in great quantities. In others, like economics, the rate of publication is very low. Not surprisingly, economics, despite being a high-consensus field, has a low impact factor: the median impact factor for an economics journal is .79, and the field ranks #196 in relative impact factor. Sociology is at #192. Compare them to psychology, which has a much higher rate of productivity and is ranked #49 in median impact. I haven’t done any systematic investigation of this, but just by eyeballing the relative impact of fields and journals, it appears that there are a lot of field-level structural characteristics that might explain the differences. Does this mean we should just throw the measure out? No, I don’t think so. But we should certainly be skeptical and use it sparingly when making important decisions, like funding, hiring, or tenure.