NRC rankings: was it a big fail?
The baseline for evaluating the NRC is something like:
Is it better than if you randomly grabbed a bunch of people in the lobby at ASA and just asked them about the most famous programs? Does it contain more information than simply counting articles in the top two or three journals?
From what people have seen, the NRC assessment seems to have some serious issues. First, the rankings are five years out of date, so a program's productivity can easily have gone up or down in the meantime. Second, as noted here and in other places, the NRC seems to have some seriously mistaken information. Third, its processing of information seems off. For example, in sociology, a book is weighted the same as an article, which, if you know the field, is seriously mistaken. Fourth, presenting ratings as ranges based on the size of confidence intervals introduces some serious errors of its own.
Despite these problems, the assessment seems to have gotten the general landscape of the field right, though there are serious errors in the details, likely generated by the problems listed above. This probably speaks to the fact that status orders are robust in many ways: even flawed measurements will tell you that Florida is below New York. But as an exercise in illuminating the dynamics and subtlety of academic status orders, you'd be better off spending some time at the annual convention's general reception.