orgtheory.net

nrc rankings: was it a big fail?

The baseline for evaluating the NRC is something like:

Is it better than if you randomly grabbed a bunch of people in the lobby at ASA and just asked them about the most famous programs? Is there more information than if you just counted articles in the top 2-3 journals?

From what people have seen, the NRC assessment seems to have some serious issues. First, the rankings are five years out of date, so productivity can easily have gone up or down since the data were collected. Second, as noted here and in other places, the NRC seems to be working from some seriously mistaken information. Third, its processing of the information seems off. For example, in sociology, a book is weighted the same as an article, which, if you know the field, is seriously mistaken. Fourth, building ratings around the size of confidence intervals introduces some serious errors of its own (a toy illustration of this last problem appears at the end of the post).

Despite these problems, the assessment seems to have gotten the general landscape of the field right, but there are serious errors in the details, which are likely generated by the problems listed above. This probably speaks to the fact that status orders are robust in many ways. Even flawed measurements will tell you that Florida is below New York. But as an exercise in illuminating the dynamics and subtlety of academic status orders, you’d be better off spending some time at the annual convention’s general reception.
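
A rough illustration of the fourth point, since it is the least intuitive. The NRC reports each program's ranking not as a single number but as a percentile range obtained by repeatedly resampling the weights placed on its indicators. The sketch below is only a toy version of that idea: the departments, indicators, weights, and numbers are all made up, and it is not the NRC's actual data or code. The point is that once the weights are uncertain, the resulting rank ranges get wide and overlap heavily, so small differences between nearby programs are mostly noise.

```python
import numpy as np

# Toy sketch of percentile-based ranking ranges. All departments, indicators,
# and weights below are hypothetical; this is not the NRC's data or procedure.
rng = np.random.default_rng(0)
n_depts, n_indicators, n_draws = 20, 5, 500

# Hypothetical standardized indicator scores (publications, citations, grants,
# etc.) for 20 fictional departments.
indicators = rng.normal(size=(n_depts, n_indicators))

ranks = np.empty((n_draws, n_depts), dtype=int)
for d in range(n_draws):
    # Each draw perturbs the indicator weights, mimicking uncertainty about
    # how much each measure should count.
    weights = rng.dirichlet(np.ones(n_indicators))
    scores = indicators @ weights
    # Rank 1 = highest weighted score in this draw.
    ranks[d] = scores.argsort()[::-1].argsort() + 1

# Report each department's 5th-95th percentile rank range.
lo = np.percentile(ranks, 5, axis=0).astype(int)
hi = np.percentile(ranks, 95, axis=0).astype(int)
for i in np.argsort(lo):
    print(f"dept {i:2d}: rank range {lo[i]:2d}-{hi[i]:2d}")
```

In a typical run, most of the fictional departments' 5th-95th percentile ranges overlap with several of their neighbors, which is why treating any single position inside such a range as a precise rank invites exactly the kinds of errors noted above.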

Written by fabiorojas

September 30, 2010 at 12:39 am

9 Responses

  1. A book is not weighted the same as an article. A book is NOT COUNTED AT ALL. The NRC Report appendix is clear on this. Less clear is whether citations to books count, but it appears not.

    jeremy

    September 30, 2010 at 12:41 am

  2. Books are so 20th century. I look forward to the 2021 round of rankings dropping the measure of article production, too, and choosing instead to count tweets about articles, mentions on certain key blogs, or possibly the number of Facebook friends a department has — these are after all measures amenable to reliable automated data collection and analysis and good proxies for the overall quality of a unit.

    Kieran

    September 30, 2010 at 12:49 am

  3. Seems to me like the baseline of the NRC rankings should be “Is it better than US News & World Report?” since why bother putting every university through a taxpayer-funded bureaucratic ordeal when we could just rely on some private source. Complain as people might about the US News rankings, they don’t make any error equivalent to ranking Berkeley #35 or Miami #7.

    Another baseline could be “Is it better than the component information that goes into the ranking?” In this case, had they just added up the verbal and math GRE scores of individuals in the program, which they had at their disposal, they almost certainly would have done better.

    jeremy

    September 30, 2010 at 1:01 am

  4. This is the point where we ask how, exactly, the whole process came to produce this particular set of rankings. It’s clear there’s a strong political context at work.

    Seems to me like the baseline of the NRC rankings should be “Is it better than US News & World Report?” … Another baseline could be “Is it better than the component information that goes into the ranking?”

    Or, how about, “Is the analysis done in a way that wouldn’t lead you to immediately yell at any grad student who presented it to you as a draft dissertation chapter?”

    Kieran

    September 30, 2010 at 1:08 am

  5. There may also be an interdisciplinary penalty. You get points for interdisciplinary faculty, but I believe you also have to split the pubs and cites if faculty are cross-listed. So people with joint appointments get only half their productivity credited to sociology. From a bookkeeping point of view, that’s easier to program, but from a quality point of view, it seems crazy.

    Philip Cohen

    September 30, 2010 at 1:20 am

  6. Ripping indicators to shreds: one of the favourite pastimes of sociologists. One feels at home here.

    Guillermo

    September 30, 2010 at 11:55 am

  7. Unfortunately, these rankings do have real-world consequences. My institution is taking them seriously and will base decisions about restructuring graduate programs at least partly on the NRC results.

    Bedhaya

    September 30, 2010 at 12:50 pm

  8. Yes. The more I look at these, the more amazed I am by how bad they are.

    On the other hand, this should be a really great year for the graduate admissions committee at the University of Miami.

    Steve Vaisey

    September 30, 2010 at 6:07 pm

  9. A statistician at the University of Chicago shares the consensus view of the comments here: http://chronicle.com/article/A-Critic-Sees-Deep-Problems-in/124725/.

    What’s most troubling about the rankings-itis is how seriously the rankings can influence a school. When I was at UT (Austin) as an MBA student, the dean basically got fired because of the BusinessWeek rankings (the first year they came out). If I recall, the confidence interval spanned the top 20 schools.

    Poor Texas had been ranked 13th in the last US News (or whatever) ranking. And BusinessWeek had them outside of the top 25 (I think).

    It is ironic that schools that work so hard to produce careful scholarship and well-trained students get hammered by such lazy use of measures and statistics.

    David Hoopes

    September 30, 2010 at 7:50 pm

