rank slam!

For previous orgtheory posts on college rankings: b-schools, sociology PhD programs, b-schools again, Sauder on the SAT/ETS.

Inside Higher Ed has a nice article on the annual US News college ranking. As is to be expected for the rest of my natural life, two or three schools jockey for the top spot based on a bizarre weighting of good and bad quality measures. More interesting, the Inside Higher Ed crew examined the survey forms submitted by 48 public institutions. According to “Reputation without Rigor” by Stephanie Lee, a cursory inspection shows fairly little gaming, but a lot of poorly completed surveys. A few choice clips:

Among those that did provide their responses, several revealed major oddities. At the University of Wisconsin at Madison, the provost’s most recent peer assessment form gave the highest possible rating, “distinguished,” to just two institutions: its own and the New School.

To every other university but one, Madison’s response gave the second-lowest rating, “adequate.” Those 260 “adequate” institutions included Harvard, Yale and the rest of the Ivy League, the University of California at Berkeley, the Massachusetts Institute of Technology and Stanford University. Only Arizona State University scored below all the rest, given the lowest rating of “marginal.”

This news surprised Julie Underwood, then-interim provost, when informed by Inside Higher Ed this month. That’s because she didn’t fill out the survey submitted in her name. As many other officials do, she sent it to an administrator to complete on her behalf — in this case, Aaron Brower, vice provost for teaching and learning, who had never filled it out before. Underwood did not give her input or approval before he returned it to U.S. News.

Brower says he was responding as neutrally as possible to a “bad survey” and an “impossible question” that have gained what he views as a “shocking” degree of importance. Universities are “good in some areas and not good in others,” he says, and catch-all ratings ignore those nuances.

“I first looked at this and started considering every institution and trying to fill it out that way. And I thought, if anyone were to ever ask me this and say, ‘Why did you put this institution as strong versus good?,’ I wouldn’t have an answer,” says Brower, who has been researching higher education for 25 years and became vice provost in 2007. “That was to me less defensible than saying, ‘It’s a mixed bag, here’s a neutral response.’ ”

Brower’s idea of a neutral response was to deem every institution “adequate” — which he interpreted to mean “good enough” — except for those he says he knows extremely well. Having worked at Madison since 1986, he says he is “very confident about saying that we’re an excellent, distinguished school.” As for the New School, Brower says he admires its focus on writing seminars, internships and other programs aimed at improving learning outcomes. One of his sons is a rising sophomore at the New School, but Brower says he has long been impressed with the university, regardless of family ties.

He would not elaborate on why he singled out Arizona State as “marginal,” saying, “They were hit very hard by the economy and I know their program and felt like I couldn’t rate them just kind of neutrally.”

Some other oddballs from the batch:

  • The surveys submitted by the president and provost at Ohio State were virtually identical to each other in 2007, 2008 and 2009. For this year’s rankings, the president and provost rated Ohio State “strong” and gave an “adequate” rating to 108 and 104 institutions, respectively. Both gave identical ratings to all members of the Big 10 and the Ivy League, including “strong” ratings for Cornell University, Columbia University, Brown University, Dartmouth College and the University of Pennsylvania. They also both identified the same eight institutions as “distinguished.” Officials at Ohio State did not respond to repeated requests for comments or explanations about the similarities.
  • The presidents and/or provosts of 15 of the 18 universities rated their institutions “distinguished,” from Berkeley (No. 21 on last year’s list) to the University of Missouri at Columbia (No. 96).
  • At Berkeley in 2008, the chancellor gave “strong” ratings to other “top” publics, including the University of Virginia, the University of Michigan at Ann Arbor and the University of North Carolina at Chapel Hill. However, he rated all of the University of California campuses “distinguished,” with the exceptions of Santa Cruz and Riverside, which were also “strong.” (Merced was not on the list.)
  • In a 2009 survey, an official at the University of California at San Diego (No. 35) rated that campus “distinguished,” above the University of Pennsylvania, Duke University, Dartmouth College, Northwestern University and Johns Hopkins University (all “strong”).
  • The president of the University of Florida (No. 49) rated his campus “distinguished” in this year’s survey — along with Harvard, Stanford and MIT — and no other institution in Florida above “good,” as reported by the Gainesville Sun.

The whole article is worth reading. Bottom line: Evaluating schools for quality is a worthy goal, but the US News & World Report survey seems to be a sham. Little effort went into designing a quality instrument, and almost no thought into accounting for respondent biases. Anxious people (including a younger me) cling to snake oil when faced with the stress of choosing a college. Maybe this will encourage other organizations to develop better rankings that provide concrete guidance for students and ways for colleges to improve. Until then, high school seniors might as well go to the lady down the street who will let you chat with your dead pets.


Written by fabiorojas

August 20, 2009 at 4:54 am

2 Responses


  1. Supposedly, they know they have a good ranking as long as it places Harvard and Princeton at the top.



    August 20, 2009 at 2:55 pm

  2. In his autobiography, Trotsky tells a story about how, as a schoolboy, he could not convince the local peasants that his trigonometrical way of measuring the area of a field was quicker and more accurate than their tedious bit-by-bit method. Because whenever they compared the results of the two methods, Trotsky’s was always wrong.



    August 20, 2009 at 3:37 pm

