orgtheory.net

Archive for the ‘rankings and reputation’ Category

the “caren” ranking of sociology programs

A few months ago, Neal Caren posted a citation analysis of sociology journals. The idea is simple – you can map sociology by looking at clusters of citations. Pretty cool, right? You know what's cooler? Using the same technique, you can come up with a new ranking of soc programs. The method is simple:

  1. Start with a cluster analysis of journal cites. Stick to the last five years or so.
  2. Within each cluster, award a department credit for each article that makes, say, the top 20 in that cluster. Exclude dead or retired authors. Exclude authors who have moved to a new campus.
  3. Weight the credit by co-authorship – but keep track of where the authors teach. E.g., Princeton sociology gets 1/2 for DiMaggio and Powell (1983). Stanford soc does NOT get credit because Woody Powell teaches in the education school. Courtesy appointments do not count.
  4. You can then rank within a cluster (e.g., the top 5 departments in the institutions or movements clusters) or create an overall ranking by adding up scores across all clusters. (A sketch of the scoring scheme follows below.)
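
For concreteness, here is a minimal sketch of how steps 2 and 3 might be scored. The data layout (cluster, cites, authors, dept fields) is hypothetical, and it assumes the exclusions (dead or retired authors, movers, courtesy appointments) were already applied when the author lists were coded:

    from collections import defaultdict

    def department_scores(articles, top_n=20):
        """Award departments fractional credit for every article that
        makes the top `top_n` of its citation cluster (steps 2-3)."""
        clusters = defaultdict(list)
        for art in articles:
            clusters[art["cluster"]].append(art)
        scores = defaultdict(float)
        for cluster_articles in clusters.values():
            # Rank articles within the cluster by citation count.
            ranked = sorted(cluster_articles, key=lambda a: a["cites"], reverse=True)
            for art in ranked[:top_n]:
                k = len(art["authors"])  # split credit evenly among co-authors
                for author in art["authors"]:
                    # Credit the department where the author actually teaches.
                    scores[author["dept"]] += 1.0 / k
        return dict(scores)

    # Toy example: DiMaggio and Powell (1983) in an "institutions" cluster.
    articles = [
        {"cluster": "institutions", "cites": 500,
         "authors": [{"dept": "Princeton Sociology"},
                     {"dept": "Stanford Education"}]},  # Powell is in the ed school
    ]
    print(department_scores(articles))
    # {'Princeton Sociology': 0.5, 'Stanford Education': 0.5}

Because the co-author weighting sits on one line, the robustness check mentioned below – dropping fractional weighting entirely – is a one-line change.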

Disadvantages: This method excludes cites in books. For example, most of the cites to my Black Power book are by historians, who mainly write books. This also points to another problem: it emphasizes in-discipline cites. So, if you do education research and they love you in the AERJ, this won't pick you up. Another issue is that if your citations are spread across clusters, you may not make the top of any single cluster, so your work gets undercounted.

Advantages: It is based on behavior and not susceptible to halo effects, because it is not a reputation survey. Also, it's a measure of what people think is important, not what gets into specific journals. However, we would expect the typical highly central person in a cluster to appear because of a well-cited article in a top journal. Another advantage is transparency: no bizarre formulas, aside from standard network measures. Finally, it is easy to test robustness. For example, if you think that fractional weighting for co-authors is misleading, it's easy to drop it and redo the analysis in a way that you think is correct.

Next step: Neal Caren should set up a wiki where we can quickly execute this and replace the misguided NRC/US News rankings.

Adverts: From Black Power/Grad Skool Rulz

Written by fabiorojas

October 24, 2013 at 12:01 am

Reactivity of Rankings: a Pure Case within Reach

[Screenshot: MIT Soc is #1]

The Chronicle reports on a new ranking of "Faculty Media Impact" conducted by the Center for a Public Anthropology. The ranking "seeks to quantify how often professors engage with the public through the news media" and was done by trawling Google News to see which faculty were mentioned in the media most often. The numbers were averaged "and then ranked relative to the federal funds their programs had received" to get the rankings. As you can see from the screenshot above, the ranking found that the top unit at MIT was the Sociology Department. This is fantastic news in terms of impact, because MIT doesn't actually have a Sociology Department. While we've known for a while that quantitative rankings can have interesting reactive effects on the entities they rank, we are clearly in new territory here.

Of course, there are many excellent and high-profile sociologists working at MIT in various units, from the Economic Sociology group at Sloan to sociologists of technology and law housed elsewhere in the university. So you can see how this might have happened. We might draw a small but significant lesson about what's involved in cleaning, coding, and aggregating data. But I see no reason to stop there. The clear implication, it seems to me, is that this might well become the purest case of the reactivity of rankings yet observed. If MIT's Sociology Department has the highest public profile of any unit within the university, then it stands to reason that it must exist. While it may seem locally less tangible than the departments of Brain & Cognitive Sciences, Economics, and Anthropology on the actual campus, this is obviously some sort of temporary anomaly given that it comfortably outranks these units in a widely-used report on the public impact of academic departments. The only conclusion, then, is that the Sociology Department does in fact exist and the MIT administration needs to backfill any apparent ontic absence immediately and bring conditions in the merely physically present university into line with the platonic and universal realm of being that numbers and rankings capture. I look forward to giving a talk at MIT's Sociology Department at the first opportunity.

Written by Kieran

October 8, 2013 at 5:43 pm

more higher education bashing or the end of universities as we know them?

In the past couple of weeks, two journalists who I enjoy reading wrote controversial diatribes about the travesties of contemporary higher education. Both Matt Taibbi and Thomas Frank, each in their own brilliantly polemical way, compared higher education to the housing bubble that led to our last serious financial crisis. Both writers attacked the integrity and ethics of the administrators of the current regime of academia. Both bashed a system that would allow students to acquire more debt than they could possibly pay given the job prospects for which their education prepares them. These are real nuggets that academics ought to consider seriously. Ignore, if it offends you, the abrasive rhetoric, but at the heart of both of their arguments is a logic that ought to resonate with our sociological sensibilities.

Here is Taibbi:

[T]he underlying cause of all that later-life distress and heartache – the reason they carry such crushing, life-alteringly huge college debt – is that our university-tuition system really is exploitative and unfair, designed primarily to benefit two major actors.

First in line are the colleges and universities, and the contractors who build their extravagant athletic complexes, hotel-like dormitories and God knows what other campus embellishments. For these little regional economic empires, the federal student-loan system is essentially a massive and ongoing government subsidy, once funded mostly by emotionally vulnerable parents, but now increasingly paid for in the form of federally backed loans to a political constituency – low- and middle-income students – that has virtually no lobby in Washington.

Next up is the government itself. While it’s not commonly discussed on the Hill, the government actually stands to make an enormous profit on the president’s new federal student-loan system, an estimated $184 billion over 10 years, a boondoggle paid for by hyperinflated tuition costs and fueled by a government-sponsored predatory-lending program that makes even the most ruthless private credit-card company seem like a “Save the Panda” charity.

Read the rest of this entry »

Written by brayden king

September 8, 2013 at 1:13 pm

protect yourself on the internet – the brayden and eszter way

Co-blogger Brayden King and leading Internet scholar Eszter Hargittai wrote a nice post for Kellogg's Executive Education newsletter. The topic: how to cultivate your reputation in the age of social media. A few choice clips:

Let others in your social network do the talking for you. People see impression management as most genuine when others they already trust and respect do it on your behalf. When third parties say positive things about you, they help cement your reputation and create a halo around your activities.

and

Engage critiques from legitimate sources directly and alleviate their concerns openly. As anyone who has spent any time online knows, people love to criticize others and sling a little mud. In many cases these attacks can be ignored, especially when they come from "trolls," or individuals whose sole intent is to pester others, usually from behind a veil of anonymity. In some cases, however, criticism will come from legitimate sources and be a reputational threat.

They are now writing a book on this topic. Recommended.

Adverts: From Black Power/Grad Skool Rulz 

Written by fabiorojas

June 6, 2013 at 12:39 am

the scandal triangle

[Figure: scandal_nyhan]

Brendan Nyhan has a nice post on the sociology of scandal. He summarizes his research on presidential scandal in this way:

My research suggests that the structural conditions are strongly favorable for a major media scandal to emerge. First, I found that new scandals are likely to emerge when the president is unpopular among opposition party identifiers. Obama’s approval ratings are quite low among Republicans (10-18% in recent Gallup surveys), which creates pressure on GOP leaders to pursue scandal allegations as well as audience demand for scandal coverage. Along those lines, John Boehner is reportedly “obsessed” with Benghazi and working closely with Darrell Issa, the House committee chair leading the investigation. You can expect even stronger pressure from the GOP base to pursue the IRS investigations given the explosive nature of the allegations and the way that they reinforce previous suspicions about Obama politicizing the federal government.

In addition, I found that media scandals are less likely to emerge as pressure from other news stories increases. Now that the Boston Marathon bombings have faded from the headlines, there are few major stories in the news, especially with gun control and immigration legislation stalled in Congress. The press is therefore likely to devote more resources and airtime/print to covering the IRS and Benghazi stories than they would in a more cluttered news environment.

I’d also add that “events” have properties. It is easier to scandalize, say, the IRS investigation issue because it is simple. In contrast, the issue of whether the attack in Libya should have been labeled terrorism is probably to esoteric for most folks. If you buy that argument, you get a nice story about the “scandal triangle.” The likelihood of scandal increases when partisan opposition, bored media, and clearly norm-broaching events come together.

Adverts: From Black Power/Grad Skool Rulz

Written by fabiorojas

May 17, 2013 at 12:01 am

do college rankings matter?

The New York Times recently asked a panel of higher education practitioners and scholars about rankings. How much do college rankings influence colleges? I thought Michael Bastedo, organizational scholar and higher ed dude at Michigan, had the right answer – not much, except for elites who are obsessed with prestige:

[that most students are not influenced by ranking] … is confirmed by research I have done on rankings with Nicholas Bowman at Bowling Green State University. If selective colleges move up a few places in the rankings, the effects on admissions indicators, like average SAT and class rank, are minimal. Moving into the top tier of institutions is much more influential — something we called "the front page effect."

It turns out funders aren’t strongly influenced by rankings either. There’s no question that industry and private foundations are more likely to fund prestigious colleges than nonprestigious ones. But our research shows that funders outside higher education barely notice when rankings go up or down.

So who are influenced by rankings? It’s people inside higher education, the ones who are most vulnerable to the vagaries of college prestige. Alumni are more likely to donate when rankings go up, and out-of-state students are more likely to enroll and pay higher tuition. The faculty who make decisions on federal research applications also seem to be influenced.

Check out the whole panel. Definitely worth reading.

Adverts: From Black Power/Grad Skool Rulz

Written by fabiorojas

October 1, 2012 at 12:01 am

the ncaa and penn state’s history

Ever since the NCAA announced it would sanction Penn State for its cover-up of the Sandusky sex abuse scandal, I've been thinking about writing a post related to institutional jurisdictions, authority, and reputation. I completely understand the NCAA's response to the scandal, especially in light of the findings of the Freeh report, and I think this was a very predictable response. Was the punishment harsh? Yes. Was it excessively harsh as a condemnation of the crimes of Sandusky? No. Was the NCAA operating within its jurisdiction and exercising its authority properly in imposing these sanctions? That's debatable (and I'm sure it will be in the months to come).

My colleague Gary Alan Fine, who has thought a lot about scandal and collective memory (e.g., Fine 1997), has offered his thoughts on the sanctions in a New York Times op-ed. Gary questions “history clerks” who attempt to rewrite history as a response to a contemporary event/scandal.

The more significant question is whether rewriting history is the proper answer. And while this is not the first time that game outcomes have been vacated, changing 14 seasons of football history is a unique and disquieting response. We learn bad things about people all the time, but should we change our history? Should we, like Orwell's totalitarian Oceania, have a Ministry of Truth that has the authority to scrub the past? Should our newspapers have to change their back files? And how far should we go?

This is a tricky issue. Everyone can agree that what happened at Penn State was deplorable. However, I think it's perfectly reasonable to question whether the NCAA made these moves more to protect its own reputation and safeguard the purity of college football than as a reasoned response to the institutional crimes committed by Penn State's decision-making authorities. This scandal isn't disappearing anytime soon, and so I expect we'll hear a lot more about this in the months and years to come.

Written by brayden king

July 25, 2012 at 3:37 pm

theories of stupid media

There’s nothing Brendan Nyhan loves more than documenting illogical political reporting. Over Twitter, I bugged him. Does he ever tire of cataloging all the lame media pronouncements? No, he’s just loves it. He’s a true media hound.

But still, is there any evidence that the media becomes more accurate over time? Sure, I can believe that they are accurate in the sense of correctly reporting a quote, but there isn’t much evidence that they can do more than that. Heck, there’s evidence that the more famous a pundit is on TV, the less accurate they become (see Phil Tetlock’s research).

But why is the media so dumb? A few theories:

  1. They aren’t capable of it. Journalists are good at presentation and narrative, not analysis. They are selected for story writing, not understanding sampling or experimental design.
  2. Incentives. Outrageous predictions are fun and get you more attention.
  3. Desire. Maybe journalists simply don’t care about what a well substantiated economic or political analysis looks like.

Tetlock’s research focuses on #2 (e.g., having an academic degree doesn’t increase pundit accuracy). Other evidence?

Adverts: From Black Power/Grad Skool Rulz

Written by fabiorojas

July 2, 2012 at 12:01 am

An Elaboration on Brayden King’s reaction to Neal Caren’s list of most cited works in sociology

Several people have pointed to Neal Caren's list of most cited works. I appreciate how hard it is to do something like this, and I appreciate the work Neal Caren has done. So my criticism is intended more to get us closer to the truth here and to caution against this list getting reified. I also have some suggestions for Neal Caren's next foray here.

The idea, as I understand it, is to try and create a list of the 100 most cited sociology books and papers in the period 2005-2010. Leaving aside the fact that the SSCI undercounts sociology cites by a wide margin (maybe a factor of 400-500% if you believe what comes out of Google Scholar), the basic problem with the list is that it is not based on a good sample of all of the works in sociology. Because the journals were chosen on an ad hoc basis, one has no idea what the bias is in making that choice. The theory Neal Caren is working with is that these journals are somehow a sample of all sociology journals and that their citation patterns reflect the discipline at large. The only way to make this kind of assertion is to randomly sample from all sociology journals.

The idea here is that if Bourdieu's Distinction is really the most cited work in sociology (an inference people are drawing from the list), then it should appear in all sociology articles and all sociology journals at a similar rate. The only way to know if this is true is to sample all journals or all articles, not some subset chosen purposively. Adding ASQ to this does not matter, because it only adds one more arbitrary choice in a nonrandom sampling scheme.

I note that the Social Science Citation Index follows 139 sociology journals. A random sample of 20% would yield 28 journals, and looking at the papers across such a random sample would give us a much better shot at finding out which works are the most cited in sociology.
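
The sampling step itself is easy to script. Here is a minimal sketch in Python, with hypothetical placeholder names standing in for the 139 journal titles the SSCI actually tracks:

    import math
    import random

    # Hypothetical placeholders for the 139 sociology journals the SSCI follows.
    ssci_sociology_journals = [f"Journal {i:03d}" for i in range(1, 140)]

    random.seed(2012)  # fix the seed so the draw is reproducible
    sample_size = math.ceil(0.20 * len(ssci_sociology_journals))  # ceil(27.8) = 28
    sampled = random.sample(ssci_sociology_journals, sample_size)

    print(sample_size)          # 28
    print(sorted(sampled)[:3])  # a few of the sampled titles

The hard part is not the draw but the tally: pulling all SSCI cites for every paper and book that appears in the sampled journals.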

Is there any evidence that the sample chosen by Neal Caren is biased? The last three cites on his list include one by Latour (49 cites), Bryk (49 cites), and Blair-Loy (49 cites). If one goes to the SSCI and looks up all of the cites to these works from 2005-2010, not just the ones that appear in these journals, one comes to a startling result: Latour has 1266 cites, Bryk 124, and Blair-Loy 152. At the top of the list, Bourdieu's Distinction has 218 on Neal Caren's list, but the SSCI shows Distinction as having 865 cites overall. Latour's book should put him at the top of the list, but the way the journals are chosen here puts him at the bottom. It ought to make anyone who looks at this list nervous that Latour's real citation count is 25 times larger than reported, and that it puts him ahead of Bourdieu's Distinction.

The list is also clearly nonrandom in what is left off. Brayden King mentioned that the list is light on organizational and economic sociology. So, I did some checking. Woody Powell's 1990 paper "Neither market nor hierarchy" has 464 cites from 2005-2010, and his 2005 AJS paper with three other colleagues, "Network dynamics and field evolution," has 267 cites. In my own work, my 1996 ASR paper "Markets as politics" has 363 cites and my 2001 book The Architecture of Markets has 454 cites from 2005-2010. If, without much work, I can find four articles or books that have more cites than two of the three bottom entries on the list (i.e., Bryk's 124 and Blair-Loy's 152, counted the same way), there must be lots more missing.

This suggests that if we really want to understand which are the most cited and core works in sociology in any time period, we cannot use purposive samples of journals. What is required is to sample a substantial number of journals and then tally, from the SSCI, all of the cites to the papers and books that appear in them, in order to see which works really are the most cited. I assume that many of the books and papers on the list will still be there, i.e., things like Bourdieu, Granovetter, DiMaggio and Powell, Meyer and Rowan, Swidler, and Sewell. But because of the nonrandom sampling, lots of things that appear to be missing are probably, well, missing.

Written by fligstein

June 5, 2012 at 9:32 pm

where is the org. theory in the most cited works in sociology?

Neal Caren has compiled a list of the 102 most cited works in sociology journals over the last five years. There are a lot of familiar faces at the top of the list. Bourdieu's Distinction, Raudenbush and Bryk's Hierarchical Linear Models, Putnam's Bowling Alone, Wilson's The Truly Disadvantaged, and Granovetter's "Strength of Weak Ties" make up the top 5. It's notable that Granovetter's 1973 piece is the only article in the top 5. The rest are books. I was also interested to see that people are still citing Coleman. He has three works on the list, including his 1990 book at the number 6 spot. Sadly, Selznick is nowhere to be found on the list (but then neither is Stinchcombe). Much of the work is highly theoretical and abstract. There is a smaller, but still prominent, set of work dedicated to methods (e.g., Raudenbush and Bryk). I'm glad to see there is still a place for big theory.

It’s striking, however, how little organizational theory there is on the list.  Not counting Granovetter, whose work is really about networks and the economy broadly, no organizational theory appears on the list until 15 and 16, where Hochschild’s The Managed Heart (which might be there due to the number of citations it gets from gender scholars) and Dimaggio’s and Powell’s 1983 paper show up.  There are several highly influential papers in organizational theory that I was surprised were not on the list. One could deduce from the list that sociology and organizational theory have parted ways.

I don’t think this is really true, but I think it speaks to some trends in sociology. The first is that most organizational sociology, excluding research on work and occupations, no longer appears in generalist sociology journals outside of the American Sociological Review and the American Journal of Sociology. Journals like Social Forces or Social Problems just don’t publish a lot of organizational theory.  Now, there are a lot of great organizational papers that get published in ASR and AJS, but that is a very small subset of the entire population of sociology articles. The second is that Administrative Science Quarterly no longer seems to count in most sociologists’ minds as a sociology journal anymore.  Perhaps its omission  leads to some significant pieces of organizational sociology being underrepresented (or perhaps not since ASQ publishes fewer articles than many of the sociology journals). To be fair to Neal, I don’t think he’s unique among sociologists as failing to recognize ASQ as an important source of sociology.* One reason for this, I’m guessing, is because a lot of non-sociologists publish in it. But a lot of non-sociologists publish in other journals that are on the list as well, including Social Psychological Quarterly, Mobilization, and Social Science Research. Another reason may just be that it’s because a lot of organizational sociology is no longer taking place in sociology departments, making the subfield invisible to our peer sociologists.  Although I have no data to support this, my intuition is that fewer organizational theory classes are taught in sociology Phd programs today than were taught twenty years ago. Because of this, younger sociologists are not coming into contact with organizational theory, and so they are not citing it.  Again, I have no evidence that this is the case.

I don’t think organizational research is waning in quality.  A lot of organizational research still gets published in ASR and AJS. But a lot of it is probably not read or consumed by most sociologists.

UPDATE: Neal has updated the analysis to include ASQ. The major effect has been to boost DiMaggio and Powell to number 10.

*And yes, I’m lobbying Neal to include ASQ in future citation analyses.

Written by brayden king

June 3, 2012 at 11:49 pm

new book – the organization of higher education

My friend Michael Bastedo at Michigan recently published a new collection of essays called The Organization of Higher Education: Managing Colleges for a New Era. The book introduces the reader to cutting-edge research on universities, strategy, and organizational behavior.

Chapters include Brian Pusser and Simon Marginson on global rankings, J. Douglas Toma on strategy, and Anna Neumann on organizational cognition in higher education. Social movement fans should check out my chapter on movements and higher education.

Recommended!

Adverts: From Black Power/Grad Skool Rulz

Written by fabiorojas

May 13, 2012 at 12:01 am

open letter from creative writing faculty against ranking

Oh do we love rankings around here.  Poets & Writers recently came out with their 2012 ranking of MFA programs (here’s their methodology and eighteen measures).  The ranking has problems.  Creative writing faculty address some of these problems in an open letter.  Here’s the New York Observer story (the letter is at the end).  The open letter is also posted below the fold.

(Hat tip to Harriet.)

Read the rest of this entry »

Written by teppo

September 13, 2011 at 4:37 am

joel baum: article effects > journal effects

Joel Baum has written a provocative article which argues and shows that, essentially, article effects are larger than journal effects.

In other words, impact factors and associated journal rankings give the impression of within-journal article homogeneity. But top journals of course have significant variance in article-level citations, and thus journals and less-cited articles essentially "free-ride" on the citations of a few highly-cited pieces. A few articles get lots of citations, most get far fewer — and the former provide a halo for the latter. And "lesser" journals also publish articles that become hits (take Jay Barney's 1991 Journal of Management article, with 17,000+ Google Scholar citations), hits that are more cited than average articles in "top" journals.
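
To see how a few hits can dominate a journal-level average, here is a toy simulation – my own illustration, not Baum's actual analysis – that draws article citation counts from a heavy-tailed distribution and compares the journal's mean (what an impact factor reflects) with the typical article:

    import random
    import statistics

    random.seed(42)

    # Heavy-tailed draws: most articles get a handful of cites, a few get very many.
    cites = [int(random.paretovariate(1.5)) for _ in range(200)]

    top10 = sum(sorted(cites, reverse=True)[:10])
    print("mean cites (impact-factor-like):", round(statistics.mean(cites), 1))
    print("median article:", statistics.median(cites))
    print("share of all cites held by the top 10 articles:", round(top10 / sum(cites), 2))

In draws like these the mean sits well above the median article, which is exactly the free-riding pattern Baum describes.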

The whole journal rankings obsession (and the associated labels: "A" journal, etc.) can be a little nutty, and I think Joel's piece nicely reminds us that, well, article content matters. There's a bit of a "count" culture in some places, where ideas and content get subverted by "how many A pubs" someone has published. Counts trump ideas. At times, high-stakes decisions — hiring, tenure, rewards — also get made based on counts, and thus Joel's piece on article effects is a welcome reminder.

Having said that, I do think that journal effects certainly remain important (a limited number of journals are analyzed in the paper), no question. And citations of course are not the only (nor perfect) measure of impact.

But definitely a fascinating piece.

Here’s the abstract:

The simplicity and apparent objectivity of the Institute for Scientific Information’s Impact Factor has resulted in its widespread use to assess the quality of organization studies journals and by extension the impact of the articles they publish and the achievements of their authors.  After describing how such uses of the Impact Factor can distort both researcher and editorial behavior to the detriment of the field, I show how extreme variability in article citedness permits the vast majority of articles – and journals themselves – to free-ride on a small number of highly-cited articles.  I conclude that the Impact Factor has little credibility as a proxy for the quality of either organization studies journals or the articles they publish, resulting in attributions of journal or article quality that are incorrect as much or more than half the time.  The clear implication is that we need to cease our reliance on such a non-scientific, quantitative characterization to evaluate the quality of our work.

Here’s the paper (posted with permission from Joel):  “Free-riding on power laws: Questioning the validity of the impact factor as a measure of research quality in organization studies.” The paper is forthcoming in Organization.

The article has some great links to the Bill Starbuck piece that Brayden discussed. Here’s Bill’s OMT blog post.

Written by teppo

February 3, 2011 at 4:34 am

more on rankings – quality v. prestige

Teppo’s efforts to create a crowdsourced ranking of management journals tipped off quite a debate on the OMT (Organization and Management Theory) listserv about the validity of such rankings.  The debate centered on whether crowdsourced rankings were too subjective, merely representing prestige differences rather than actual quality differences, and ignored objective data (e.g., citation patterns) for assessing journal quality. Teppo and Bill Starbuck were kind enough to post on the OMT blog some thoughts about the ranking. Bill knows something about journal prestige and quality. In 2005 he published a paper in Organization Science that questioned whether the most prestigious journals actually published the highest quality articles. Here’s the abstract for that paper:

Articles in high-prestige journals receive more citations and more applause than articles in less-prestigious journals, but how much more do these articles contribute to knowledge? This article uses a statistical theory of review processes to draw inferences about differences in value between articles in more-prestigious versus less-prestigious journals. This analysis indicates that there is much overlap in articles in different prestige strata. Indeed, theory implies that about half of the articles published are not among the best ones submitted to those journals, and some of the manuscripts that belong in the highest-value 20% have the misfortune to elicit rejections from as many as five journals. Some social science departments and business schools strongly emphasize publication in prestigious journals. Although one can draw inferences about an author's average manuscript from the percentage in top-tier journals, the confidence limits for such inferences are wide. A focus on prestigious journals may benefit the most prestigious departments or schools but add randomness to the decisions of departments or schools that are not at the very top. Such a focus may also impede the development of knowledge when mediocre research receives the endorsement of high visibility.

Written by brayden king

January 28, 2011 at 5:37 pm
