orgtheory.net

Crowdsourcing Sociology Department Rankings: 2013 Edition

TL;DR version: Click here to Vote in the OrgTheory/AOI 2013 Sociology Department Ranking Survey

As many of you are by now aware, U.S. News and World Report released the 2013 Edition of its Sociology Rankings this week. I find rankings fascinating, not least because of what you might call the “legitimacy ratchet” they implement. Winners insist rankings are absurd but point to their high placing on the list. Here’s a nice example of that from the University of Michigan. The message here is, “We’re not really playing, but of course if we were we’d be winning.” Losers, meanwhile, either remain silent (thus implicitly accepting their fate) or complain about the methods used, and leave themselves open to accusations of sour grapes or bad faith. They are constantly tempted to reject the enterprise and insist they should’ve been ranked higher, and so end up sounding like the apocryphal Borscht Belt couple complaining that the food here is terrible and the portions are tiny as well.

The best thing to do is to implement your own system, and do it better, if only to introduce confusion by way of additional measures. Omar Lizardo and Jessica Collett have already pointed out that U.S. News decided to cook the rankings by averaging the results from this year’s survey with the previous two rounds. They provide an estimate of what the de-averaged results probably looked like. Back in 2011, Steve Vaisey and I ran a poll using Matt Salganik’s excellent All Our Ideas website, which creates rankings from multiple pairwise comparisons. It’s easy to run and generates rankings with high face validity in a way that’s quicker, more fun, and much, much cheaper than the alternatives. So, we’re doing it again this year. Here is the OrgTheory/AOI 2013 Sociology Department Ranking Survey. Go and vote! Chicago people will be happy to hear that you can vote as often as you like. So, participate in your own quantitative domination and get voting.

Written by Kieran

March 13, 2013 at 1:08 pm

citizen science: crowdsourcing observation, data and coding

@viil linked to a Boston Herald article that talks about how crowdsourcing is changing science.  Lots of cool initiatives going on related to “citizen science” — for example, check out zooniverse.org and scistarter.com and projectnoah.org (very cool, including iPhone app to help catalogue species).

Here’s David Kirsch’s previous post on citizen science for organization studies, along with other orgtheory crowdsourcing posts.

Written by teppo

November 19, 2011 at 11:51 pm

guttenplag – crowdsourcing the extent of karl-theodor zu guttenberg’s plagiarism

Germany’s (now former) minister of defense Karl-Theodor zu Guttenberg plagiarized much of his dissertation (a presentation I saw today referred to this incident) — here’s the scoop.  There’s an effort to “crowdsource” the exact extent of the plagiarism, titled “Guttenplag,” a wiki where readers can identify plagiarized pages.  So far readers have found that 76% of the dissertation’s pages have plagiarized content.

Gaddafi’s son’s dissertation is also being checked for plagiarism (well, there might just be a ghost-writer).  And here’s an article on the control+c, control+v plagiarism boom.

With increased digitizing, I’m guessing this type of thing will become more and more automated (text-to-text comparison is easy) and we’re bound to find additional instances (even from the distant past).
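
As a toy illustration of why text-to-text comparison is easy to automate (my own sketch, nothing to do with the Guttenplag tooling), here is a check for shared word 5-grams between a suspect passage and a candidate source; the two sentences are invented.

    # Toy illustration only: flag shared word 5-grams between two passages.
    # Real plagiarism detection adds normalization, indexing, and fuzzier matching.
    def ngrams(text, n=5):
        words = text.lower().split()
        return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

    suspect = "the constitutional framework of the european union rests on treaties"
    source = "scholars agree the constitutional framework of the european union rests chiefly on its founding treaties"

    shared = ngrams(suspect) & ngrams(source)
    for phrase in sorted(shared):
        print(phrase)   # each shared 5-gram is a passage worth inspecting by hand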

Written by teppo

March 3, 2011 at 10:11 pm

crowdsourcing the news: an experiment

Some journalists and a Carnegie Mellon team are experimenting with crowdsourcing the news.   Here’s some intuition on crowdsourcing complex tasks.  Here’s an article with some background.

Written by teppo

February 8, 2011 at 8:56 pm

crowdsourcing sociology department rankings

I think we can all agree that the NRC rankings were a disaster. So, Steve Vaisey and I think that we can generate a new list from scratch. Using Matt Salganik’s excellent “All Our Ideas” site, we’ve set up a tool for pairwise comparison of Sociology Departments. The goal is to get as many head-to-head snap judgments as possible. You can vote as many times as you like — in fact, it’s encouraged. The implicit ranking is generated from the whole collection of pairwise comparisons. Just go to

http://www.allourideas.org/socrankings/

and get voting!

To learn more about All Our Ideas see here.

Update: So, based on publicity from (a) this post, (b) a tweet, and (c) some Facebook status updates, we’re pushing towards 20,000 pairwise votes after about five hours. Not bad. And the results so far look pretty good, certainly if you think of them less as perfectly differentiated ranks and more as banded estimates.

Update 2: Incidentally, several people have taken advantage of the “suggest your own idea” option. Suggestions fall into three categories. First, universities outside North America, which I haven’t been including. Second, schools that are already on the list: San Diego, Riverside, Penn — if you haven’t come across them in your voting, it’s because you haven’t waited long enough to see them. All of the schools ranked in Sociology by the NRC are included. Third, options such as “Harvard Social Relations c. 1955”, “People Who Have Left the University of Wisconsin-Madison”, and (the cleverest one so far) “Tri-Valley Center for Human Potential”. These aren’t going to make it on the list.

Written by Kieran

January 7, 2011 at 8:03 pm

rethinking Jerry Davis

I’ve spent the past few days at the EGOS meetings in Rotterdam. If you’re not an organizational scholar, EGOS is the acronym for the European Group for Organizational Studies – an interdisciplinary network of organizational scholars from both sides of the ocean. The theme of this year’s meeting was reimagining and rethinking organizations during unsettled times. Naturally, they asked Jerry Davis – who has done more reimagining and rethinking of organizational theory than most – to be the keynote speaker.

Jerry’s keynote was, as expected, a witty, concise, empirically-driven argument for why the corporation has ceased to be a major institution in society (the impromptu dancing was an unexpected delight). If you’re not familiar with his argument, you should really read his book, Managed by the Markets, a real page-turner that explains how the growth of financial markets accompanied the deterioration of the public corporation as a major employer and provider of public welfare in contemporary society.  I’ve heard him give a version of this talk several times, and like every other time I left his talk feeling uncomfortable with some of his conclusions. Feeling uncomfortable is an understatement. I disagree with his conclusions. But I still think that Jerry has done an excellent job of marshaling data that can lead to a scarier and even more cynical conclusion than the one he claims.

Read the rest of this entry »

Written by brayden king

July 5, 2014 at 3:53 pm

Posted in brayden, power, the man

Sociology Department Rankings for 2013

Update: I updated these analyses (fixing the double-counting problem). The results changed a little, so reload to see the new figures.

Last week we launched the OrgTheory/AOI 2013 Sociology Department Ranking Survey, taking advantage of Matt Salganik’s excellent All Our Ideas service to generate sociology rankings based on respondents making multiple pairwise comparisons between departments. That is, questions of the form “In your judgment, which of the following is the better Sociology department?” followed by a choice between two departments. Amongst other advantages, this method tends to get you a lot of data quickly. People find it easier to make a pairwise choice between two alternatives than to assign a rating score or produce a complete ranking amongst many alternatives. They also get addicted to the process and keep making choices. In our survey, over 600 respondents made just over 46,000 pairwise comparisons. In the original version of this post I used the Session IDs supplied in the data, forgetting that the data file also provides non-identifying (hashed) IP addresses. I re-ran the analysis using voter-aggregated rather than session-aggregated data, so now there is no double-counting. The results are a little cleaner. Although the All Our Ideas site gives you the results itself, I was interested in getting some other information out of the data, particularly confidence intervals for departments. Here is a figure showing the rankings for the Top 50 departments, based on ability scores derived from a direct-comparison Bradley-Terry model.

"Top 50."

The model doesn’t take account of any rater effects, but given the general state of the U.S. News ranking methodology I am not really bothered. As you can see, the gradation looks pretty smooth. The first real “hinge” in the rankings (in the sense of a pretty clean separation between a department and the one above it) comes between Toronto and Emory. You could make a case, if you squint a bit, that UT Austin and Duke are at a similar hinge-point with respect to the departments ranked above and below them. Indiana’s high ranking is due solely to Fabio mobilizing a team of role-playing enthusiasts to relentlessly vote in the survey. (This is speculation on my part.)
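
For the curious, here is a minimal sketch of how ability scores of this kind get estimated from pairwise votes. It is my own illustration rather than the code behind the figure: it uses the standard MM updates for the Bradley-Terry model, the department names and votes are invented, and it leaves out the confidence intervals and any rater effects.

    # Minimal Bradley-Terry fit via MM updates: p_i <- W_i / sum_j n_ij / (p_i + p_j).
    # Illustrative only; the votes below are invented.
    from collections import defaultdict

    def bradley_terry(votes, n_iter=500, tol=1e-9):
        """Estimate abilities from (winner, loser) pairs."""
        items = sorted({d for pair in votes for d in pair})
        wins = defaultdict(float)           # total wins per department
        n = defaultdict(float)              # comparisons per unordered pair
        for w, l in votes:
            wins[w] += 1
            n[frozenset((w, l))] += 1
        p = {d: 1.0 for d in items}         # start everyone at equal ability
        for _ in range(n_iter):
            new_p = {}
            for i in items:
                denom = sum(n[frozenset((i, j))] / (p[i] + p[j])
                            for j in items if j != i and n[frozenset((i, j))] > 0)
                new_p[i] = wins[i] / denom if denom > 0 else p[i]
            total = sum(new_p.values())     # normalize so abilities sum to one
            new_p = {d: v / total for d, v in new_p.items()}
            if max(abs(new_p[d] - p[d]) for d in items) < tol:
                return sorted(new_p.items(), key=lambda kv: -kv[1])
            p = new_p
        return sorted(p.items(), key=lambda kv: -kv[1])

    votes = [("Berkeley", "Emory"), ("Princeton", "Emory"), ("Berkeley", "Princeton"),
             ("Princeton", "Berkeley"), ("Emory", "Toronto"), ("Toronto", "Emory"),
             ("Berkeley", "Toronto")]
    for dept, ability in bradley_terry(votes):
        print(f"{dept:<10} {ability:.3f}")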

Read the rest of this entry »

Written by Kieran

March 25, 2013 at 3:58 pm

USNWR’s Small N problem

While we’re running our Crowdsourced Sociology Rankings, people have been looking a little more closely at the U.S. News and World Report rankings. Over at Scatterplot, Neal Caren points out that U.S. News’s methods page has some details on the survey sample size and response rates. They’re bad:

Surveys were conducted in fall 2012 by Ipsos Public Affairs … Questionnaires were sent to department heads and directors of graduate studies (or, alternatively, a senior faculty member who teaches graduate students) at schools that had granted a total of five or more doctorates in each discipline during the five-year period from 2005 through 2009, as indicated by the 2010 "Survey of Earned Doctorates." … The surveys asked about Ph.D. programs in criminology (response rate: 90 percent), economics (25 percent), English (21 percent), history (19 percent), political science (30 percent), psychology (16 percent), and sociology (31 percent). … The number of schools surveyed in fall 2012 were: economics—132, English—156, history—151, political science—119, psychology—246, and sociology—117. In fall 2008, 36 schools were surveyed for criminology.

So, following Neal, this tells us the Sociology rankings are based on a survey of 117 Heads and Directors with a response rate of 31 percent, which is thirty-six people in total. For Economics you have 33 people, for History 29 people, for Political Science 36 people, for Psychology 40 people, and for English 33 people. The methods page also notes that they calculate the scores using a trimmed mean, so they throw out two observations each time (the highest and the lowest). The upshot is that the average score of a department is likely to have rather wide confidence intervals.
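
To get a sense of just how wide, here is a back-of-envelope sketch (mine, not USNWR’s). It takes the respondent counts above plus an assumed standard deviation of 0.7 for scores on the 1-5 rating scale (that 0.7 is purely an illustrative guess, not a figure from their data) and computes an approximate 95 percent confidence interval for a department’s mean score.

    # Back-of-envelope: rough 95% CI half-widths given the respondent counts above.
    # The counts follow the post; the assumed_sd value is a purely illustrative guess.
    import math

    respondents = {"Sociology": 36, "Economics": 33, "History": 29,
                   "Political science": 36, "Psychology": 40, "English": 33}
    assumed_sd = 0.7   # hypothetical spread of ratings on the 1-5 scale

    for field, n in respondents.items():
        n_used = n - 2                            # trimmed mean drops high and low
        half_width = 1.96 * assumed_sd / math.sqrt(n_used)
        print(f"{field:<17} n={n:>3}  95% CI roughly +/- {half_width:.2f} on a 1-5 scale")

On that illustrative assumption, each department’s mean score comes with an uncertainty of roughly a quarter of a point either way.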

But, don’t let all that get in the way of contemplating the magic numbers. The press releases from strongly-ranked departments are already coming thick and fast.

Update: These numbers are too low. Read on.

I guess it’s possible that U.S. News *might* mean that the *effective* N of, e.g., the Sociology survey is 117, and that’s the result of a larger initial survey which yielded a 31 percent response rate. On that interpretation they initially contacted 378 departments (or thereabouts). That would be a non-standard way of describing what you did. Normally, if you give a raw number for the sample size and tell us the response rate, the raw number is the N you began with, not the N you ended up with. A quick check of the Survey of Earned Doctorates suggests that there were 167 Ph.D.-granting Sociology programs in the United States in 2010, which suggests that 117 is about right for the number who had awarded five or more in the past five years. Same goes for Economics, which has 179 Ph.D. programs in the 2010 SED. Then again, the wording in the methods can also be read as saying every department might have received two surveys (“Questionnaires were sent to department heads and directors of graduate studies … at schools that had granted a total of five or more doctorates … during the five-year period from 2005 through 2009”). Looking again at the available SED data for 2006 to 2010 (one year off the USNWR dates, unfortunately), I found that 115 Sociology departments met the stated criteria of having awarded five or more doctorates in the previous five years. If both the Dept Head and DGS in all those departments got a survey, this makes for an initial maximum N of 230, which is still quite far from the 378 or so needed, if 117 is supposed to mean the 31 percent who responded rather than the total number initially surveyed.
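
For the record, here is the arithmetic behind the two readings, using only the figures quoted above:

    # Reading 1: 117 is the effective N, i.e. the respondents, so the initial
    # contact list would have to be about 117 / 0.31 departments.
    print(117 / 0.31)   # about 377, i.e. the "378 or thereabouts" above

    # Reading 2: ~115 qualifying departments each get two questionnaires
    # (Head plus DGS), for a maximum number of surveys sent of
    print(115 * 2)      # 230, well short of the ~378 Reading 1 requires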

It seems like the most plausible interpretation is that for Sociology the number of schools surveyed is in fact 117, that every school received two copies of the questionnaire (one to the Head, one to the DGS or equivalent), but that the 31 percent response rate means “schools from which at least one response was received”, and so the total N of respondents for Sociology is somewhere between 36 and 72 people, with a similar range of between 30 and 80 for the other departments.

Update: While I was offline dealing with other things, then looking at the SED data I’d downloaded, then writing the last few paragraphs above, I see others have come to the same conclusion as I do here by more direct and informed means.

Written by Kieran

March 14, 2013 at 4:28 pm

Posted in academia, mere empirics

using mechanical turk for research

Perspectives on Psychological Science has a short piece on using Amazon’s Mechanical Turk as a subject pool: “Amazon’s Mechanical Turk: A New Source for Inexpensive, Yet High-Quality, Data?”

As Google Scholar shows, Mechanical Turk is being used in lots of clever ways.

Mechanical Turk has been called a digital sweatshop.  Here are two perspectives – an Economic Letters piece: “The condition of the Turking class: Are online employers fair and honest?”   And, a piece calling for intervention: “Working the crowd: Employment and labor law in the crowdsourcing industry.”

Here’s the Mechanical Turk page.  Here are some research-related tasks that you can get paid for.

Written by teppo

November 25, 2011 at 12:52 am

joel baum is growing a mustache in movember

Joel Baum is growing a mustache in November.  It’s for a very good cause. He is crowdsourcing the style of mustache he’ll grow.  Any famous social theorists that Joel could emulate?

Be sure to contribute to the cause!  Joel is raising money for prostate cancer awareness.  He asked if I would, in solidarity, also grow a ‘stache in November.  I think I’ll contribute $ instead.

He’ll post pics and ‘stache updates on his “mo space.”  Here’s the email Joel sent (posted with permission).

Dear all –

I am about to embark on an epic, and historic journey.  Specifically, starting tomorrow (Nov 1), I will begin growing a mustache.  And, I will grow it until Nov 30.

It will not be pretty.  It will be grey.

It will not make me look any younger.  It will be itchy. Read the rest of this entry »

Written by teppo

October 31, 2011 at 9:26 pm

open peer review

Academia is one step closer to embracing open peer review (hat tip to David Kirsch). The Andrew W. Mellon Foundation has given NYU and MediaCommons $50,000 to develop and test an open peer review system for academic journals. We’ve had a lot of debate here about how to improve the review process, which has included some modest and crazy proposals. Open peer review is one potential solution to the problems inherent in the review process – e.g., getting good reviewers, determining the quality of papers, etc. Open peer review would allow authors to post their papers online, and anyone could step in and serve as a reviewer, offering public comments and suggestions through multiple iterations of paper revision. Editors use the feedback posted on the online system and monitor revisions to determine which papers should ultimately make it into their journal. It uses a crowdsourcing logic to move papers to publication. With more eyes on a paper, the author gets better feedback during the revision process and editors are better at filtering out lower-quality papers.

A couple of journals have already tried open peer review. In 2006 Nature’s experiment was seen as a failure, but four years later the humanities journal Shakespeare Quarterly used it quite successfully, which has prompted the journal to try it again. I like the idea of making peer review more open and competitive. I’m not one of those who think the peer review system is currently broken (by and large I’m happy with the quality of articles published in our fields’ top journals), but I think we should embrace technological opportunities to improve the system. One upside of an open peer review system would be improvement in paper quality, but more importantly I think that open peer review could speed up the peer review process. If you allow more people to quickly gain access to a paper, you wouldn’t have to wait months and months to hear back from the editors’ assigned reviewers. Feedback and revision could occur simultaneously, which is really an ideal model for social science.

There are some serious downsides to consider. Some authors will resist having their work vetted openly.  Public criticism can be hard to take, especially if you’re a junior person seeking tenure. The system might frustrate scholars of all rank and status who don’t want to let the public in to see their half-baked ideas and analysis. In some ways we’re all invested in the illusion that great scholarship just blossoms on its own – we’d rather not let everyone see how the sausage is made, especially when it’s of our own making. There is also the potential for a tragedy of the commons scenario. Currently, the direct incentive to peer review is to maintain one’s good standing with journals we’d like to publish in some day.  If no one is calling on you to review, the system completely relies on professional norms and reviewers’ good will. Open peer review might work well for one or two special issues, but when the novelty wears off and the system is congested with hundreds of submissions, willingness to review might dissipate. Sadly, I also think it’s possible that the system could be overloaded quickly if everyone just starts posting their crap online. Open peer review would require that authors take responsibility in submitting papers selectively.

I think we could overcome these obstacles, but it would require some innovative solutions (e.g., editors could choose which papers get posted online for review and desk reject the rest). Someone will eventually have to take the risk and volunteer to be the first journal in our field to try the model out. Given the risk involved, my guess is that the instigator will have to be one of the well-established, high-status journals if this is going to have any chance of success. Perhaps a special issue of ASQ or AJS is in order.

Written by brayden king

April 14, 2011 at 3:05 pm

angry birds for the thinking person: digitalkoot

The Finnish National Library, in cooperation with the Finnish company Microtask, has set up an effort to digitize Finnish culture by tapping into the crowd.  Specifically, digitized archives have problems due to mistakes that occur in the scanning process (when translating an image to text), and the Library has set up a Lemmings/Whac-A-Mole-type game to catch these mistakes.  Angry Birds for the thinking person.  Here’s a youtube clip of the game.  More here.  You can play the digitalkoot game here.

This is sort of in the genre of save-the-world-via-gaming, à la Jane McGonigal, blog here.

BONUS:  If contributing to Finnish culture isn’t your thing, then you might take note of DARPA’s effort to crowdsource combat vehicle design.

BONUS 2:  Or, the band R.E.M. is (sorta) doing a crowdsourcing-type thing with their forthcoming album, specifically by letting fans re-mix a song etc.  Here are a bunch of crowdsourced versions of R.E.M.’s forthcoming song ‘It Happened Today.’

Written by teppo

February 9, 2011 at 7:15 am

joel baum: article effects > journal effects

Joel Baum has written a provocative article which argues and shows that, essentially, article effects are larger than journal effects.

In other words, impact factors and associated journal rankings give the impression of within-journal article homogeneity.  But top journals of course have significant variance in article-level citations, and thus journals and less-cited articles essentially “free-ride” on the citations of a few, highly-cited pieces.  A few articles get lots of citations, most get far less — and the former provide a halo for the latter.  And, “lesser” journals also publish articles that become hits (take Jay Barney’s 1991 Journal of Management article, with 17,000+ google scholar citations), hits that are more cited than average articles in “top” journals.
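
To see the free-riding point concretely, here is a toy simulation (my own sketch, not anything from Joel’s paper). It draws article-level citation counts from a heavy-tailed lognormal distribution and compares the journal-level mean (the impact-factor logic) with the median article and with the share of citations captured by the most-cited ten percent. The parameters are arbitrary; only the qualitative pattern matters.

    # Toy simulation of a heavy-tailed citation distribution within one "journal".
    # Parameters are arbitrary; the point is that the mean flatters the typical article.
    import random
    import statistics

    random.seed(1)
    citations = [int(random.lognormvariate(1.0, 1.5)) for _ in range(500)]

    mean_cites = statistics.mean(citations)        # impact-factor-style average
    median_cites = statistics.median(citations)    # the typical article
    top_decile_share = sum(sorted(citations)[-50:]) / sum(citations)

    print(f"mean: {mean_cites:.1f}  median: {median_cites:.1f}")
    print(f"share of all citations held by the top 10% of articles: {top_decile_share:.0%}")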

The whole journal-rankings obsession (and the associated labels: “A” journal, etc.) can be a little nutty, and I think Joel’s piece nicely reminds us that, well, article content matters.  There’s a bit of a “count” culture in some places, where ideas and content get subverted by “how many A pubs” someone has published.  Counts trump ideas.  At times, high stakes decisions — hiring, tenure, rewards — also get made based on counts, and thus Joel’s piece on article effects is a welcome reminder.

Having said that, I do think that journal effects certainly remain important (a limited number of journals are analyzed in the paper), no question. And citations of course are not the only (nor perfect) measure of impact.

But definitely a fascinating piece.

Here’s the abstract:

The simplicity and apparent objectivity of the Institute for Scientific Information’s Impact Factor has resulted in its widespread use to assess the quality of organization studies journals and by extension the impact of the articles they publish and the achievements of their authors.  After describing how such uses of the Impact Factor can distort both researcher and editorial behavior to the detriment of the field, I show how extreme variability in article citedness permits the vast majority of articles – and journals themselves – to free-ride on a small number of highly-cited articles.  I conclude that the Impact Factor has little credibility as a proxy for the quality of either organization studies journals or the articles they publish, resulting in attributions of journal or article quality that are incorrect as much or more than half the time.  The clear implication is that we need to cease our reliance on such a non-scientific, quantitative characterization to evaluate the quality of our work.

Here’s the paper (posted with permission from Joel):  “Free-riding on power laws: Questioning the validity of the impact factor as a measure of research quality in organization studies.” The paper is forthcoming in Organization.

The article has some great links to the Bill Starbuck piece that Brayden discussed. Here’s Bill’s OMT blog post.

Written by teppo

February 3, 2011 at 4:34 am

more on rankings – quality v. prestige

Teppo’s efforts to create a crowdsourced ranking of management journals touched off quite a debate on the OMT (Organization and Management Theory) listserv about the validity of such rankings.  The debate centered on whether crowdsourced rankings were too subjective, merely represented prestige differences rather than actual quality differences, and ignored objective data (e.g., citation patterns) for assessing journal quality. Teppo and Bill Starbuck were kind enough to post on the OMT blog some thoughts about the ranking. Bill knows something about journal prestige and quality. In 2005 he published a paper in Organization Science that questioned whether the most prestigious journals actually published the highest quality articles. Here’s the abstract for that paper:

Articles in high-prestige journals receive more citations and more applause than articles in less-prestigious journals, but how much more do these articles contribute to knowledge?  This article uses a statistical theory of review processes to draw inferences about differences in value between articles in more-prestigious versus less-prestigious journals. This analysis indicates that there is much overlap in articles in different prestige strata. Indeed, theory implies that about half of the articles published are not among the best ones submitted to those journals, and some of the manuscripts that belong in the highest-value 20% have the misfortune to elicit rejections from as many as five journals. Some social science departments and business schools strongly emphasize publication in prestigious journals. Although one can draw inferences about an author’s average manuscript from the percentage in top-tier journals, the confidence limits for such inferences are wide. A focus on prestigious journals may benefit the most prestigious departments or schools but add randomness to the decisions of departments or schools that are not at the very top. Such a focus may also impede the development of knowledge when mediocre research receives the endorsement of high visibility.

Written by brayden king

January 28, 2011 at 5:37 pm

management journals ranking, crowdsourced

Is Administrative Science Quarterly really the #9 journal in management (as suggested by ISI/impact factors a few years ago)?  Pl-eez!  Is Management Science really #24 (as ranked by ISI in 2009) among management journals?  Is the Journal of Product Innovation Management, ahem, really a better management journal than Organization Science (relegated to #13! in 2008)?

Now you can decide.

Inspired by Kieran and Steve’s ranking initiative (of sociology departments, see here), here’s an effort to crowdsource management journal rankings:

RANK HERE: http://www.allourideas.org/management

Sure, a ranking like this has lots of problems: apples and oranges (organizational behavior, strategy, org theory journals all in one), the lack of disciplinary journals (for now), etc. It’s certainly not definitive.  But I think a crowdsourced ranking of management journals might nonetheless be quite informative, and it certainly won’t make the mistake of keeping ASQ, Organization Science or Management Science out of the top 5.  Well, we’ll see.

Updated map of where the votes are coming from:

Written by teppo

January 15, 2011 at 12:20 am

sociology department rankings: now with added legitimacy

So, where should you go to find sociology rankings? The NRC? Ahahahaha. U.S. News and World Report? Perhaps, if you also need to catch up on AARP-related news and events. Google, however, shows there’s a new player in the game:

OrgTheory is your source for Rankings

So there you have it. And if you search for sociology department rankings you’ll find OrgTheory-related material in the first and third hits, with US News managing 2nd (for now) and NRC back somewhere in the dust.

We’ve now gotten about 65,000 votes on the rankings page Steve and I set up. Steve has done some initial analysis of the rankings, too, including this visualization of the rank order as a network where ties are defined by having an overlapping 90 percent confidence interval. The image and the data used to make it can be seen on his site.
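
The idea behind that network is simple, and a minimal sketch of it (my own illustration with invented numbers, not Steve’s actual code or data) looks like this: departments are nodes, and a tie connects any two departments whose 90 percent confidence intervals overlap.

    # Sketch: a tie (edge) connects any two departments whose 90% CIs overlap.
    # The interval bounds below are invented for illustration.
    from itertools import combinations

    ci = {"Dept A": (1.8, 2.4), "Dept B": (2.1, 2.7),
          "Dept C": (2.6, 3.1), "Dept D": (3.3, 3.9)}

    def overlaps(x, y):
        """Two intervals overlap when each starts before the other ends."""
        return x[0] <= y[1] and y[0] <= x[1]

    ties = [(i, j) for i, j in combinations(ci, 2) if overlaps(ci[i], ci[j])]
    print(ties)   # here: A~B and B~C are statistical ties; D stands alone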

Written by Kieran

January 12, 2011 at 9:15 pm

Posted in uncategorized

ncaa basketball tournament, possibilities

There are 2^63, or about 9.2 quintillion, possible ways to fill out the NCAA March Madness brackets, since each of the 63 games has two possible outcomes (for non-US readers, here’s a primer on the madness) — that’s a lot of possibilities.  In other words, not all possible brackets will get filled out by the millions of people engaged in this activity.

So, what’s the best way to make your picks?  Random guessing won’t get you too far.  If you just use the ex ante seeds for filling out your bracket, you’ll do ok. On that, here’s an interesting 2001 Management Science piece highlighting various bracket strategies: “March Madness and the Office Pool.” Here’s a shorter primer on essentially the same piece.

The New York Times has a piece on crowdsourcing the NCAA tournament.

And, Science Daily works through some of the same intuition.

And finally, some Georgia Tech engineering profs say they have it figured out — here’s their LRMC model.  Here are their picks.

Written by teppo

March 19, 2010 at 2:31 am