orgtheory.net

Archive for the ‘rankings and reputation’ Category

inequality perpetuated via organizations – a view from cultural sociology

Sociologists are increasingly recognizing how  organizations facilitate and perpetuate inequality.  Check out the recently published Socio-Economic Review paper, “What is Missing? Cultural Processes and Causal Pathways to Inequality” by Michèle Lamont, Stefan Beljean, and Matthew Clair.

Building on Weber’s concept of rationalization, the authors argue that organizations’ propensity for standardization and evaluation (along with other processes) contributes to inequalities.  Standardization flattens inputs and outputs, subjecting them to comparison along narrow dimensions.  In addition, those who conform to standards can receive needed resources, leaving outliers to scramble for what remains:

Standardization is the process by which individuals, groups and institutions construct ‘uniformities across time and space’ through ‘the generation of agreed-upon rules’ (Timmermans and Epstein, 2010, p. 71). While the process implies intention (‘agreed-upon rules’) on the part of social actors, standardization as a process in everyday life frequently has unintended consequences.  The construction of uniformities becomes habitual and taken for granted once the agreed-upon rules are set in place and codified into institutional and inter-subjective scripts (often formal, albeit sometimes also informal). In its industrial and post-industrial manifestations, the process of standardization is part and parcel of the rationalization and bureaucratization of society (Carruthers and Espeland, 1991; Olshan, 1993; Brunsson and Jacobsson, 2000; Timmermans and Epstein, 2010).

….Moreover, the effects of standardization on inequality are often unintended or indeterminate. Indeed, standards are often implemented with the intent of developing a common benchmark of success or competence and are frequently motivated by positive purposes (e.g. in the case of the adoption of pollution standards or teaching standards). Yet, once institutionalized, standards are often mobilized in the distribution of resources. In this process, in some cases, those who started out with standard relevant resources may be advantaged (Buchmann et al., 2010). In this sense, the consequences of standardization for inequality can be unintentional, indirect and open-ended, as it can exacerbate or abate inequality. Whether they are is an empirical issue to be assessed on a case-by-case basis.

One example of this interaction between standardization and social inequality is the use of standards in education as documented by Neckerman (2007). Among other things, her work analyses the rise of standardized and IQ testing in the 1920s in American education and local Chicago education policy. It shows how standardized test scores came to be used to determine admission to Chicago’s best vocational schools, with the goal of imposing more universalist practices. Yet, in reality, the reform resulted in diminished access to the best schooling for the city’s low-income African-American population…. (591-592).

Similarly, evaluation facilitates and legitimates the differential treatment of individuals:

Evaluation is a cultural process that—broadly defined—concerns the negotiation, definition and stabilization of value in social life (Beckert and Musselin, 2013). According to Lamont (2012, p. 206), this process involves several important sub-processes, most importantly categorization (‘determining in which group the entity [. . .] under consideration belongs’) and legitimation (‘recognition by oneself and others of the value of an entity’).

In the empirical literature, we find several examples of how evaluation as a cultural process can contribute to inequality, many of which are drawn from sociological research on hiring, recruiting and promotion in labour markets. The bulk of these studies concern how evaluation practices of organizations favour or discriminate against certain groups of employees (see, e.g. Castilla and Benard, 2010) or applicants (see, e.g. Rivera, 2012). Yet, some scholars also examine evaluation processes in labour markets from a broader perspective, locating evaluation not only in hiring or promotion but also in entire occupational fields.

For instance, Beljean (2013b) studied standards of evaluation in the cultural industry of stand-up comedy. Drawing on interviews with comedians and their employers as well as ethnographic fieldwork, he finds that even though the work of stand-up comedians is highly uniform in that they all try to make people laugh, there is considerable variation in how comedians are evaluated across different levels of stratification of the comedy industry. Thus, for example, newcomer comedians and star performers are judged against different standards: while the former must be highly adaptable to the taste of different audiences and owners of comedy clubs, the latter are primarily judged by their ability to nurture their fan-base and to sell out shows. Even though this difference does not necessarily translate into more inequality among comedians, it tends to have negative effects on the career prospects of newcomer comedians. Due to mechanisms of cumulative advantage, and because both audiences and bookers tend to be conservative in their judgement, it is easier for more established comedians to maintain their status than for newcomers to build up a reputation. As a result, a few star comedians get to enjoy a disproportionally large share of fame and monetary rewards, while a large majority of comedians remain anonymous and marginalized. (593)

Those looking for ways to curb inequality will not find immediate answers in this article.  The authors do not offer remedies for how organizations can combat such unintended consequences, or even help their members become more self-aware of these tendencies.  Yet, we know from other research that organizations have attempted different measures to minimize bias.  For example, during the 1970s and 1980s, orchestras turned to “blind” auditions to reduce gender bias when considering musicians for hire.  Some have even muffled the floor to prevent judges from hearing the click of heels that might give away the gender of those auditioning.


An example of a blind audition, courtesy of Colorado Springs Philharmonic.

In any case, have a look at the article’s accompanying discussion forum, where fellow scholars Douglas S. Massey, Leslie McCall, Donald Tomaskovic-Devey, Dustin Avent-Holt, Philippe Monin, Bernard Forgues, and Tao Wang weigh in with their own essays.


what should sociology’s image be?

Previously, I have argued that sociology has an image problem. Too many social problems, not enough science. But that still leaves the question open: what, specifically, should our public image be? A few suggestions:

  • Openly embrace positivism/science as our motivation and professional model.
  • The use of science to study social problems, not the science of social problems.
  • The holistic social science that employs different types of data for a rich picture of human life.
  • “The crossroads of the academy”: We can legitimately speak to fields ranging from the biomedical sciences to the most interpretive of the humanities.
  • Offer a few simple-to-understand tools for those in the policy world who are focused on either policy evaluation or measuring social well-being (i.e., go beyond “social studies of policy”).

Of course, we already do a lot of this in our research; we just don’t tell the public about it. In other words, sociology should be the queen of the social sciences, not the museum of social dysfunction.

50+ chapters of grad skool advice goodness: Grad Skool Rulz ($2!!!!)/From Black Power/Party in the Street

Written by fabiorojas

December 16, 2015 at 12:01 am

picking the right metric: from college ratings to the cold war

Two years ago, President Obama announced a plan to create government ratings for colleges—in his words, “on who’s offering the best value so students and taxpayers get a bigger bang for their buck.”

The Department of Education was charged with developing such ratings, but they were quickly mired in controversy. What outcomes should be measured? Initial reports suggested that completion rates and graduates’ earnings would be key. But critics pointed to a variety of problems—ranging from the different missions of different types of colleges, to the difficulties of measuring incomes along a variety of career paths (how do you count the person pursuing a PhD five years after graduation?), to the reductionism of valuing college only by graduates’ incomes.

Well, as of yesterday, it looks like the ratings plan is being dropped. Or rather, it’s become a “college-rating system minus the ratings”, as the Chronicle put it. The new plan is to produce a “consumer-facing tool” where students can compare colleges on a variety of criteria, which will likely include data on net price, completion rates, earning outcomes, and percent Pell Grant recipients, among other metrics. In other words, it will look more like U-Multirank, a big European initiative that was similarly a response to the political difficulty of producing a single official ranking of universities.

A lot of political forces aligned to kill this plan, including Republicans (on grounds of federal mission creep), the for-profit college lobby, and most colleges and universities, which don’t want to see more centralized control.

But I’d like to point to another difficulty it struggled with—one that has been around for a really long time, and that shows up in a lot of different contexts: the criterion problem.

Read the rest of this entry »

Written by epopp

June 26, 2015 at 1:48 pm

how to judge a phd program

When people look at PhD programs, they usually base their judgment on the fame of a program’s scholars or the placement of its graduates. Fair enough, but any seasoned social scientist will tell you that is a very imperfect way to judge an institution. Why? Performance is often related to resources. In other words, you should expect the wealthiest universities to hire away the best scholars and provide the best environment for training.

Thus, we have a null model for judging PhD programs (nothing correlates with success) and a reasonable baseline model (success correlates with money). According to the baseline, PhD program ranks should roughly follow measures of financial resources, like endowments. Thus, the top Ivy League schools should all have elite (top 5) programs in any field in which they choose to compete; anything less is severe underperformance. Similarly, for a research school with a modest endowment to have a top program (say Rutgers in philosophy) is wild overperformance.

According to this wiki on university endowments, the top ten wealthiest institutions are Harvard, Texas (whole system), Yale, Stanford, MIT, Texas A&M (whole system), Northwestern, Michigan, and Penn. This matches roughly with what you’d expect, except that Texas and Texas A&M are top flight in engineering and medicine but much weaker in arts and sciences (compared to their endowment rank). This is why I remain impressed with my colleagues at Indiana sociology. Our system-wide endowment is ranked #46 but our soc program hovers in that 10-15 range. We’re pulling our weight.

50+ chapters of grad skool advice goodness: Grad Skool Rulz ($2!!!!)/From Black Power/Party in the Street!!

Written by fabiorojas

March 19, 2015 at 12:34 am

“there’s no rankings problem that money can’t solve” – the tale of how northeastern gamed the college rankings

There’s a September 2014 Boston.com article on Northeastern University and how it broke into the top 100 of the US News & World Report ranking of colleges and universities. The summary goes something like this: Northeastern’s former president, Richard Freeland, inherited a poorly endowed commuter school. In the modern environment, that leads you to a death spiral. A low profile leads to low enrollments, which leads to low income, which leads to an even lower profile.

The solution? Crack the code to the US News college rankings. He hired statisticians to learn the correlations between inputs and rankings. He visited the US News office to see how they built their system and bugged them about what he thought was unfair. Then, he “legally” (i.e., he didn’t cheat or lie) did things to boost the rank. For example, he moved Northeastern from a commuter to a residential school by building more dorms. He also admitted a different profile of student that wouldn’t depress the mean SAT score and shifted students into programs that were not counted in the US News ranking (e.g., some students are admitted in spring admissions and do not count in the US News score).

Comments: 1. In a way, this is admirable. If the audience for higher education buys into the rankings and you do what the rankings demand, aren’t you giving people what they want? 2. The quote in the title of the post is from Michael Bastedo, a higher ed guru at Michigan, who is pointing out that rankings essentially reflect money. If you buy fancier professors and better facilities, you get better students. The rank improves. 3. Still, this shows how hard it is to move. A nearly billion dollar drive moves you from a so-so rank of about 150 to a so-so rank of about 100-ish. Enough to be “above” the fold, but not enough to challenge the traditional leaders of higher ed.

50+ chapters of grad skool advice goodness: Grad Skool Rulz ($2!!!!)/From Black Power/Party in the Street!!

Written by fabiorojas

February 13, 2015 at 12:01 am

my ref prediction

I’m kind of obsessed with the REF, considering that it has zero direct impact on my life. It’s sort of like watching a train wreck in progress, and every time there’s big REF news I thank my lucky stars I’m in the U.S. and not the U.K.

For those who might not have been paying attention, the REF is the Research Excellence Framework, Britain’s homegrown academic homage to governmentality. Apologies in advance for any incorrect details here; to an outsider, the system’s a bit complex.

Every six years or so, U.K. universities have to submit samples of faculty members’ work – four “research outputs” per person – to a panel of disciplinary experts for evaluation. The panel ranks the outputs from 4* (world leading) to 1* (nationally recognized), although work can also be given no stars. Universities submit the work of most, but not all, of their faculty members; not being submitted to the REF is not, shall we say, a good sign for your career. “Impact” and “environment,” as well as research outputs, are also evaluated at the department level. Oh, and there’s £2 billion of research funding riding on the thing.

The whole system is arcane, and every academic I’ve talked to seems to hate it. Of course, it’s not meant to make academics happy, but to “provide…accountability for public investment in research and produce…evidence of the benefits of this investment.” Well, I don’t know that it’s doing that, but it’s certainly changing the culture of academia. I’d actually be very interested to hear a solid defense of the REF from someone who’s sympathetic to universities, so if you have one, by all means share.

Anyway, 2014 REF results were announced on Friday, to the usual hoopla. (If you’re curious but haven’t been following this, here are the results by field, including Sociology and Business and Management Studies; here are a few pieces of commentary.)

In its current form, outputs are “reviewed” by a panel of scholars in one’s discipline. This was strongly fought for by academics on the grounds that only expert review could be a legitimate way to evaluate research. This peer review, however, has become something of a farce, as panelists are expected to “review” massive quantities of research. (I can’t now find the figure, but I think it’s on the order of 1000 articles per person.)

At the same time, the peer-review element of the process (along with the complex case-study measurement of “impact”) has helped to create an increasingly elaborate, expensive, and energy-consuming infrastructure within universities around the management of the REF process. For example, universities conduct their own large-scale internal review of outputs to try to guess how REF panels will assess them, and to determine which faculty will be included in the REF submission.

All this has led to a renewed conversation about using metrics to distribute the money instead. The LSE Impact of Social Sciences blog has been particularly articulate on this front. The general argument is, “Sure, metrics aren’t great, but neither is the current system, and metrics are a lot simpler and cheaper.”

If I had to place money on it, I would bet that this metrics approach, despite all its limitations, will actually win out in the not-too-distant future. Which is awful, but no more awful than the current version of the REF. Of course metrics can be valuable tools. But as folks who know a thing or two about metrics have pointed out, they’re useful for “facilitating deliberation,” not “substitut[ing] for judgment.” It seems unlikely that any conceivable version of the REF would use metrics as anything other than a substitute for judgment.

In the U.S., this kind of extreme disciplining of the research process does not appear to be just around the corner, although Australia has partially copied the British model. But it is worth paying attention to nonetheless. The current British system took nearly thirty years to evolve into its present shape. One is reminded of the old story about the frog placed in the pot of cool water who, not noticing until too late that it was heating up, inadvertently found himself boiled.


Written by epopp

December 22, 2014 at 4:07 pm

the dilemmas of funding: a commentary on the united negro college fund by melissa wooten

Melissa Wooten is an Assistant Professor of Sociology at the University of Massachusetts, Amherst. Her forthcoming book In the Face of Inequality: How Black Colleges Adapt (SUNY Press 2015) documents how the social structure of race and racism affects an organization’s ability to acquire the financial and political resources it needs to survive.

“Look…Come on…It’s $10 million dollars” is how the Saturday Night Live parody explains the Los Angeles chapter of the National Association for the Advancement of Colored People’s (NAACP) decision to accept donations from now disgraced, soon-to-be-former NBA franchise owner Donald Sterling. This parody encapsulates the dilemma that many organizations working for black advancement face. Fighting for civil rights takes money. But this money often comes from strange quarters. While Sterling’s personal animus toward African Americans captivated the public this spring, his organizational strategy of discriminating against African Americans and Hispanic Americans had already made him infamous among those involved in civil rights years earlier. So why would the NAACP accept money from a man known to actively discriminate against the very people it seeks to help?

A similar question arose when news of the Koch brothers’ $25 million donation to the United Negro College Fund (UNCF) emerged in June. Not only did the UNCF’s willingness to accept this donation raise eyebrows, it also cost the organization the support of AFSCME, a union with which the UNCF had a long-standing relationship. The Koch brothers’ support of policies that would limit early voting, along with their opposition to minimum wage legislation, are but a few of the reasons that have made some skeptical of a UNCF-Koch partnership. So why would the UNCF accept a large donation from two philanthropists known to support policies that would have a disproportionately negative effect on African American communities?

Read the rest of this entry »

Written by fabiorojas

September 2, 2014 at 12:01 am

what’s up with impact factors?

Usually when someone starts throwing citation impact data at me, my eyelids get heavy and I want to crawl into a corner for a nap. As Teppo wrote a couple of years ago, “A focus on impact factors and related metrics can quickly lead to tiresome discussions about which journal is best, is that one better than this, what are the “A” journals, etc. Boring.” I couldn’t agree more. Unfortunately, I’ve heard a lot about impact factors lately. The weight given to impact factors as a metric for assessing intellectual significance seems to have skyrocketed since I began training as a sociologist. Although my school is not one of them, I’ve heard of academic institutions using citation impact as a way to incentivize scholars to publish in certain journals and as a measure of quality in hiring and tenure cases. And yet it has never struck me as a very interesting or useful measure of scholarly worth. I can see the case for why it should be. Discussions about scholarly merit are inherently biased by people’s previous experiences, status, in-group solidarity, personal tastes, etc. It would be nice to have an objective indicator of a scholar’s or a journal’s intellectual significance, and impact factors pretend to be that. From a network perspective it makes sense. The more people who cite you, the more important your ideas should be.

My problem with impact factors is that I don’t trust the measure. I’m skeptical for a few reasons: gaming by editors and authors has made them less reliable, they lack face validity, and the measure is unstable. Let me touch on the gaming issue first.

Read the rest of this entry »

Written by brayden king

August 8, 2014 at 6:15 pm

the persistence of the old regime

Yesterday afternoon I ended up reading this Vox story about an effort to rank US Universities and Colleges carried out in 1911 by a man named Kendric Charles Babcock. On Twitter, Robert Kelchen remarks that the report was "squashed by Taft" (an unpleasant fate), and he links to the report itself, which is terrific. Babcock divided schools into four Classes, beginning with Class I:

And descending all the way to Class IV:

Babcock’s discussion of his methods is admirably brief (the snippet above hints at the one sampling problem that possibly troubled him), so I recommend you read the report yourself.

University reputations are extremely sticky, the conventional wisdom goes. I was interested to see whether Babcock’s report bore that out. I grabbed the US News and World Report National University Rankings and National Liberal Arts College Rankings and made a quick pass through them, coding their 1911 Babcock Class. The question is whether Mr Babcock, should he return to us from the grave, would be satisfied with how his rankings had held up—more than a century of massive educational expansion and alleged disruption notwithstanding.
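For the curious, the coding exercise looks roughly like this in code. This is a toy sketch with invented school names, ranks, and class assignments, not Kieran’s actual data or script; the real version would use the current US News lists hand-coded with Babcock’s 1911 classes.

```python
# A toy version of the "code each school's Babcock Class, then check stickiness"
# exercise. The school names, ranks, and class assignments below are invented.
from collections import defaultdict
from statistics import mean

# (school, current US News rank, 1911 Babcock Class) -- illustrative values only
schools = [
    ("Alpha University",   3, "I"),
    ("Beta University",   18, "I"),
    ("Gamma University",  55, "II"),
    ("Delta College",     90, "III"),
    ("Epsilon College",  140, "IV"),
]

ranks_by_class = defaultdict(list)
for name, usnews_rank, babcock_class in schools:
    ranks_by_class[babcock_class].append(usnews_rank)

# If reputations are sticky, mean current rank should worsen steadily from Class I to IV.
for babcock_class in ["I", "II", "III", "IV"]:
    ranks = ranks_by_class[babcock_class]
    if ranks:
        print(f"Class {babcock_class}: mean current rank = {mean(ranks):.0f} (n = {len(ranks)})")
```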

It turns out that he would be quite pleased with himself.

Read the rest of this entry »

Written by Kieran

August 7, 2014 at 11:19 am

NGOs and reputations

A couple of weeks ago I was at a workshop at Oxford about NGOs and reputations. The workshop was sponsored by the Centre for Corporate Reputation and gathered scholars from a number of disciplinary backgrounds to explore how NGOs create and maintain reputations. In addition, we were interested in examining the reputational consequences that result from their interactions with corporations. At the end of the workshop I shared some of my takeaways.

It occurred to me that a number of the papers in the workshop conceptualized NGO reputation in a similar way to how we think about corporate reputations. For example, we assume that reputations are shared perceptions that reflect how an organization (successfully or unsuccessfully) differentiates itself from competitors, or we learn that organizations strategically try to manage the impressions of their key audiences in order to create a positive reputation. But if NGO reputations are similar in most ways to corporate reputations, do we learn anything new by studying NGOs that we couldn’t learn by studying for-profit organizations? Do NGO reputations differ fundamentally from corporate reputations?

I think they are different in at least one really important way: NGOs are valued because we believe they are somehow more morally authentic than other kinds of organizations. Therefore, an NGO’s reputation is grounded in how well it meets its audience’s expectations for moral authenticity. Two questions might come to mind as I try to make the link between moral authenticity and reputation. The first is, what does it mean to be authentic anyway? It’s quite possible that the term is too fuzzy to be analytically useful, or perhaps we only ascribe authenticity to organizations in a post-hoc way. And second, why should NGOs be expected to be any more morally authentic than other organizations?

Read the rest of this entry »

Written by brayden king

July 30, 2014 at 10:19 pm

‘it’s like rating a blender’

The Obama administration is developing a proposal to rate colleges, a draft of which should come out this fall. Unsurprisingly, college presidents hate the idea. And honestly, so do I.

Org theorists know a thing or two about what happens when you rate things. People change their behavior. In this case, that’s the point — Arne Duncan et al. are hoping that the ratings will create incentives for colleges to graduate more students with less debt and higher post-graduation incomes.

Now, those are obviously not objectionable goals. There are some clear challenges in adjusting for the expected performance of different student bodies, and worries about disincentives to go into low-paying fields like teaching or social work, but who doesn’t want college to be more affordable, somehow?*

The big problem is the outcome that is missing in there: students who have learned things. If you create a system that measures access, completion, debt, and eventual income, and it has any teeth at all, you will get colleges that aim for those things. Unfortunately, those things have a limited relationship to actual learning. Where one conflicts with the other, learning will lose.

Of course, I’m kind of hesitant to say that, because heaven knows what would happen if we started trying to measure learning outcomes at the federal level. No Young Adult Left Behind, I guess. Coursera can sell us the curriculum.

* Another problem worth mentioning is that many adults without degrees don’t see graduation rates and average student debt levels as relevant to their college decision — they think it depends on them, not the school.

Written by epopp

June 11, 2014 at 4:00 pm

the “caren” ranking of sociology programs

A few months ago, Neal Caren posted a citation analysis of sociology journals. The idea is simple – you can map sociology by looking at clusters of citations. Pretty cool, right? You know what’s cooler – using the same technique you can come up with a new ranking of soc programs. The method is simple:

  1. Start with a cluster analysis of journal cites. Stick to the last five years or so.
  2. Within each cluster, award a department credit for each article that makes, say, the top 20 in that cluster. Exclude dead or retired authors. Exclude authors who have moved to a new campus.
  3. Weight the credit by co-authorship – but keep track of where they teach. E.g., Princeton sociology gets 1/2 for DiMaggio and Powell (1983). Stanford soc does NOT get credit because Woody Powell teaches in the education school. Courtesy appointments do not count.
  4. You can then rank within a cluster (e.g., top 5 institutions/movements depts) or create an overall ranking based on adding up scores in all clusters. (A rough sketch of this scoring appears just below.)
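To make the procedure concrete, here is a minimal sketch of how steps 2–4 could be scored, assuming the cluster assignments and citation counts from step 1 are already in hand. The article data, department names, and the restriction to sociology departments are illustrative assumptions, not anyone’s real data or code.

```python
# Hypothetical articles: citation cluster, cite count, and each author's current
# department. Dead/retired authors and movers are assumed to be filtered out upstream.
from collections import defaultdict

articles = [
    {"cluster": "orgs",    "cites": 310, "depts": ["Princeton Sociology", "Stanford Education"]},
    {"cluster": "orgs",    "cites": 120, "depts": ["Indiana Sociology"]},
    {"cluster": "culture", "cites":  95, "depts": ["Harvard Sociology", "Harvard Sociology"]},
]

TOP_N = 20  # award credit only for the top-N cited articles within each cluster

def department_scores(articles, top_n=TOP_N):
    by_cluster = defaultdict(list)
    for art in articles:
        by_cluster[art["cluster"]].append(art)

    scores = defaultdict(lambda: defaultdict(float))  # cluster -> dept -> credit
    for cluster, arts in by_cluster.items():
        top = sorted(arts, key=lambda a: a["cites"], reverse=True)[:top_n]
        for art in top:
            share = 1.0 / len(art["depts"])  # weight credit by co-authorship
            for dept in art["depts"]:
                if dept.endswith("Sociology"):  # e.g., Stanford Education earns no soc credit
                    scores[cluster][dept] += share
    return scores

scores = department_scores(articles)
# Rank within a single cluster...
print(sorted(scores["orgs"].items(), key=lambda kv: kv[1], reverse=True))
# ...or overall, by summing a department's credit across clusters.
overall = defaultdict(float)
for cluster_scores in scores.values():
    for dept, credit in cluster_scores.items():
        overall[dept] += credit
print(sorted(overall.items(), key=lambda kv: kv[1], reverse=True))
```

If you think fractional co-authorship weighting is misleading, dropping the `share` division and rerunning is a one-line change, which is the kind of robustness check mentioned below.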

Disadvantages: This method excludes cites in books. For example, most of the cites to my Black Power book are by historians, who mainly write books. This also points to another problem: it emphasizes in-discipline cites. So, if you do education research and they love you in the AERJ, this won’t pick you up. Another issue is that if your cites are spread across clusters, your work may not crack the top 20 in any single one.

Advantages: It is based on behavior and not susceptible to halo effects because it is not a reputation survey. Also, it’s a measure of what people think is important, not what gets into specific journals. However, we would expect the typical highly central person in a cluster to appear because of a well-cited article in a top journal. Another advantage is transparency: no bizarre formulas, aside from standard network measures. Finally, it is easy to check robustness. For example, if you think that fractional weighting for co-authors is misleading, it’s easy to drop it and redo the analysis in a way that you think is correct.

Next step: Neal Caren should set up a wiki where we can quickly execute this and replace the misguided NRC/US News rankings.

Adverts: From Black Power/Grad Skool Rulz

Written by fabiorojas

October 24, 2013 at 12:01 am

Reactivity of Rankings: a Pure Case within Reach

MIT Soc is #1

The Chronicle reports on a new ranking of “Faculty Media Impact” conducted by the Center for a Public Anthropology. The ranking “seeks to quantify how often professors engage with the public through the news media” and was done by trawling Google News to see which faculty were mentioned in the media most often. The numbers were averaged “and then ranked relative to the federal funds their programs had received” to get the rankings. As you can see from the screenshot above, the ranking found that the top unit at MIT was the Sociology Department. This is fantastic news in terms of impact, because MIT doesn’t actually have a Sociology Department. While we’ve known for a while that quantitative rankings can have interesting reactive effects on the entities they rank, we are clearly in new territory here.

Of course, there are many excellent and high-profile sociologists working at MIT in various units, from the Economic Sociology group at Sloan to sociologists of technology and law housed elsewhere in the university. So you can see how this might have happened. We might draw a small but significant lesson about what’s involved in cleaning, coding, and aggregating data. But I see no reason to stop there. The clear implication, it seems to me, is that this might well become the purest case of the reactivity of rankings yet observed. If MIT’s Sociology Department has the highest public profile of any unit within the university, then it stands to reason that it must exist. While it may seem locally less tangible than the departments of Brain & Cognitive Sciences, Economics, and Anthropology on the actual campus, this is obviously some sort of temporary anomaly given that it comfortably outranks these units in a widely-used report on the public impact of academic departments. The only conclusion, then, is that the Sociology Department does in fact exist and the MIT administration needs to backfill any apparent ontic absence immediately and bring conditions in the merely physically present university into line with the platonic and universal realm of being that numbers and rankings capture. I look forward to giving a talk at MIT’s Sociology Department at the first opportunity.

Written by Kieran

October 8, 2013 at 5:43 pm

more higher education bashing or the end of universities as we know them?

In the past couple of weeks, two journalists whom I enjoy reading wrote controversial diatribes about the travesties of contemporary higher education. Both Matt Taibbi and Thomas Frank, each in their own brilliantly polemical ways, compared higher education to the housing bubble that led to our last serious financial crisis. Both writers attacked the integrity and ethics of the administrators of the current regime of academia. Both bashed a system that would allow students to acquire more debt than they could possibly pay given the job prospects for which their education prepares them. These are real nuggets that academics ought to consider seriously. Ignore, if it offends you, the abrasive rhetoric, but at the heart of both of their arguments is a logic that ought to resonate with our sociological sensibilities.

Here is Taibbi:

[T]he underlying cause of all that later-life distress and heartache – the reason they carry such crushing, life-alteringly huge college debt – is that our university-tuition system really is exploitative and unfair, designed primarily to benefit two major actors.

First in line are the colleges and universities, and the contractors who build their extravagant athletic complexes, hotel-like dormitories and God knows what other campus embellishments. For these little regional economic empires, the federal student-loan system is essentially a massive and ongoing government subsidy, once funded mostly by emotionally vulnerable parents, but now increasingly paid for in the form of federally backed loans to a political constituency – low- and middle-income students – that has virtually no lobby in Washington.

Next up is the government itself. While it’s not commonly discussed on the Hill, the government actually stands to make an enormous profit on the president’s new federal student-loan system, an estimated $184 billion over 10 years, a boondoggle paid for by hyperinflated tuition costs and fueled by a government-sponsored predatory-lending program that makes even the most ruthless private credit-card company seem like a “Save the Panda” charity.

Read the rest of this entry »

Written by brayden king

September 8, 2013 at 1:13 pm

protect yourself on the internet – the brayden and eszter way

Co-blogger Brayden King and leading Internet scholar Eszter Hargittai wrote a nice post for Kellogg’s Executive Education newsletter. The topic: how to cultivate your reputation in the age of social media. A few choice clips:

Let others in your social network do the talking for you. People see impression management as most genuine when others they already trust and respect do it on your behalf. When third parties say positive things about you, they help cement your reputation and create a halo around your activities.

and

 Engage critiques from legitimate sources directly and alleviate their concerns openly. As anyone who has spent any time online knows, people love to criticize others and sling a little mud. In many cases these attacks can be ignored, especially when they come from “trolls,” or individuals whose sole intent is to pester others, usually from behind a veil of anonymity. In some cases, however, criticism will come from legitimate sources and be a reputational threat.

They are now writing a book on this topic. Recommended.

Adverts: From Black Power/Grad Skool Rulz 

Written by fabiorojas

June 6, 2013 at 12:39 am

the scandal triangle


Brendan Nyhan has a nice post on the sociology of scandal. He summarizes his research on presidential scandal in this way:

My research suggests that the structural conditions are strongly favorable for a major media scandal to emerge. First, I found that new scandals are likely to emerge when the president is unpopular among opposition party identifiers. Obama’s approval ratings are quite low among Republicans (10-18% in recent Gallup surveys), which creates pressure on GOP leaders to pursue scandal allegations as well as audience demand for scandal coverage. Along those lines, John Boehner is reportedly “obsessed” with Benghazi and working closely with Darrell Issa, the House committee chair leading the investigation. You can expect even stronger pressure from the GOP base to pursue the IRS investigations given the explosive nature of the allegations and the way that they reinforce previous suspicions about Obama politicizing the federal government.

In addition, I found that media scandals are less likely to emerge as pressure from other news stories increases. Now that the Boston Marathon bombings have faded from the headlines, there are few major stories in the news, especially with gun control and immigration legislation stalled in Congress. The press is therefore likely to devote more resources and airtime/print to covering the IRS and Benghazi stories than they would in a more cluttered news environment.

I’d also add that “events” have properties. It is easier to scandalize, say, the IRS investigation issue because it is simple. In contrast, the issue of whether the attack in Libya should have been labeled terrorism is probably too esoteric for most folks. If you buy that argument, you get a nice story about the “scandal triangle.” The likelihood of scandal increases when partisan opposition, bored media, and clearly norm-breaching events come together.

Adverts: From Black Power/Grad Skool Rulz

Written by fabiorojas

May 17, 2013 at 12:01 am

do college rankings matter?

The New York Times recently asked a panel of higher education practitioners and scholars about rankings. How much do college rankings influence colleges? I thought Michael Bastedo, organizational scholar and higher ed dude at Michigan, had the right answer – not much, except for elites who are obsessed with prestige:

[that most students are not influenced by ranking] …  is confirmed by research I have done on rankings with Nicholas Bowman at Bowling Green State University. If selective colleges move up a few places in the rankings, the effects on admissions indicators, like average SAT and class rank, are minimal. Moving into the top tier of institutions is much more influential — something we called “the front page effect.”

It turns out funders aren’t strongly influenced by rankings either. There’s no question that industry and private foundations are more likely to fund prestigious colleges than nonprestigious ones. But our research shows that funders outside higher education barely notice when rankings go up or down.

So who are influenced by rankings? It’s people inside higher education, the ones who are most vulnerable to the vagaries of college prestige. Alumni are more likely to donate when rankings go up, and out-of-state students are more likely to enroll and pay higher tuition. The faculty who make decisions on federal research applications also seem to be influenced.

Check out the whole panel. Definitely worth reading.

Adverts: From Black Power/Grad Skool Rulz

Written by fabiorojas

October 1, 2012 at 12:01 am

the ncaa and penn state’s history

Ever since the NCAA announced they would sanction Penn State for its cover-up of the Sandusky sex abuse scandal, I’ve been thinking about writing a post related to institutional jurisdictions, authority, and reputation.  I completely understand the NCAA’s response to the scandal, especially in light of the findings of the Freeh report, and I think this was a very predictable response. Was the punishment harsh? Yes. Was it excessively harsh as a condemnation of the crimes of Sandusky? No.  Was the NCAA operating within its jurisdiction and exercising proper use of authority by making these sanctions? That’s debatable (and I’m sure it will be in the months to come).

My colleague Gary Alan Fine, who has thought a lot about scandal and collective memory (e.g., Fine 1997), has offered his thoughts on the sanctions in a New York Times op-ed. Gary questions “history clerks” who attempt to rewrite history as a response to a contemporary event/scandal.

The more significant question is whether rewriting history is the proper answer. And while this is not the first time that game outcomes have been vacated, changing 14 seasons of football history is a unique and  disquieting response. We learn bad things about people all the time, but should we change our history? Should we, like Orwell’s totalitarian Oceania, have a Ministry of Truth that has the authority to scrub the past?  Should our newspapers have to change their back files? And how far should we go?

This is a tricky issue. Everyone can agree that what happened at Penn State was deplorable. However, I think it’s perfectly reasonable to question whether the NCAA made these moves more as an effort to protect its own reputation and to safeguard the purity of college football, rather than as a reasoned response to the institutional crimes committed by Penn State’s decision-making authorities.  This scandal isn’t disappearing anytime soon, and so I expect we’ll hear a lot more about this in the months and years to come.

Written by brayden king

July 25, 2012 at 3:37 pm

theories of stupid media

There’s nothing Brendan Nyhan loves more than documenting illogical political reporting. Over Twitter, I bugged him. Does he ever tire of cataloging all the lame media pronouncements? No, he just loves it. He’s a true media hound.

But still, is there any evidence that the media becomes more accurate over time? Sure, I can believe that they are accurate in the sense of correctly reporting a quote, but there isn’t much evidence that they can do more than that. Heck, there’s evidence that the more famous a pundit is on TV, the less accurate they become (see Phil Tetlock’s research).

But why is the media so dumb? A few theories:

  1. They aren’t capable of it. Journalists are good at presentation and narrative, not analysis. They are selected for story writing, not understanding sampling or experimental design.
  2. Incentives. Outrageous predictions are fun and get you more attention.
  3. Desire. Maybe journalists simply don’t care about what a well-substantiated economic or political analysis looks like.

Tetlock’s research focuses on #2 (e.g., having an academic degree doesn’t increase pundit accuracy). Other evidence?

Adverts: From Black Power/Grad Skool Rulz

Written by fabiorojas

July 2, 2012 at 12:01 am

An Elaboration on Brayden King’s reaction to Neal Caren’s list of most cited works in sociology

Several people have pointed out Neal Caren’s list of the most cited works. I appreciate how hard it is to do something like this, and I appreciate the work Neal Caren has done. So my criticism is intended more to get us closer to the truth here and to caution against this list getting reified. I also have some suggestions for Neal Caren’s next foray here.

The idea, as I understand it, is to try and create a list of the 100 most cited sociology books and papers in the period 2005-2010. Leaving aside the fact that the SSCI undercounts sociology cites by a wide margin (maybe a factor of 400-500% if you believe what comes out of Google Scholar), the basic problem with the list is that it is not based on a good sample of all of the works in sociology. Because the journals were chosen on an ad hoc basis, one has no idea as to what the bias is in making that choice. The theory Neal Caren is working with is that these journals are somehow a sample of all sociology journals and that their citation patterns reflect the discipline at large. The only way to make this kind of assertion is to randomly sample from all sociology journals.

The idea here is that if Bourdieu’s Distinction is really the most cited work in sociology (an inference people are drawing from the list), then it should appear across all sociology articles and all sociology journals at a similar rate. The only way to know if this is true is to sample all journals or all articles, not some subset chosen purposively. Adding ASQ to this does not matter, because it only adds one more arbitrary choice in a nonrandom sampling scheme.

I note that the Social Science Citation Index follows 139 sociology journals. A random sample of 20% would yield 28 journals, and looking at the papers across such a random sample of journals would give us a better idea of which works are the most cited in sociology.
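To make the proposal concrete, here is a minimal sketch of that sampling step, with placeholder journal names and hypothetical reference data; the real exercise would pull reference lists and total cite counts from the SSCI for 2005-2010.

```python
# A rough sketch of drawing a 20% random sample of the 139 SSCI sociology journals
# and tallying which works are cited in articles from the sampled journals.
import random
from collections import Counter

random.seed(2012)

ssci_sociology_journals = [f"Journal {i:03d}" for i in range(1, 140)]  # placeholders

sample_size = round(0.20 * len(ssci_sociology_journals))  # ~28 journals
sampled_journals = set(random.sample(ssci_sociology_journals, sample_size))
print(f"Sampled {sample_size} of {len(ssci_sociology_journals)} journals")

# Hypothetical (journal, cited work) pairs; in practice, every reference in every
# article published 2005-2010 in the sampled journals.
references = [
    ("Journal 007", "Bourdieu, Distinction"),
    ("Journal 007", "Latour, Reassembling the Social"),
    ("Journal 101", "Granovetter, Strength of Weak Ties"),
]

tally = Counter(work for journal, work in references if journal in sampled_journals)
print(tally.most_common(10))
# The final step would then look up each candidate work's *total* SSCI cites for
# 2005-2010, not just the cites appearing within the sampled journals.
```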

Is there any evidence that the nonrandom sample chosen by Neal Caren is biased? The last three cites on his list include one by Latour (49 cites), Bryk (49 cites) and Blair-Loy (49 cites). If one goes to the SSCI and looks up all of the cites to these works from 2005-2010, not just the ones that appear in these journals, one comes to a startling result: Latour has 1266 cites, Bryk 124, and Blair-Loy 152. At the top of the list, Bourdieu’s Distinction has 218 on Neal Caren’s list, but the SSCI shows Distinction as having 865 cites overall. Latour’s book should put him at the top of the list, but the way the journals are chosen here puts him at the bottom. It ought to make anyone who looks at this list nervous that Latour’s real citation count is 25 times larger than reported and that it puts him ahead of Bourdieu’s Distinction.

The list is also clearly nonrandom in terms of what is left off. Brayden King mentioned that the list is light on organizational and economic sociology. So, I did some checking. Woody Powell’s 1990 paper “Neither markets nor hierarchies” has 464 cites from 2005-2010, and his paper with three other colleagues that appeared in the AJS in 2005, “Network dynamics and field evolution,” has 267 cites. In my own work, my 1996 ASR paper “Markets as politics” has 363 cites and my 2001 book “The Architecture of Markets” has 454 from 2005-2010. If, without much work, I can find four articles or books that have more cites than two of the three bottom entries on the list (i.e. Bryk’s 124 and Blair-Loy’s 152, counted the same way), there must be lots more missing.

This suggests that if we really want to understand which are the most cited and core works in sociology in any time period, we cannot use purposive samples of journals. What is required is a substantial random sample of journals, with all of the cites to the papers and books appearing in them then tallied from the SSCI, in order to see which works really are the most cited. I assume that many of the books and papers on the list will still be there, i.e. things like Bourdieu, Granovetter, DiMaggio and Powell, Meyer and Rowan, Swidler, and Sewell. But because of the nonrandom sampling, lots of things that appear to be missing are probably, well, missing.

Written by fligstein

June 5, 2012 at 9:32 pm

where is the org. theory in the most cited works in sociology?

Neal Caren has compiled a list of the 102 most cited works in sociology journals over the last five years. There are a lot of familiar faces at the top of the list. Bourdieu’s Distinction, Raudenbush’s and Bryk’s Hierarchical Linear Models, Putnam’s Bowling Alone, Wilson’s The Truly Disadvantaged, and Granovetter’s “Strength of Weak Ties” make up the top 5.  It’s notable that Granovetter’s 1973 piece is the only article in the top 5. The rest are books. I was also interested to see that people are still citing Coleman.  He has three works on the list, including his 1990 book at the number 6 spot.  Sadly, Selznick is nowhere to be found on the list (but then neither is Stinchcombe).  Much of the work is highly theoretical and abstract. There is a smaller, but still prominent, set of work dedicated to methods (e.g., Raudenbush and Bryk). I’m glad to see there is still a place for big theory.

It’s striking, however, how little organizational theory there is on the list.  Not counting Granovetter, whose work is really about networks and the economy broadly, no organizational theory appears on the list until 15 and 16, where Hochschild’s The Managed Heart (which might be there due to the number of citations it gets from gender scholars) and DiMaggio’s and Powell’s 1983 paper show up.  There are several highly influential papers in organizational theory that I was surprised were not on the list. One could deduce from the list that sociology and organizational theory have parted ways.

I don’t think this is really true, but I think it speaks to some trends in sociology. The first is that most organizational sociology, excluding research on work and occupations, no longer appears in generalist sociology journals outside of the American Sociological Review and the American Journal of Sociology. Journals like Social Forces or Social Problems just don’t publish a lot of organizational theory.  Now, there are a lot of great organizational papers that get published in ASR and AJS, but that is a very small subset of the entire population of sociology articles. The second is that Administrative Science Quarterly no longer seems to count in most sociologists’ minds as a sociology journal.  Perhaps its omission leads to some significant pieces of organizational sociology being underrepresented (or perhaps not, since ASQ publishes fewer articles than many of the sociology journals). To be fair to Neal, I don’t think he’s unique among sociologists in failing to recognize ASQ as an important source of sociology.* One reason for this, I’m guessing, is that a lot of non-sociologists publish in it. But a lot of non-sociologists publish in other journals that are on the list as well, including Social Psychology Quarterly, Mobilization, and Social Science Research. Another reason may just be that a lot of organizational sociology is no longer taking place in sociology departments, making the subfield invisible to our peer sociologists.  Although I have no data to support this, my intuition is that fewer organizational theory classes are taught in sociology PhD programs today than were taught twenty years ago. Because of this, younger sociologists are not coming into contact with organizational theory, and so they are not citing it.  Again, I have no evidence that this is the case.

I don’t think organizational research is waning in quality.  A lot of organizational research still gets published in ASR and AJS. But a lot of it is probably not read or consumed by most sociologists.

UPDATE: Neal has updated the analysis to include ASQ. The major effect has been to boost DiMaggio and Powell to number 10.

*And yes, I’m lobbying Neal to include ASQ in future citation analyses.

Written by brayden king

June 3, 2012 at 11:49 pm

new book – the organization of higher education

My friend Michael Bastedo at Michigan recently published a new collection of essays called The Organization of Higher Education: Managing Colleges for a New Era. The book introduces the reader to cutting-edge research on universities, strategy, and organizational behavior.

Chapters include Brian Pusser and Simon Marginson on global rankings, J. Douglas Toma on strategy, and Anna Neumann on organizational cognition in higher education. Social movement fans should check out my chapter on movements and higher education.

Recommended!

Adverts: From Black Power/Grad Skool Rulz

Written by fabiorojas

May 13, 2012 at 12:01 am

open letter from creative writing faculty against ranking

Oh do we love rankings around here.  Poets & Writers recently came out with their 2012 ranking of MFA programs (here’s their methodology and eighteen measures).  The ranking has problems.  Creative writing faculty address some of these problems in an open letter.  Here’s the New York Observer story (the letter is at the end).  The open letter is also posted below the fold.

(Hat tip to Harriet.)

Read the rest of this entry »

Written by teppo

September 13, 2011 at 4:37 am

joel baum: article effects > journal effects

Joel Baum has written a provocative article which argues and shows that, essentially, article effects are larger than journal effects.

In other words, impact factors and associated journal rankings give the impression of within-journal article homogeneity.  But top journals of course have significant variance in article-level citations, and thus journals and less-cited articles essentially “free-ride” on the citations of a few highly cited pieces.  A few articles get lots of citations while most get far fewer — and the former provide a halo for the latter.  And “lesser” journals also publish articles that become hits (take Jay Barney’s 1991 Journal of Management article, with 17,000+ Google Scholar citations), hits that are more cited than average articles in “top” journals.
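The mechanics are easy to see with a small simulation. The following is a rough, self-contained illustration of the point — not Joel’s analysis, and the distribution parameters are made up: when per-article citations are heavy-tailed, a journal-level average like the impact factor says little about the typical article.

```python
# Simulate per-article citations for one "journal" with a heavy-tailed (roughly
# power-law) distribution: most articles get few cites, a handful get many.
import random
import statistics

random.seed(1991)

articles = [int(random.paretovariate(1.5)) for _ in range(200)]

impact_proxy = statistics.mean(articles)       # what an impact factor averages over
typical_article = statistics.median(articles)  # what a typical article actually gets

top_10 = sorted(articles, reverse=True)[:10]
share_from_top_10 = sum(top_10) / sum(articles)

print(f"mean cites per article (impact-factor-like): {impact_proxy:.1f}")
print(f"median cites per article:                    {typical_article:.1f}")
print(f"share of all cites from the top 10 articles: {share_from_top_10:.0%}")
```

In a run like this, the mean sits well above the median and the handful of most-cited articles accounts for a disproportionate share of all citations, which is exactly the free-riding Joel describes.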

The whole journal-rankings obsession (and the associated labels: “A” journal, etc.) can be a little nutty, and I think Joel’s piece nicely reminds us that, well, article content matters.  There’s a bit of a “count” culture in some places, where ideas and content get subverted by “how many A pubs” someone has published.  Counts trump ideas.  At times, high-stakes decisions — hiring, tenure, rewards — also get made based on counts, and thus Joel’s piece on article effects is a welcome reminder.

Having said that, I do think that journal effects certainly remain important (a limited number of journals are analyzed in the paper), no question. And citations of course are not the only (nor perfect) measure of impact.

But definitely a fascinating piece.

Here’s the abstract:

The simplicity and apparent objectivity of the Institute for Scientific Information’s Impact Factor has resulted in its widespread use to assess the quality of organization studies journals and by extension the impact of the articles they publish and the achievements of their authors.  After describing how such uses of the Impact Factor can distort both researcher and editorial behavior to the detriment of the field, I show how extreme variability in article citedness permits the vast majority of articles – and journals themselves – to free-ride on a small number of highly-cited articles.  I conclude that the Impact Factor has little credibility as a proxy for the quality of either organization studies journals or the articles they publish, resulting in attributions of journal or article quality that are incorrect as much or more than half the time.  The clear implication is that we need to cease our reliance on such a non-scientific, quantitative characterization to evaluate the quality of our work.

Here’s the paper (posted with permission from Joel):  “Free-riding on power laws: Questioning the validity of the impact factor as a measure of research quality in organization studies.” The paper is forthcoming in Organization.

The article has some great links to the Bill Starbuck piece that Brayden discussed. Here’s Bill’s OMT blog post.

Written by teppo

February 3, 2011 at 4:34 am

more on rankings – quality v. prestige

Teppo’s efforts to create a crowdsourced ranking of management journals set off quite a debate on the OMT (Organization and Management Theory) listserv about the validity of such rankings.  The debate centered on whether crowdsourced rankings were too subjective, merely representing prestige differences rather than actual quality differences, and whether they ignored objective data (e.g., citation patterns) for assessing journal quality. Teppo and Bill Starbuck were kind enough to post on the OMT blog some thoughts about the ranking. Bill knows something about journal prestige and quality. In 2005 he published a paper in Organization Science that questioned whether the most prestigious journals actually published the highest quality articles. Here’s the abstract for that paper:

Articles in high-prestige journals receive more citations and more applause than articles in less-prestigious journals, but how much more do these articles contribute to knowledge?  This article uses a statistical theory of review processes to draw inferences about differences in value between articles in more-prestigious versus less-prestigious journals. This analysis indicates that there is much overlap in articles in different prestige strata. Indeed, theory implies that about half of the articles published are not among the best ones submitted to those journals, and some of the manuscripts that belong in the highest-value 20% have the misfortune to elicit rejections from as many as five journals. Some social science departments and business schools strongly emphasize publication in prestigious journals. Although one can draw inferences about an author’s average manuscript from the percentage in top-tier journals, the confidence limits for such inferences are wide. A focus on prestigious journals may benefit the most prestigious departments or schools but add randomness to the decisions of departments or schools that are not at the very top. Such a focus may also impede the development of knowledge when mediocre research receives the endorsement of high visibility.

Written by brayden king

January 28, 2011 at 5:37 pm