Archive for the ‘rankings and reputation’ Category

inequality perpetuated via organizations – a view from cultural sociology

Sociologists are increasingly recognizing how organizations facilitate and perpetuate inequality. Check out the recently published Socio-Economic Review paper, “What is Missing? Cultural Processes and Causal Pathways to Inequality” by Michèle Lamont, Stefan Beljean, and Matthew Clair.

Building on Weber’s concept of rationalization, the authors argue that organizations’ propensity for standardization and evaluation, along with other processes, contributes to inequality. Standardization flattens inputs and outputs, subjecting them to comparison along narrow dimensions. In addition, those who conform to standards can receive needed resources, leaving outliers to scrap over what remains:

Standardization is the process by which individuals, groups and institutions construct ‘uniformities across time and space’ through ‘the generation of agreed-upon rules’ (Timmermans and Epstein, 2010, p. 71). While the process implies intention (‘agreed-upon rules’) on the part of social actors, standardization as a process in everyday life frequently has unintended consequences.  The construction of uniformities becomes habitual and taken for granted once the agreed-upon rules are set in place and codified into institutional and inter-subjective scripts (often formal, albeit sometimes also informal). In its industrial and post-industrial manifestations, the process of standardization is part and parcel of the rationalization and bureaucratization of society (Carruthers and Espeland, 1991; Olshan, 1993; Brunsson and Jacobsson, 2000; Timmermans and Epstein, 2010).

…Moreover, the effects of standardization on inequality are often unintended or indeterminate. Indeed, standards are often implemented with the intent of developing a common benchmark of success or competence and are frequently motivated by positive purposes (e.g. in the case of the adoption of pollution standards or teaching standards). Yet, once institutionalized, standards are often mobilized in the distribution of resources. In this process, in some cases, those who started out with standard-relevant resources may be advantaged (Buchmann et al., 2010). In this sense, the consequences of standardization for inequality can be unintentional, indirect and open-ended, as it can exacerbate or abate inequality. Whether they are is an empirical issue to be assessed on a case-by-case basis.

One example of this interaction between standardization and social inequality is the use of standards in education as documented by Neckerman (2007). Among other things, her work analyses the rise of standardized and IQ testing in the 1920s in American education and local Chicago education policy. It shows how standardized test scores came to be used to determine admission to Chicago’s best vocational schools, with the goal of imposing more universalist practices. Yet, in reality, the reform resulted in diminished access to the best schooling for the city’s low-income African-American population…. (591-592).

Similarly, evaluation facilitates and legitimates the differential treatment of individuals:

Evaluation is a cultural process that—broadly defined—concerns the negotiation, definition and stabilization of value in social life (Beckert and Musselin, 2013). According to Lamont (2012, p. 206), this process involves several important sub-processes, most importantly categorization (‘determining in which group the entity [. . .] under consideration belongs’) and legitimation (‘recognition by oneself and others of the value of an entity’).

In the empirical literature, we find several examples of how evaluation as a cultural process can contribute to inequality, many of which are drawn from sociological research on hiring, recruiting and promotion in labour markets. The bulk of these studies concern how evaluation practices of organizations favour or discriminate against certain groups of employees (see, e.g. Castilla and Benard, 2010) or applicants (see, e.g. Rivera, 2012). Yet, some scholars also examine evaluation processes in labour markets from a broader perspective, locating evaluation not only in hiring or promotion but also in entire occupational fields.

For instance, Beljean (2013b) studied standards of evaluation in the cultural industry of stand-up comedy. Drawing on interviews with comedians and their employers as well as ethnographic fieldwork, he finds that even though the work of stand-up comedians is highly uniform in that they all try to make people laugh, there is considerable variation in how comedians are evaluated across different levels of stratification of the comedy industry. Thus, for example, newcomer comedians and star performers are judged against different standards: while the former must be highly adaptable to the taste of different audiences and owners of comedy clubs, the latter are primarily judged by their ability to nurture their fan-base and to sell out shows. Even though this difference does not necessarily translate into more inequality among comedians, it tends to have negative effects on the career prospects of newcomer comedians. Due to mechanisms of cumulative advantage, and because both audiences and bookers tend to be conservative in their judgement, it is easier for more established comedians to maintain their status than for newcomers to build up a reputation. As a result, a few star comedians get to enjoy a disproportionally large share of fame and monetary rewards, while a large majority of comedians remain anonymous and marginalized. (593)
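
The “cumulative advantage” mechanism in that passage is a classic rich-get-richer process. Here is a minimal simulation sketch of that general dynamic; the setup and the numbers are hypothetical illustrations, not anything taken from Beljean’s study:

```python
import random

# Minimal sketch of a cumulative-advantage ("rich get richer") process.
# All parameters here are invented for illustration.
random.seed(42)

n_comedians = 100
bookings = [1] * n_comedians  # everyone starts out equal

for _ in range(10_000):
    # Conservative audiences/bookers: each new booking goes to comedian i
    # with probability proportional to the bookings i already has.
    winner = random.choices(range(n_comedians), weights=bookings)[0]
    bookings[winner] += 1

bookings.sort(reverse=True)
share = sum(bookings[:10]) / sum(bookings)
print(f"top 10 of {n_comedians} comedians hold {share:.0%} of all bookings")
```

Even with identical starting points, a handful of early leaders end up with most of the bookings, which is the winner-take-all pattern the quote describes.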

Those looking for ways to curb inequality will not find immediate answers in this article. The authors do not offer remedies for how organizations can combat such unintended consequences, or even for how organizations might make their members more aware of these tendencies. Yet we know from other research that organizations have tried a variety of measures to minimize bias. For example, during the 1970s and 1980s, orchestras turned to “blind” auditions to reduce gender bias when considering musicians for hire. Some have even muffled the floor to keep judges from hearing the click of heels that might give away the gender of those auditioning.


An example of a blind audition, courtesy of Colorado Springs Philharmonic.

In any case, have a look at the article’s accompanying discussion forum, where fellow scholars Douglas S. Massey, Leslie McCall, Donald Tomaskovic-Devey, Dustin Avent-Holt, Philippe Monin, Bernard Forgues, and Tao Wang weigh in with their own essays.



what should sociology’s image be?

Previously, I have argued that sociology has an image problem. Too much social problems, not enough science. But that still leaves the question open: what, specifically, should our public image be? A few suggestions:

  • Openly embrace positivism/science as our motivation and professional model.
  • The use of science to study social problems, not the science of social problems.
  • The holistic social science that employs different types of data for a rich picture of human life.
  • “The crossroads of the academy”: we can legitimately speak to fields ranging from the biomedical sciences to the most interpretive of the humanities.
  • Offer a few simple-to-understand tools for those in the policy world who are focused on either policy evaluation or measuring social well-being (i.e., go beyond “social studies of policy”).

Of course, we already do a lot of this in our research; we just don’t tell the public about it. In other words, sociology should be the queen of the social sciences, not the museum of social dysfunction.

50+ chapters of grad skool advice goodness: Grad Skool Rulz ($2!!!!)/From Black Power/Party in the Street

Written by fabiorojas

December 16, 2015 at 12:01 am

picking the right metric: from college ratings to the cold war

Two years ago, President Obama announced a plan to create government ratings for colleges—in his words, “on who’s offering the best value so students and taxpayers get a bigger bang for their buck.”

The Department of Education was charged with developing such ratings, but they were quickly mired in controversy. What outcomes should be measured? Initial reports suggested that completion rates and graduates’ earnings would be key. But critics pointed to a variety of problems—ranging from the different missions of different types of colleges, to the difficulties of measuring incomes along a variety of career paths (how do you count the person pursuing a PhD five years after graduation?), to the reductionism of valuing college only by graduates’ incomes.

Well, as of yesterday, it looks like the ratings plan is being dropped. Or rather, it’s become a “college-rating system minus the ratings”, as the Chronicle put it. The new plan is to produce a “consumer-facing tool” where students can compare colleges on a variety of criteria, likely including data on net price, completion rates, earnings outcomes, and the percentage of Pell Grant recipients, among other metrics. In other words, it will look more like U-Multirank, a big European initiative that was similarly a response to the political difficulty of producing a single official ranking of universities.

A lot of political forces aligned to kill this plan, including Republicans (on grounds of federal mission creep), the for-profit college lobby, and most colleges and universities, which don’t want to see more centralized control.

But I’d like to point to another difficulty it struggled with—one that has been around for a really long time, and that shows up in a lot of different contexts: the criterion problem.


Written by epopp

June 26, 2015 at 1:48 pm

how to judge a phd program

When people look at PhD programs, they usually base their judgment on the fame of the program’s scholars or the placement of its graduates. Fair enough, but any seasoned social scientist will tell you that this is a very imperfect way to judge an institution. Why? Performance is often a function of resources. In other words, you should expect the wealthiest universities to hire away the best scholars and provide the best environments for training.

Thus, we have a null model for judging PhD programs (nothing correlates with success) and a reasonable baseline model (success correlates with money). According to the baseline, PhD program ranks should roughly track measures of financial resources, like endowments. Thus, the top Ivy League schools should all have elite (top 5) programs in any field in which they choose to compete; anything less is severe underperformance. Similarly, for a research school with a modest endowment to have a top program (say, Rutgers in philosophy) is wild overperformance.

According to this wiki on university endowments, the wealthiest institutions include Harvard, Texas (whole system), Yale, Stanford, MIT, Texas A&M (whole system), Northwestern, Michigan, and Penn. This matches roughly what you’d expect, except that Texas and Texas A&M are top flight in engineering and medicine but much weaker in the arts and sciences (compared to their endowment rank). This is why I remain impressed with my colleagues in Indiana sociology. Our system-wide endowment is ranked #46, but our soc program hovers in the 10-15 range. We’re pulling our weight.
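
To make the baseline model concrete, here is a minimal sketch of the over/underperformance comparison. Indiana’s two ranks come from the paragraph above; any other school you plug in should be treated as hypothetical until you look up its numbers:

```python
# Minimal sketch of the baseline model: a program "pulls its weight" when
# its field rank beats its university's endowment rank. Indiana's numbers
# come from the post; anything else you add here is hypothetical.
programs = {
    # name: (endowment_rank, program_rank)
    "Indiana sociology": (46, 12),
}

for name, (endowment_rank, program_rank) in programs.items():
    gap = endowment_rank - program_rank  # positive = overperformance
    verdict = "overperforming" if gap > 0 else "underperforming"
    print(f"{name}: endowment #{endowment_rank}, program #{program_rank}, "
          f"{verdict} by {gap} rank places")
```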

50+ chapters of grad skool advice goodness: Grad Skool Rulz ($2!!!!)/From Black Power/Party in the Street!!

Written by fabiorojas

March 19, 2015 at 12:34 am

“there’s no rankings problem that money can’t solve” – the tale of how northeastern gamed the college rankings

There’s a September 2014 article on Northeastern University and how it broke into the top 100 of the US News & World Report ranking of colleges and universities. The summary goes something like this: Northeastern’s former president, Richard Freeland, inherited a poorly endowed commuter school. In the modern environment, that puts you on a death spiral: a low profile leads to low enrollments, which lead to low income, which leads to an even lower profile.

The solution? Crack the code of the US News college rankings. Freeland hired statisticians to learn the correlations between inputs and rankings. He visited the US News office to see how they built their system and to bug them about what he thought was unfair. Then he “legally” (i.e., he didn’t cheat or lie) did things to boost the rank. For example, he moved Northeastern from a commuter school to a residential school by building more dorms. He also admitted a different profile of student that wouldn’t depress the mean SAT score and shifted students into programs that were not counted in the US News ranking (e.g., some students are admitted in spring admissions and do not count in the US News score).
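
The article doesn’t say exactly what Freeland’s statisticians did, but “learning the correlations between inputs and rankings” could look something like the following sketch, where the inputs, weights, and data are all invented for illustration:

```python
import numpy as np

# Hypothetical sketch of reverse-engineering a ranking formula. The article
# doesn't describe the statisticians' actual method; the inputs, weights,
# and data below are made up for illustration.
rng = np.random.default_rng(0)

n_schools = 200
# made-up standardized inputs: mean SAT, retention rate, spending per student
X = rng.normal(size=(n_schools, 3))
hidden_weights = np.array([0.5, 0.3, 0.2])  # unknown to us in real life
score = X @ hidden_weights + rng.normal(scale=0.1, size=n_schools)

# Recover the weights from the observed scores by ordinary least squares.
estimated, *_ = np.linalg.lstsq(X, score, rcond=None)
print("estimated weights:", estimated.round(2))  # close to [0.5, 0.3, 0.2]
```

A school with even rough weight estimates like these knows which levers (dorms, SAT profile, off-cycle admits) move the rank most per dollar spent.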

Comments: 1. In a way, this is admirable. If the audience for higher education buys into the rankings and you do what the rankings demand, aren’t you giving people what they want? 2. The quote in the title of the post is from Michael Bastedo, a higher ed guru at Michigan, who points out that rankings essentially reflect money: if you buy fancier professors and better facilities, you get better students, and the rank improves. 3. Still, this shows how hard it is to move. A nearly billion-dollar drive takes you from a so-so rank of about 150 to a so-so rank of about 100: enough to be “above” the fold, but not enough to challenge the traditional leaders of higher ed.

50+ chapters of grad skool advice goodness: Grad Skool Rulz ($2!!!!)/From Black Power/Party in the Street!!

Written by fabiorojas

February 13, 2015 at 12:01 am

my ref prediction

I’m kind of obsessed with the REF, considering that it has zero direct impact on my life. It’s sort of like watching a train wreck in progress, and every time there’s big REF news I thank my lucky stars I’m in the U.S. and not the U.K.

For those who might not have been paying attention, the REF is the Research Excellence Framework, Britain’s homegrown academic homage to governmentality. Apologies in advance for any incorrect details here; to an outsider, the system’s a bit complex.

Every six years or so, U.K. universities have to submit samples of faculty members’ work – four “research outputs” per person – to a panel of disciplinary experts for evaluation. The panel ranks the outputs from 4* (world leading) to 1* (nationally recognized), although work can also be given no stars. Universities submit the work of most, but not all, of their faculty members; not being submitted to the REF is not, shall we say, a good sign for your career. “Impact” and “environment,” as well as research outputs, are also evaluated at the department level. Oh, and there’s £2 billion of research funding riding on the thing.

The whole system is arcane, and every academic I’ve talked to seems to hate it. Of course, it’s not meant to make academics happy, but to “provide…accountability for public investment in research and produce…evidence of the benefits of this investment.” Well, I don’t know that it’s doing that, but it’s certainly changing the culture of academia. I’d actually be very interested to hear a solid defense of the REF from someone who’s sympathetic to universities, so if you have one, by all means share.

Anyway, 2014 REF results were announced on Friday, to the usual hoopla. (If you’re curious but haven’t been following this, here are the results by field, including Sociology and Business and Management Studies; here are a few pieces of commentary.)

In its current form, outputs are “reviewed” by a panel of scholars in one’s discipline. This was strongly fought for by academics on the grounds that only expert review could be a legitimate way to evaluate research. This peer review, however, has become something of a farce, as panelists are expected to “review” massive quantities of research. (I can’t now find the figure, but I think it’s on the order of 1000 articles per person.)

At the same time, the peer-review element of the process (along with the complex case-study measurement of “impact”) has helped to create an increasingly elaborate, expensive, and energy-consuming infrastructure within universities around the management of the REF process. For example, universities conduct their own large-scale internal review of outputs to try to guess how REF panels will assess them, and to determine which faculty will be included in the REF submission.

All this has led to a renewed conversation about using metrics to distribute the money instead. The LSE Impact of Social Sciences blog has been particularly articulate on this front. The general argument is, “Sure, metrics aren’t great, but neither is the current system, and metrics are a lot simpler and cheaper.”

If I had to place money on it, I would bet that this metrics approach, despite all its limitations, will actually win out in the not-too-distant future. Which is awful, but no more awful than the current version of the REF. Of course metrics can be valuable tools. But as folks who know a thing or two about metrics have pointed out, they’re useful for “facilitating deliberation,” not “substitut[ing] for judgment.” It seems unlikely that any conceivable version of the REF would use metrics as anything other than a substitute for judgment.

In the U.S., this kind of extreme disciplining of the research process does not appear to be just around the corner, although Australia has partially copied the British model. But it is worth paying attention to nonetheless. The current British system took nearly thirty years to evolve into its present shape. One is reminded of the old story about the frog placed in a pot of cool water who, not noticing the heat rising until it was too late, found himself boiled.


Written by epopp

December 22, 2014 at 4:07 pm

the dilemmas of funding: a commentary on the united negro college fund by melissa wooten

Melissa Wooten is an Assistant Professor of Sociology at the University of Massachusetts, Amherst. Her forthcoming book In the Face of Inequality: How Black Colleges Adapt (SUNY Press 2015) documents how the social structure of race and racism affects an organization’s ability to acquire the financial and political resources it needs to survive.

“Look…Come on…It’s $10 million dollars” is how the Saturday Night Live parody explains the Los Angeles chapter of the National Association for the Advancement of Colored People’s (NAACP) decision to accept donations from the now-disgraced, soon-to-be-former NBA franchise owner Donald Sterling. This parody encapsulates the dilemma that many organizations working for black advancement face. Fighting for civil rights takes money. But this money often comes from strange quarters. While Sterling’s personal animus toward African Americans captivated the public this spring, his organizational strategy of discriminating against African Americans and Hispanic Americans had already made him infamous among those involved in civil rights years earlier. So why would the NAACP accept money from a man known to actively discriminate against the very people it seeks to help?

A similar question arose when news of the Koch brothers’ $25 million donation to the United Negro College Fund (UNCF) emerged in June. Not only did the UNCF’s willingness to accept this donation raise eyebrows, it also cost the organization the support of AFSCME, a union with which the UNCF had a long-standing relationship. The Koch brothers’ support of policies that would limit early voting, along with their opposition to minimum wage legislation, are but a few of the reasons some are skeptical of a UNCF-Koch partnership. So why would the UNCF accept a large donation from two philanthropists known to support policies that would have a disproportionately negative effect on African American communities?


Written by fabiorojas

September 2, 2014 at 12:01 am