Archive for the ‘mere empirics’ Category
Gordon Gee, former president of Ohio State, made more than $6 million in FY 2013, including the $1.5 million “release payment” he got in exchange for
~~not letting the door hit him~~ agreeing not to sue the university on his way out. Now the New York Times is reporting that the 25 public universities with the highest-paid presidents have seen greater increases in student debt and in the use of adjuncts than other publics.
I had a story for this, an organizational story. Ah, I thought. The NYT is implying that the high pay is taking away money that would be going to the other stuff. But really, this just reflects a new model for flagship publics: limit faculty costs (hence the adjuncts), increase the proportion of out-of-state students paying high tuition (hence the debt), and pursue corporate-style CEOs who can lead us into this brave new world (hence the salaries). The non-flagships can’t pursue this strategy successfully, so we’re seeing a divergence between the two groups.
But it turns out that the data don’t, in fact, support that story. They don’t really support any story. The NYT article is based on a report from the Institute for Policy Studies, a progressive think tank. As I read it, things didn’t seem quite right. IPS reports the number of adjunct faculty at these institutions, but I haven’t seen good data anywhere on adjunct counts. And administrative spending at publics increased 65% between FYs 2006 and 2012, even as states slashed budgets?
Yeah, basically the IPS report is just a mess. IPEDS redefined some key terms in the middle of the period — like who falls under “Part-time/Instruction, Research and Public Service,” which IPS calls “Adjunct Labor” — so the years aren’t comparable with one another, and IPS appears to have mislabeled some of the years entirely. The University of Minnesota’s impressively fast PR office has a debunking report up, and while I haven’t checked all the numbers, my impression is that it’s right on target.
That doesn’t disprove my theory that there will be increasing divergence between the model for flagships and the path taken by the rest of the publics. And it’s entirely possible that universities with highly paid presidents have underwhelming outcomes in other areas. But if we’re going to argue over what to do about it, it would be nice if it were based on numbers that actually mean something.
Guest blogger emerita Jenn Lena and Danielle Lindemann have a forthcoming article in Poetics analyzing the self-identity of artists. The issue is that people often question whether they are artists. From the paper “Who is an Artist? New Data for an Old Question:”
Employment in the arts and creative industries is high and growing, yet scholars have not achieved consensus on who should be included in these professions. In this study, we explore the “professional artist” as the outcome of an identity process, rendering it the dependent rather than the independent variable. In their responses to the 2010 Strategic National Arts Alumni Project survey (N = 13,581) — to our knowledge, the largest survey ever undertaken of individuals who have pursued arts degrees in the United States — substantial numbers of respondents gave seemingly contradictory answers to questions asking about their artistic labor. These individuals indicated that they simultaneously had been and had never been professional artists, placing them in what we have termed the “dissonance group.” An examination of these responses reveals meaningful differences and patterns in the interpretation of this social category. We find significant correlation between membership in this group and various markers of cultural capital and social integration into artistic communities. A qualitative analysis of survey comments reveals unique forms of dissonance over artistic membership within teaching and design careers.
When you get into the nitty gritty, the authors focus on embeddedness in institutions as something that decreases this ambiguity. There’s probably an Abbott side to the story, in which people in specific orgs or art systems successfully claim the high-status positions in the field.
On the Soc Job Rumor Board, there was a discussion of the non-replicability of ethnography. I think this is mistaken. Ethnography is easily replicable, it’s just that ethnographers don’t want to do it. For example, ethnographers could:
- Stop making everything anonymous, so others can verify and check. Mitch Duneier is right about this.
- Do group ethnography: have multiple observers and measure inter-coder reliability.
- Standardize data collection – how field notes are coded and recorded.
- Encourage others to revisit the same population (which is actually done in anthropological ethnography)
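The inter-coder reliability suggestion is the most mechanical of these. As a minimal sketch (the codes and segments below are made up for illustration, not drawn from any study), two observers’ codes on the same field-note segments can be compared with Cohen’s kappa:

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa for two coders' labels on the same segments."""
    assert len(coder_a) == len(coder_b)
    n = len(coder_a)
    # Observed agreement: fraction of segments coded identically.
    p_o = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Chance agreement, from each coder's marginal label frequencies.
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    p_e = sum(freq_a[label] * freq_b[label] for label in freq_a) / n ** 2
    return (p_o - p_e) / (1 - p_e)

# Hypothetical codes two observers assigned to ten field-note segments.
a = ["conflict", "ritual", "ritual", "conflict", "exchange",
     "ritual", "conflict", "ritual", "exchange", "ritual"]
b = ["conflict", "ritual", "conflict", "conflict", "exchange",
     "ritual", "conflict", "ritual", "ritual", "ritual"]
print(round(cohens_kappa(a, b), 2))  # 0.67
```

A kappa near 1 means the coding scheme is being applied consistently; a low kappa is a signal that the field codes need tighter definitions before anyone else could replicate the coding.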
Of course, no single study can pursue replication in all of these ways, and some folks already do a good job addressing these issues. But the anti-positivist framing of much ethnography probably keeps ethnographers from adopting intuitive and sensible standards that would move the field away from the solo-practitioner model of unique, non-replicable studies.
Jerry Kim and I have an op-ed in Sunday’s New York Times about our new paper on status bias in baseball umpiring. We analyzed over 700,000 non-swinging pitches from the 2008 and 2009 seasons and found that umpires made numerous types of mistakes in calling balls and strikes. Most notably, we expected that umpires would be influenced by the status and reputation of the pitcher, and this is indeed what we found:
One of the sources of bias we identified was that umpires tended to favor All-Star pitchers. An umpire was about 16 percent more likely to erroneously call a pitch outside the zone a strike for a five-time All-Star than for a pitcher who had never appeared in an All-Star Game. An umpire was about 9 percent less likely to mistakenly call a real strike a ball for a five-time All-Star. The strike zone did actually seem to get bigger for All-Star pitchers and it tended to shrink for non-All-Stars.
An umpire’s bias toward All-Star pitchers was even stronger when the pitcher had a reputation for precise control, as measured by the career percentage of batters walked. We found that pitchers with a track record of not walking batters — like Greg Maddux — were much more likely to benefit from their All-Star status than similarly decorated but “wilder” pitchers like Randy Johnson.
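For intuition about what this kind of comparison involves, here is a toy simulation — not the paper’s model or data, and the bias parameters are invented — that generates pitches and compares false-strike rates for All-Stars and everyone else:

```python
import random

def false_strike_rate(pitches):
    """Share of true balls (outside the zone) erroneously called strikes."""
    balls = [p for p in pitches if not p["in_zone"]]
    return sum(p["called_strike"] for p in balls) / len(balls)

random.seed(0)
pitches = []
for _ in range(10_000):
    all_star = random.random() < 0.2      # ~20% of pitches from All-Stars
    in_zone = random.random() < 0.5       # true pitch location
    if in_zone:
        called_strike = random.random() < 0.9
    else:
        # Invented bias: more false strikes called for All-Star pitchers.
        called_strike = random.random() < (0.15 if all_star else 0.13)
    pitches.append({"all_star": all_star, "in_zone": in_zone,
                    "called_strike": called_strike})

rate_star = false_strike_rate([p for p in pitches if p["all_star"]])
rate_other = false_strike_rate([p for p in pitches if not p["all_star"]])
print(f"All-Stars: {rate_star:.3f}, others: {rate_other:.3f}")
```

The actual paper, of course, estimates the status effect while controlling for pitch location, count, and other confounds, which a raw rate comparison like this does not.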
Baseball insiders have long suspected what our research confirms: that umpires tend to make errors in ways that favor players who have established themselves at the top of the game’s status hierarchy. But our findings are also suggestive of the way that people in any sort of evaluative role — not just umpires — are unconsciously biased by simple “status characteristics.” Even constant monitoring and incentives can fail to train such biases out of us.
You can download the paper, which is forthcoming in Management Science, if you’re interested in learning more about the analyses and their implications for theories about status characteristics and the Matthew Effect.
On Facebook, Vipul Naik asked the following question about research on crime rates of immigrants vs. natives:
It’s well known among scholars of crime that in the US, immigrants have somewhat lower crime rates than natives (both before and after controlling for ethnicity), whereas, in Western and Northern Europe, immigrants have somewhat higher crime rates than natives.
Various explanations have been posited, such as Western and Northern Europe being worse at assimilating immigrants.
But it seems to me that the simplest explanation is that the US has a higher base rate of native crime, so it’s easier for immigrants to “do better” than natives, whereas the native rate of crime in Western and Northern Europe is so low that the same immigrant crime rate looks worse in comparison. My impression (based on some quick look at the statistics) is that immigrants to Western and Northern Europe don’t have crime rates (substantially) higher than immigrants to the US.
This perspective doesn’t seem clearly articulated in discussions of the “do immigrants commit more crime than natives?” question. Why might that be so? And should we care about the relative crime rates, rather than whether the crime rates are high in absolute terms?
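Vipul’s base-rate point is easy to see with toy numbers (the rates below are hypothetical, chosen only to illustrate the arithmetic):

```python
# Hypothetical crime rates per 1,000 residents, not real statistics.
us = {"native": 8.0, "immigrant": 6.0}
europe = {"native": 3.0, "immigrant": 6.0}

# The immigrant rate is identical in absolute terms...
assert us["immigrant"] == europe["immigrant"]

# ...but relative to natives it looks favorable in the US
# and unfavorable in Europe.
print(us["immigrant"] / us["native"])          # 0.75: below the native rate
print(europe["immigrant"] / europe["native"])  # 2.0: double the native rate
```

Same absolute immigrant rate, opposite-signed native–immigrant gap: the comparison group does all the work.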
Bleg: What tricks do we have for increasing response rates among people working in organizations? The older literature suggests that org surveys have widely varying response rates. For example, this 1999 review in Human Relations finds that top management journals publish studies with an *average* response rate of 36%, and this 2008 Human Relations article finds a response rate of 35%. So how can we bump up the response rate? We have the Dillman method (letters), payment, and multiple contacts. How else can we reach orgs?
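One mundane consequence of those response rates is sampling arithmetic. A back-of-the-envelope sketch (the target sample size here is made up):

```python
import math

def contacts_needed(target_n, response_rate):
    """Organizations to contact to expect target_n completed surveys."""
    return math.ceil(target_n / response_rate)

# At the ~36% average response rate from the 1999 Human Relations review,
# fielding 200 completed org surveys means contacting roughly:
print(contacts_needed(200, 0.36))  # 556
```

Every point of response rate you can buy back with better contact strategies shrinks that denominator, which is why the tricks matter.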
My friend Colin Jerolmack and scatterista Shamus Khan have a new article in Sociological Methods and Research that criticizes the way many social scientists use interview data. From “Talk is Cheap:”
This article examines the methodological implications of the fact that what people say is often a poor predictor of what they do. We argue that many interview and survey researchers routinely conflate self-reports with behavior and assume a consistency between attitudes and action. We call this erroneous inference of situated behavior from verbal accounts the attitudinal fallacy. Though interviewing and ethnography are often lumped together as “qualitative methods,” by juxtaposing studies of “culture in action” based on verbal accounts with ethnographic investigations, we show that the latter routinely attempt to explain the “attitude–behavior problem” while the former regularly ignore it. Because meaning and action are collectively negotiated and context-dependent, we contend that self-reports of attitudes and behaviors are of limited value in explaining what people actually do because they are overly individualistic and abstracted from lived experience.
Overall, I find much to like in the article, but I wouldn’t get carried away. First, interviews and surveys vary in their degree of bias: I probably trust a question about educational history more than one about, say, racial attitudes. Relatedly, you can assess the quality of questions; political scientists have documented biases in survey questions, which tells you how good a given question is. Second, in some cases you don’t have any choice but to work with interviews and surveys — interviews are crucial for historical work, for example. So when I do interview research, I use it with caution.