Archive for the ‘academia’ Category
Work in Progress, the blog of ASA's organizations, occupations, and work section, has just launched a new series on the future of organizational sociology. It kicked off today with an introduction from Liz Gorman and a first post by Howard Aldrich. Liz has an impressive slate of sociologists lined up — in the days to come, you can expect to hear from:
Martin Ruef (Duke)
Harland Prechel (Texas A&M)
Elisabeth Clemens (University of Chicago)
Ezra Zuckerman (MIT Sloan)
Gerald F. Davis (University of Michigan)
Heather Haveman (UC-Berkeley)
Brayden King (Northwestern)
Charles Perrow (Yale)
W. Richard Scott (Stanford)
Mark Suchman (Brown)
Patricia Thornton (Duke)
Marc Ventresca (Oxford)
Elizabeth Gorman (University of Virginia)
Matt Vidal (King’s College London)
Thanks to Liz and OOW for organizing this conversation, and here's hoping it gets the attention it deserves.
Long-time readers know that I am a skeptic when it comes to letters of recommendation. The last time I wrote about the topic, I relied on a well-cited 1993 article by Aamodt, Bryan and Whitcomb in Public Personnel Management that reviews the literature and shows that LoRs have very little validity. I.e., they are poor predictors of future job performance. But what if the literature has changed in the meantime? Maybe these earlier studies were flawed, or based on limited samples, or better research methods now provide more compelling answers. So I went back and read some more recent research on the validity of LoRs. The answer? With a few exceptions, still garbage.
For example, the journal Academic Medicine published a 2014 article that analyzed LoRs for three cohorts of students at a medical school. From the abstract:
Results: Four hundred thirty-seven LORs were included. Of 76 LOR characteristics, 7 were associated with graduation status (P ≤ .05), and 3 remained significant in the regression model. Being rated as “the best” among peers and having an employer or supervisor as the LOR author were associated with induction into AOA, whereas having nonpositive comments was associated with bottom of the class students.
Conclusions: LORs have limited value to admission committees, as very few LOR characteristics predict how students perform during medical school.
Translation: Almost all information in letters is useless, except the occasional negative comment (which academics strive to avoid making). The other exception is explicit comparison with other candidates, which is not a standard feature of many (or most?) letters in academia.
Ok, maybe this finding is limited to med students. What about other contexts? Once again, LoRs do poorly unless you torture specific data out of them. From a 2014 meta-analysis of LoR research in education, published in the International Journal of Selection and Assessment:
… Second, letters of recommendation are not very reliable. Research suggests that the interrater reliability of letters of recommendation is only about .40 (Baxter, et al., 1981; Mosel & Goheen, 1952, 1959; Rim, 1976). Aamodt, Bryan & Whitcomb (1993) summarized this issue pointedly when they noted, ‘The reliability problem is so severe that Baxter et al. (1981) found that there is more agreement between two recommendations written by the same person for two different applicants than there is between two people writing recommendations for the same person’ (Aamodt et al., 1993, p. 82). Third, letter readers tend to favor letters written by people they know (Nicklin & Roch, 2009), despite any evidence that this leads to superior judgments.
Despite this troubling evidence, the letter of recommendation is not only frequently used; it is consistently evaluated as being nearly as important as test scores and prior grades (Bonifazi, Crespy, & Reiker, 1997; Hines, 1986). There is a clear and gross imbalance between the importance placed on letters and the research that has actually documented their efficacy. The scope of this problem is considerable when we consider that there is a very large literature, including a number of reviews and meta-analyses on standardized tests and no such research on letters. Put another way, if letters were a new psychological test they would not come close to meeting minimum professional criteria (i.e., Standards) for use in decision making (AERA, APA, & NCME, 1999). This study is a step toward addressing this need by evaluating what is known, identifying key gaps, and providing recommendations for use and research. [Note: bolded by me.]
As with other studies, there is a small amount of information in LoRs. The authors note that “… letters do appear to provide incremental information about degree attainment, a difficult and heavily motivationally determined outcome.” That’s something, I guess, for a tool that would fail standard tests of validity.
Hi, Steve, Fabio here. I recently read that you are now the chair of the new sociology department at Washington University in St. Louis. It seems that you are getting advice from some excellent sociologists. Still, I wanted to offer a suggestion about how to build your program that I think has some merit but that may not be obvious.
Here it is: build a program that, roughly speaking, is about 2/3 quantitative and 1/3 qualitative. However, don't use the traditional criteria for "quantitative research," which would include anyone who does regression analysis or, as in economics, people who do research in theoretical statistics. Instead, the quantitative sector of the department should focus on unique and important quantitative types of data that sociologists are, or can be, good at. Roughly speaking, that means network analysis, social simulations, "big data," and quantitative analysis of text. You might also toss in an experimentalist or a survey design guru.
Why? No one else is building such a program, but it would have a huge immediate impact on the profession of sociology. You would have an enormous first mover advantage. It also has other benefits. For example, the graduate students would be immediately employable inside and outside academia; the faculty would be able to do some fundraising, though not as much as a demography center; and this sort of critical mass would increase the chance that WUSTL would be the origin of the next big quantitative advance in the social sciences.
The other 1/3 of the program should be filled with mid- to late-career qualitative scholars. You need this for a few reasons. First, sociology, especially among the younger folks, has converged on the view that mixed methods is the way to go. So you will need top-notch ethnographers, historical types, and interviewers to make sure that your PhD graduates have a proper view of sociology. Also, graduate students may opt for a qualitative PhD, and you will need good faculty to make sure they don't fall through the cracks. The most important reason is that older scholars will be able to maintain a distinct identity and forge bonds in a program that is, by design, tilted in one direction.
Since you are at a well-regarded private school, you might be tempted to mimic your peers and chase the "best people," meaning whoever recently graduated from high-status programs with good publications. It's not a bad idea, but you will compete directly with all the other top 20 programs that claim these graduates. Instead, you might consider a more focused mission with a very specific, and achievable, intellectual goal. It's worth a thought.
Business Insider named Judson Everitt of Loyola one of the 25 best professors in America. A hearty congratulations to an IU alumnus and top-notch educator. Here's his profile at Loyola.
As many of you know, Washington University decided to reestablish a sociology department after notoriously shutting the old one down some two decades ago. The Chronicle of Higher Ed has reported that the university has chosen the department's first chair and associate chair — Steven Fazzari, a macroeconomist at Wash U., and Mark Rank, who started in Washington's sociology department before moving to the School of Social Work in 1989.
This seems like a surprising decision. The Chronicle writes:
Administrators had considered appointing a senior figure in American sociology to be chair, but, “lacking an obvious candidate,” as Mr. Fazzari puts it, they turned to him. Along with several teaching awards, he has six years of experience as chair of the economics department, and has done stints on campus-planning and hiring committees. He was a member of the campus advisory panel formed last year to consider how to revive sociology.
“There is much overlap between the problems addressed by economics and sociology,” he says. “Economics also provides a firm grounding in technical modeling and data analysis that is part of much advanced work in many social sciences, including sociology.”
I can imagine various reasons they might have taken this approach. Luring a top senior person in to build a department from scratch has to be a challenge. Still, Washington has a lot of resources and is a highly respected university (outside of sociology, where it has no presence). And there are some definite downsides to launching the department without a highly visible sociologist at the helm. I’m curious what the back story is here but, having no inside information, will leave it to you to speculate.
In response to Siri’s post about multi-disciplinary work, Peter Levin wrote the following:
For what it’s worth, working in a corporate environment, on big hairy systemic questions like, ‘How can we design an ecosystem for technologies to support precision agriculture over the next 2 decades?’ I work with a psychologist, an engineer, two anthropologists, an MBA/physicist, and a French literature PhD.
It’s a specifically-academia problem.
I agree. But I want to add a few comments. First, the evidence indicates that the problem is worse in the social sciences than in the physical sciences. Social scientists are very territorial, as this article by Lada Adamic & co. shows. Second, this system is reinforced by journal editors and tenure committees. Deans and administrators may sing the praises of interdisciplinary work, but they routinely allow departments to punish faculty who don't publish within their discipline, and journal editors are happy to let reviewers shoot down articles that use out-of-discipline ideas.
So, yes, interdisciplinary work is important and needed, but until the academy's system of rewards changes, it will remain the rhetoric of enthusiastic administrators.
The IGM panel of economic experts got some recent buzz because 63% of their experts — 81%, when weighted by confidence — disagree with the Piketty-inspired argument that r > g is driving recent wealth inequality in the U.S.
I always enjoy reading these surveys. The panel includes 50 or so top academic economists, from a variety of subfields and political orientations, and asks them whether they agree or disagree with a policy-relevant economic statement. Respondents answer on a Likert scale, and indicate their degree of certainty as well as their level of agreement. Sometimes they add a short comment.
The results usually aren't incredibly surprising. It's not really shocking that 100% of economists agree that
Letting car services such as Uber or Lyft compete with taxi firms on equal footing regarding genuine safety and insurance requirements, but without restrictions on prices or routes, raises consumer welfare.
They’re a little more nervous about selling kidneys (45% favor, but nearly 30% find themselves “uncertain” — the highest proportion for any recent question besides whether ending net neutrality is a good thing). The most interesting ones are those where there’s disagreement (Have the last decade of airline mergers improved things for travelers?) or that counter the stereotype (54% disagree that giving holiday presents — rather than cash — is inefficient. Okay, counters it a little).
Anyway, this got me wondering. What if sociology had a similar panel? I mean, aside from the fact that no one would care. I can think of empirical findings we'd have broad confidence in that much of the public wouldn't buy — for example, that there's lots of hiring discrimination against African-Americans. But are there policy prescriptions we'd agree on — ones that are grounded in the discipline, as opposed to coming solely from our left-leaning tendencies, though of course the two are hard to separate — that would tell us, Yep, sociologists WOULD say that?
EDITED TO ADD: Yes, I know that Piketty does not actually argue r > g is the cause of recent inequality growth in the US, which is what the question asks. But if they can headline the poll “Piketty on Inequality,” it seems fair to call the statement “Piketty-inspired.”