Higher education has become dependent on human capital arguments to justify its existence. The new gainful employment rule for for-profit colleges, announced yesterday by the Obama administration, reminded me of this. It clarifies what standards for-profits have to meet in order to remain eligible for federal aid, which makes up 90% of many for-profits’ revenues.
Under the new standard, programs will fail if graduates’ debt-to-earnings ratio is over 12% and their debt-to-discretionary-earnings ratio (discretionary earnings being income above 150% of the poverty line — about $17,000 for a single person) is over 30%.
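For readers who want to see the arithmetic, here is a minimal sketch of that pass/fail test in Python — assuming the 12%-of-total / 30%-of-discretionary thresholds and an illustrative poverty-line figure (both are my assumptions for illustration, not the regulation’s exact text):

```python
# Sketch of the gainful-employment pass/fail arithmetic. The threshold
# values and poverty-line figure below are illustrative assumptions.

POVERTY_LINE_SINGLE = 11_670  # assumed guideline for a single person;
                              # 150% of this is roughly the $17,000 cited

def fails_gainful_employment(annual_loan_payment, annual_earnings,
                             poverty_line=POVERTY_LINE_SINGLE):
    """True if annual loan payments exceed both 12% of total earnings
    and 30% of discretionary earnings (income above 150% of poverty)."""
    discretionary = max(annual_earnings - 1.5 * poverty_line, 0)
    over_total = annual_loan_payment > 0.12 * annual_earnings
    over_discretionary = (discretionary == 0
                          or annual_loan_payment > 0.30 * discretionary)
    return over_total and over_discretionary
```

Under these assumed numbers, a graduate earning $20,000 with $3,000 in annual payments fails (15% of earnings, and far more than 30% of the roughly $2,500 in discretionary income), while one earning $40,000 with $2,000 in payments passes.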
Now, we could have a whole other conversation about this criterion, which is really, really weak, since it no longer takes into account the percentage of students who default on their loans within three years. By limiting the measure to graduates, it ignores, for example, the outcomes of the 86% of students who enroll in BA programs at the University of Phoenix but don’t finish within six years — most of whom are taking out as many federal loans as they can along the way.
But I want to make a different point here. More and more, we are focused on return on investment — income of graduates — as central to thinking about the value of college.
Work in Progress, the blog of ASA’s Organizations, Occupations, and Work section, just launched a new series on the future of organizational sociology. It kicked off today with an introduction from Liz Gorman and a first post by Howard Aldrich. Liz has an impressive slate of sociologists lined up — in the days to come, you can expect to hear from:
Martin Ruef (Duke)
Harland Prechel (Texas A&M)
Elisabeth Clemens (University of Chicago)
Ezra Zuckerman (MIT Sloan)
Gerald F. Davis (University of Michigan)
Heather Haveman (UC-Berkeley)
Brayden King (Northwestern)
Charles Perrow (Yale)
W. Richard Scott (Stanford)
Mark Suchman (Brown)
Patricia Thornton (Duke)
Marc Ventresca (Oxford)
Elizabeth Gorman (University of Virginia)
Matt Vidal (King’s College London)
Thanks to Liz and OOW for organizing this conversation, and here’s hoping it gets the attention it deserves.
Long-time readers know that I am a skeptic when it comes to letters of recommendation. The last time I wrote about the topic, I relied on a well-cited 1993 article by Aamodt, Bryan, and Whitcomb in Public Personnel Management that reviews the literature and shows that LoRs have very little validity. That is, they are poor predictors of future job performance. But what if the literature has changed in the meantime? Maybe those earlier studies were flawed, or based on limited samples, or better research methods now provide more compelling answers. So I went back and read some more recent research on the validity of LoRs. The answer? With a few exceptions, still garbage.
For example, the journal Academic Medicine published a 2014 article that analyzed LoRs for three cohorts of students at a medical school. From the abstract:
Results: Four hundred thirty-seven LORs were included. Of 76 LOR characteristics, 7 were associated with graduation status (P ≤ .05), and 3 remained significant in the regression model. Being rated as “the best” among peers and having an employer or supervisor as the LOR author were associated with induction into AOA, whereas having nonpositive comments was associated with bottom of the class students.
Conclusions: LORs have limited value to admission committees, as very few LOR characteristics predict how students perform during medical school.
Translation: Almost all of the information in letters is useless, except the occasional negative comment (which academics strive not to make). The other exception is explicit comparison with other candidates, which is not a standard feature of many (or most?) letters in academia.
OK, maybe this finding is limited to med students. What about other contexts? Once again, LoRs do poorly unless you torture specific data out of them. From a 2014 meta-analysis of LoR research in education, published in the International Journal of Selection and Assessment:
… Second, letters of recommendation are not very reliable. Research suggests that the interrater reliability of letters of recommendation is only about .40 (Baxter, et al., 1981; Mosel & Goheen, 1952, 1959; Rim, 1976). Aamodt, Bryan & Whitcomb (1993) summarized this issue pointedly when they noted, ‘The reliability problem is so severe that Baxter et al. (1981) found that there is more agreement between two recommendations written by the same person for two different applicants than there is between two people writing recommendations for the same person’ (Aamodt et al., 1993, p. 82). Third, letter readers tend to favor letters written by people they know (Nicklin & Roch, 2009), despite any evidence that this leads to superior judgments.
Despite this troubling evidence, the letter of recommendation is not only frequently used; it is consistently evaluated as being nearly as important as test scores and prior grades (Bonifazi, Crespy, & Reiker, 1997; Hines, 1986). There is a clear and gross imbalance between the importance placed on letters and the research that has actually documented their efficacy. The scope of this problem is considerable when we consider that there is a very large literature, including a number of reviews and meta-analyses on standardized tests and no such research on letters. **Put another way, if letters were a new psychological test they would not come close to meeting minimum professional criteria (i.e., Standards) for use in decision making (AERA, APA, & NCME, 1999).** This study is a step toward addressing this need by evaluating what is known, identifying key gaps, and providing recommendations for use and research. [Note: bolded by me.]
As with other studies, there is a small amount of information in LoRs. The authors note that “… letters do appear to provide incremental information about degree attainment, a difficult and heavily motivationally determined outcome.” That’s something, I guess, for a tool that would fail standard tests of validity.
Hi, Steve, Fabio here. I recently read that you are now the chair of the new sociology department at Washington University in St. Louis. It seems that you are getting advice from some excellent sociologists. Still, I wanted to offer a suggestion for building your program that I think has some merit but that may not be obvious.
Here it is: build a program that is, roughly speaking, about 2/3 quantitative and 1/3 qualitative. However, don’t use the traditional criteria for “quantitative research,” which would mean hiring anyone who does regression analysis or, as in economics, people who do research in theoretical statistics. Instead, the quantitative sector of the department should focus on unique and important types of quantitative data that sociologists are, or can be, good at. Roughly speaking, that means network analysis, social simulations, “big data,” and quantitative analysis of text. You might also toss in an experimentalist or a survey-design guru.
Why? No one else is building such a program, and it would have a huge immediate impact on the profession of sociology. You would have an enormous first-mover advantage. It also has other benefits. For example, the graduate students would be immediately employable inside and outside academia; the faculty would be able to do some fundraising, though not as much as a demography center; and this sort of critical mass would increase the chance that WUSTL would be the origin of the next big quantitative advance in the social sciences.
The other 1/3 of the program should be filled with mid- to late-career qualitative scholars. You need this for a few reasons. First, sociology, especially among its younger folks, has converged on the view that mixed methods are the way to go, so you will need top-notch ethnographers, historical sociologists, and interviewers to make sure that your PhD graduates have a proper view of sociology. Also, some graduate students may opt for a qualitative PhD, and you will need good faculty to make sure they don’t fall through the cracks. The most important reason is that older scholars will be able to maintain a distinct identity and forge bonds in a program that is, by design, tilted in one direction.
As a well-regarded private school, you might be tempted to mimic your peers and chase the “best people,” meaning whoever recently graduated from high-status programs with good publications. It’s not a bad idea, but you would be competing directly with all the other top 20 programs that claim these graduates. Instead, you might consider a more focused mission with a very specific, and achievable, intellectual goal. It’s worth a thought.
Business Insider named Judson Everitt of Loyola one of the 25 best professors in America. A hearty congratulations to an IU alumnus and top-notch educator. Here’s his profile at Loyola.