Archive for the ‘education’ Category
The Guardian recently ran an article about Shimer College, a tiny great books college in Chicago, Illinois. Originally, the authors wanted to know why it had been ranked so low by the Department of Education. The answer is that there is a fair amount of non-completion, people leave with debt, and they don’t get great jobs. Why? Shimer College takes all kinds of students and makes them go through this unique curriculum of great books for four years. It sounds like a wonderful institution, but not one that produces the “right numbers.” It’s also a college that is very close to closing due to extremely low enrollments.
When I finished reading the article, I realized that Shimer College represented a puzzle. Normally, in a large market like higher education, we see an explosion of organizational forms catering to different market segments. And to some extent, that’s exactly what happened in higher ed. We have research schools, tribal colleges, cosmetology schools, and an army of biblical colleges. But the liberal arts sector keeps shrinking and shrinking. Is it really all that hard to find 200 people in a nation of 300 million who want the freewheeling inquiry of Shimer College?
Here’s my solution to the puzzle. Start with the observation that there’s a negative association between price and risk tolerance. When college is cheap, people will try out all kinds of college experiences. As it becomes more expensive and tied to the labor market, there is huge pressure for conformity. You get diversity when there is a strong social identity supporting an institution (e.g., ethnicity or religion) or when students simply can’t be shoehorned into existing structures (e.g., cosmetology students don’t need football stadiums). Thus, liberal arts schools exist only for a market segment that (a) needs the four-year credential, (b) really, really doesn’t want the standard package offered by the big universities, and (c) has the cash to pay for such a specialized service. You also have some liberal arts schools that are bankrolled by others (e.g., Deep Springs or Berea). You probably get down to a few thousand students per year at most, and there is stiff competition for their money. And as prices keep going up, the market gets smaller.
So yes, there are probably tons of students who would love a liberal arts education, but not many who would pay the full sticker price. I hope someone can create a model that brings people back to this type of education at a more reasonable price.
There is a serious crisis at the University of Virginia after Rolling Stone published an article that, unsurprisingly, argued that the administration failed to punish sexual offenders on campus. Soon, the University president, Teresa Sullivan, announced a suspension of fraternity activities until Jan. 9 but otherwise defended the school.
This is very disappointing. It has become increasingly common knowledge that universities are unable to handle rape allegations, that they have almost no control over fraternities, and that national fraternity organizations have set themselves up so that they are not liable for student violence. President Sullivan’s punishment is especially disappointing because the suspension applies, essentially, only when UVA is out of session. Literally, the suspension covers now (Thanksgiving break), final exams, and the winter break; UVA is out of session until Jan. 12. Some “punishment.”
Here is the difficult discussion that universities have to have if this horrid situation is ever to end:
- Rape is a felony. It’s something you go to prison for. Thus, colleges are not in a position to investigate or handle these claims. Student/faculty juries for felonies are a joke. All rape allegations should be immediately transferred to the police.
- Develop new procedures for victims. Instead of going to the dean (who should be supportive in any case), all victims should go to the health clinic or local hospital *immediately* for medical attention and collection of physical evidence. This is especially important because research shows that this violence is committed mainly by a small group of serial offenders who exploit the party scene. Even if charges are never filed, physical evidence and documentation are needed to expel people suspected of violence.
- Universities should divest themselves from fraternities because they are extremely dangerous and unaccountable. How bad? Many insurance companies in America will no longer cover claims related to fraternities. It’s that bad.
Sadly, we will always be faced with the challenge of sexual violence. However, universities have allowed a hothouse environment for violence to grow on their campuses. Allowing large groups of unsupervised young men to throw alcohol-drenched parties with no liability is a recipe for disaster. This has to end.
Higher education has become dependent on human capital arguments to justify its existence. The new gainful employment rule for for-profit colleges, announced yesterday by the Obama administration, reminded me of this. It clarifies what standards for-profits have to meet in order to remain eligible for federal aid, which makes up 90% of many for-profits’ revenues.
Under the new standard, programs will fail if graduates’ annual loan payments are over 12% of their total earnings and over 30% of their discretionary earnings (income above 150% of the poverty line — about $17,000 for a single person).
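For concreteness, here is a minimal sketch of how the two debt-to-earnings measures work. The function name, the poverty-line figure, and the example loan payment and earnings are all illustrative, not taken from the regulation itself:

```python
def debt_to_earnings(annual_loan_payment, annual_earnings, poverty_line):
    """Return (ratio of total earnings, ratio of discretionary earnings).

    Discretionary earnings = earnings above 150% of the poverty line.
    If earnings are at or below that threshold, the discretionary ratio
    is effectively infinite (any debt load fails that measure).
    """
    discretionary = annual_earnings - 1.5 * poverty_line
    total_ratio = annual_loan_payment / annual_earnings
    disc_ratio = (annual_loan_payment / discretionary
                  if discretionary > 0 else float("inf"))
    return total_ratio, disc_ratio

# Illustrative numbers: a graduate earning $30,000 who pays $2,400/year
# on loans, against a hypothetical $11,670 poverty line.
total, disc = debt_to_earnings(2400, 30000, 11670)
```

Note how the discretionary measure bites hardest for low earners: the closer earnings sit to 150% of the poverty line, the smaller the denominator and the larger the ratio.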
Now, we could have a whole other conversation about this criterion, which is really, really weak, since it no longer takes into account the percentage of students who default on their loans within three years. By limiting the measure to graduates, it ignores, for example, the outcomes of the 86% of students who enroll in BA programs at the University of Phoenix but don’t finish within six years — most of whom are taking out as many federal loans as they can along the way.
But I want to make a different point here. More and more, we are focused on return on investment — income of graduates — as central to thinking about the value of college.
Long-time readers know that I am a skeptic when it comes to letters of recommendation. The last time I wrote about the topic, I relied on a well-cited 1993 article by Aamodt, Bryan, and Whitcomb in Public Personnel Management that reviews the literature and shows that LoRs have very little validity; i.e., they are poor predictors of future job performance. But what if the literature has changed in the meantime? Maybe those earlier studies were flawed, or based on limited samples, or better research methods now provide more compelling answers. So I went back and read some more recent research on the validity of LoRs. The answer? With a few exceptions, still garbage.
For example, the journal Academic Medicine published a 2014 article that analyzed LoRs for three cohorts of students at a medical school. From the abstract:
Results: Four hundred thirty-seven LORs were included. Of 76 LOR characteristics, 7 were associated with graduation status (P ≤ .05), and 3 remained significant in the regression model. Being rated as “the best” among peers and having an employer or supervisor as the LOR author were associated with induction into AOA, whereas having nonpositive comments was associated with bottom of the class students.
Conclusions: LORs have limited value to admission committees, as very few LOR characteristics predict how students perform during medical school.
Translation: Almost all information in letters is useless, except the occasional negative comment (which academics strive not to say). The other exception is explicit comparison with other candidates, which is not a standard feature of many (or most?) letters in academia.
Ok, maybe this finding is limited to med students. What about other contexts? Once again, LoRs do poorly unless you torture specific data out of them. From a 2014 meta-analysis of LoR research in education in the International Journal of Selection and Assessment:
… Second, letters of recommendation are not very reliable. Research suggests that the interrater reliability of letters of recommendation is only about .40 (Baxter, et al., 1981; Mosel & Goheen, 1952, 1959; Rim, 1976). Aamodt, Bryan & Whitcomb (1993) summarized this issue pointedly when they noted, ‘The reliability problem is so severe that Baxter et al. (1981) found that there is more agreement between two recommendations written by the same person for two different applicants than there is between two people writing recommendations for the same person’ (Aamodt et al., 1993, p. 82). Third, letter readers tend to favor letters written by people they know (Nicklin & Roch, 2009), despite any evidence that this leads to superior judgments.
Despite this troubling evidence, the letter of recommendation is not only frequently used; it is consistently evaluated as being nearly as important as test scores and prior grades (Bonifazi, Crespy, & Reiker, 1997; Hines, 1986). There is a clear and gross imbalance between the importance placed on letters and the research that has actually documented their efficacy. The scope of this problem is considerable when we consider that there is a very large literature, including a number of reviews and meta-analyses on standardized tests and no such research on letters. Put another way, if letters were a new psychological test they would not come close to meeting minimum professional criteria (i.e., Standards) for use in decision making (AERA, APA, & NCME, 1999). This study is a step toward addressing this need by evaluating what is known, identifying key gaps, and providing recommendations for use and research. [Note: bolded by me.]
As with other studies, there is a small amount of information in LoRs. The authors note that “… letters do appear to provide incremental information about degree attainment, a difficult and heavily motivationally determined outcome.” That’s something, I guess, for a tool that would fail standard tests of validity.
Business Insider named Judson Everitt of Loyola one of the best 25 professors in America. A hearty congratulations to an IU alumnus and top-notch educator. Here’s his profile at Loyola.
A problem with a lot of introductory level courses is that they attract heterogeneous students. In sociology, this is very apparent in the introduction to sociology class. It is not uncommon to get, in the same class, a graduating senior who wants to put in the minimal amount of effort and the very aggressive freshman who wants that 4.0 GPA for that Harvard law application. The heterogeneous class presents problems on many levels – the presentation of materials, classroom management, and so forth. In this post, a few comments on how to handle this class.
- Cut the class in half. A few people have told me that it is effective to treat the first half as a chance to make sure everyone is on the same page. Then, in the second half, you can move into material that will be new to almost everyone.
- Active learning: People have also suggested that you stop lecturing. Instead, have students do in-class work. This helps reduce boredom for more advanced students and, at the least, gives them something to do.
- A third strategy is to stratify assignments. Older students can get more involved and challenging assignments. This depends on the nature of the course and on whether you have the patience to grade multiple assignments at once.
Use the comments to discuss your own teaching strategies for heterogeneous classes.
So the stock market has been freaking out a bit the last couple of weeks. Secular stagnation, Ebola, a five-year bull market—who knows why. Anyway, over the weekend I was listening to someone on NPR explain what the average person should do under such circumstances (answer: hang tight, don’t try to time the market). This reminded me of one of my pet quibbles with financial advice, which I think applies to a lot of social science more generally.
For years, the conventional wisdom around what ordinary folks should do with their money has gone something like this. Save a lot. Put it in tax-favored retirement accounts. Invest it mostly in index funds—the S&P 500 is good. Don’t mess with it. In the long run this is likely to net you a reliable 7% return after inflation, about the best you’re likely to do.
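To see what that conventional wisdom promises, a quick sketch of compound growth at a constant real return. The annual contribution below is illustrative; the 7% figure is the after-inflation return cited above:

```python
def real_balance(annual_contribution, real_return, years):
    """Future value of a constant annual contribution (made at the end of
    each year) compounded at a constant real, i.e. after-inflation, return."""
    balance = 0.0
    for _ in range(years):
        balance = balance * (1 + real_return) + annual_contribution
    return balance

# Illustrative: $5,000/year for 30 years at the conventional 7% real return,
# which works out to roughly $470,000 in today's dollars.
nest_egg = real_balance(5000, 0.07, 30)
```

The point of the rest of this post is precisely that the 7% input, not the arithmetic, is where all the uncertainty lives.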
Now, it’s not that I think this is bad advice. In fact, this is pretty much exactly what I do, with some small tweaks.
But it has always struck me how, in news stories and advice columns and talk shows, people talk about how this is a good strategy because it’s worked for SO LONG. For 30 years! Or since 1929! Or since 1900! (Adjust returns accordingly.)
And yes, 30 years, or 85, or 114, are all a long time relative to human life. And we have to make decisions based on the knowledge we’ve got.
But it’s always seemed to me that if what you’re interested in is what will happen over the 30+ years of someone’s earning life (more if you’re not in academia!), you’ve basically got an N of 1 to 4 here. I mean, sure, this may be a reasonable guess, but I don’t think there’s any strong reason to believe that the next 100 years are likely to look very similar to the last 100. Odds are better if you’re just interested in the next 30, but even then, I’m always surprised by just how confident the conventional wisdom is around the idea that the market always coming out ahead over a 25- or 30-year period—going ALL THE WAY BACK TO 1929—is rock-solid evidence that it will do so in the future.
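The "N of 1 to 4" point is just arithmetic: a century of market data contains tens of thousands of daily prices, but only a handful of independent, non-overlapping windows at the horizon that actually matters to a saver. A trivial sketch (the start years echo the record lengths mentioned above, evaluated as of 2014):

```python
def independent_windows(history_years, horizon_years):
    """How many non-overlapping horizon-length windows fit in the record."""
    return history_years // horizon_years

# Records of 30, 85, and 114 years, at a 30-year saving horizon.
samples = {1984: independent_windows(2014 - 1984, 30),   # 1 window
           1929: independent_windows(2014 - 1929, 30),   # 2 windows
           1900: independent_windows(2014 - 1900, 30)}   # 3 windows
```

Shorten the horizon to 25 years and the longest record yields four windows; hence an effective sample size somewhere between one and four, no matter how many price ticks the dataset contains.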
Of course, there are lots of people who don’t believe this, too, as evidenced by what happened to gold prices after the financial crisis. Or by, you know, survivalists.
Anyway, I think this overconfidence in the lessons of the recent past is something we as social scientists tend to be susceptible to. The study that comes most immediately to mind here is the Raj Chetty study on value-added estimates of teachers (paper 1, paper 2, NYT article).
The gist of the argument is that teachers’ effects on student test scores, net of student characteristics (their value added), predict students’ eventual income at age 28. Now, there’s a lot that could be discussed about this study (latest round of critique, media coverage thereof).
But I just want to point to it—or rather, broader interpretations of it—as illustrating a similar overconfidence in the ability of the past to predict the future.
Here we have a study based on a massive (2.5 million students) dataset over a twenty-year period (1989-2009). Just thinking about the scale of the study and taking its results at face value, it’s hard to imagine how much more certain one could be in social science than at the end of such an endeavor.
And much of the media coverage takes that certainty and projects it into the future (see the NYT article again). If you replace a low value-added teacher with an average one, the classroom’s lifetime earnings will increase by more than $250,000.
And yet to make such a leap, you have to be willing to assume that so many things about the future will be like the past: not only that incentivizing teachers differently and making tests more important won’t change their predictive effects (which the papers acknowledge), but, just as importantly, that the effect of education on earnings—or, more specifically, of teacher value-added on earnings—will be similar in future 20-year periods to what it was from 1989-2009. And that nothing else meaningful about teachers, students, schools, or earnings will evolve over the next 20 years in ways that mess with that relationship in a significant way.
I think we do this a lot—project into the future based on our understanding of a past that is, really, quite recent. Of course knowledge about the (relatively) recent past still should inform the decisions we make about the future. But rather a lot of modesty is called for when making blanket claims that assume the future is going to look just like the past. Maybe it’s human nature. But I think that modesty is often missing.