Archive for the ‘education’ Category
Higher education has become dependent on human capital arguments to justify its existence. The new gainful employment rule for for-profit colleges, announced yesterday by the Obama administration, reminded me of this. It clarifies what standards for-profits have to meet in order to remain eligible for federal aid, which makes up 90% of many for-profits’ revenues.
Under the new standard, programs will fail if graduates’ debt-to-earnings ratio is over 30%, or if their debt-to-discretionary-earnings (income above 150% of the poverty line — about $17,000 for a single person) is over 12%.
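To make the criterion concrete, here is a minimal sketch of the two tests as the post states them. The thresholds (30% and 12%), the 150%-of-poverty-line definition of discretionary earnings, and the poverty-line figure are taken from the text or approximated; the regulation's exact definitions differ in detail.

```python
POVERTY_LINE = 11_670  # approx. federal poverty line for a single person, 2014


def program_fails(annual_debt_payment: float, annual_earnings: float) -> bool:
    """Return True if a program fails either prong of the test as stated above."""
    # Discretionary earnings: income above 150% of the poverty line (~$17,000)
    discretionary = annual_earnings - 1.5 * POVERTY_LINE
    debt_to_earnings = annual_debt_payment / annual_earnings
    # If there are no discretionary earnings, any debt load fails this prong
    debt_to_discretionary = (annual_debt_payment / discretionary
                             if discretionary > 0 else float("inf"))
    return debt_to_earnings > 0.30 or debt_to_discretionary > 0.12
```

Note how unforgiving the discretionary prong is for low earners: a graduate earning $17,000 has essentially no discretionary income, so even a small loan payment trips the 12% test.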
Now, we could have a whole other conversation about this criterion, which is really, really weak, since it no longer takes into account the percentage of students who default on their loans within three years. By limiting the measure to graduates, it ignores, for example, the outcomes of the 86% of students who enroll in BA programs at the University of Phoenix but don’t finish in six years — most of whom are taking out as many federal loans as they can along the way.
But I want to make a different point here. More and more, we are focused on return on investment — income of graduates — as central to thinking about the value of college.
Longtime readers know that I am a skeptic when it comes to letters of recommendation. The last time I wrote about the topic, I relied on a well-cited 1993 article by Aamodt, Bryan and Whitcomb in Public Personnel Management that reviews the literature and shows that LoRs have very little validity. That is, they are poor predictors of future job performance. But what if the literature has changed in the meantime? Maybe these earlier studies were flawed, or based on limited samples, or better research methods now provide more compelling answers. So I went back and read some more recent research on the validity of LoRs. The answer? With a few exceptions, still garbage.
For example, the journal Academic Medicine published a 2014 article that analyzed LoRs for three cohorts of students at a medical school. From the abstract:
Results: Four hundred thirty-seven LORs were included. Of 76 LOR characteristics, 7 were associated with graduation status (P ≤ .05), and 3 remained significant in the regression model. Being rated as “the best” among peers and having an employer or supervisor as the LOR author were associated with induction into AOA, whereas having nonpositive comments was associated with bottom of the class students.
Conclusions: LORs have limited value to admission committees, as very few LOR characteristics predict how students perform during medical school.
Translation: Almost all information in letters is useless, except the occasional negative comment (which academics strive to avoid). The other exception is explicit comparison with other candidates, which is not a standard feature of many (or most?) letters in academia.
Ok, maybe this finding is limited to med students. What about other contexts? Once again, LoRs do poorly unless you torture specific data out of them. From a 2014 meta-analysis of letter of recommendation research in education, published in the International Journal of Selection and Assessment:
… Second, letters of recommendation are not very reliable. Research suggests that the interrater reliability of letters of recommendation is only about .40 (Baxter, et al., 1981; Mosel & Goheen, 1952, 1959; Rim, 1976). Aamodt, Bryan & Whitcomb (1993) summarized this issue pointedly when they noted, ‘The reliability problem is so severe that Baxter et al. (1981) found that there is more agreement between two recommendations written by the same person for two different applicants than there is between two people writing recommendations for the same person’ (Aamodt et al., 1993, p. 82). Third, letter readers tend to favor letters written by people they know (Nicklin & Roch, 2009), despite any evidence that this leads to superior judgments.
Despite this troubling evidence, the letter of recommendation is not only frequently used; it is consistently evaluated as being nearly as important as test scores and prior grades (Bonifazi, Crespy, & Reiker, 1997; Hines, 1986). There is a clear and gross imbalance between the importance placed on letters and the research that has actually documented their efficacy. The scope of this problem is considerable when we consider that there is a very large literature, including a number of reviews and meta-analyses on standardized tests and no such research on letters. Put another way, if letters were a new psychological test they would not come close to meeting minimum professional criteria (i.e., Standards) for use in decision making (AERA, APA, & NCME, 1999). This study is a step toward addressing this need by evaluating what is known, identifying key gaps, and providing recommendations for use and research. [Note: bolded by me.]
As with other studies, there is a small amount of information in LoRs. The authors note that “… letters do appear to provide incremental information about degree attainment, a difficult and heavily motivationally determined outcome.” That’s something, I guess, for a tool that would fail standard tests of validity.
Business Insider named Judson Everitt of Loyola one of the best 25 professors in America. A hearty congratulations to an IU alumnus and top-notch educator. Here’s his profile at Loyola.
A problem with a lot of introductory-level courses is that they attract heterogeneous students. In sociology, this is very apparent in the introduction to sociology class. It is not uncommon to get, in the same class, a graduating senior who wants to put in the minimal amount of effort and a very aggressive freshman who wants a 4.0 GPA for that Harvard Law application. The heterogeneous class presents problems on many levels – the presentation of materials, classroom management, and so forth. In this post, a few comments on how to handle this class.
- Cut the class in half. A few people have told me that it is effective to treat the first half as a chance to make sure everyone is on the same page. Then, in the second half, you can move into material that will be new for almost everyone.
- Active learning: People have also suggested that you stop lecturing. Instead, have students do in-class work. This helps reduce boredom for more advanced students and, at the least, gives them something to do.
- A third strategy is to stratify assignments. Older students can get more involved and challenging assignments. This depends on the nature of the course and whether you have the patience to grade multiple assignments at once.
Use the comments to discuss your own teaching strategies for heterogeneous classes.
So the stock market has been freaking out a bit the last couple of weeks. Secular stagnation, Ebola, a five-year bull market—who knows why. Anyway, over the weekend I was listening to someone on NPR explain what the average person should do under such circumstances (answer: hang tight, don’t try to time the market). This reminded me of one of my pet quibbles with financial advice, which I think applies to a lot of social science more generally.
For years, the conventional wisdom around what ordinary folks should do with their money has gone something like this. Save a lot. Put it in tax-favored retirement accounts. Invest it mostly in index funds—the S&P 500 is good. Don’t mess with it. In the long run this is likely to net you a reliable 7% return after inflation, about the best you’re likely to do.
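For what it’s worth, the arithmetic behind that conventional claim is easy to sketch. The contribution amount and horizon below are hypothetical; the 7% real return is the figure quoted above.

```python
# A quick sketch of the compounding logic behind the conventional advice:
# a steady real return applied to annual savings over a working life.

def real_value(annual_contribution: float, rate: float, years: int) -> float:
    """Inflation-adjusted balance after contributing at the end of each year."""
    balance = 0.0
    for _ in range(years):
        balance = balance * (1 + rate) + annual_contribution
    return balance

# $10,000 a year at a 7% real return for 30 years ends up a bit over
# three times the $300,000 actually contributed.
nest_egg = real_value(10_000, 0.07, 30)
```

Of course, this is exactly the kind of extrapolation the rest of this post is skeptical about: the formula is certain, but plugging in 7% for the *next* 30 years is the leap of faith.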
Now, it’s not that I think this is bad advice. In fact, this is pretty much exactly what I do, with some small tweaks.
But it has always struck me how, in news stories and advice columns and talk shows, people talk about how this is a good strategy because it’s worked for SO LONG. For 30 years! Or since 1929! Or since 1900! (Adjust returns accordingly.)
And yes, 30 years, or 85, or 114, are all a long time relative to human life. And we have to make decisions based on the knowledge we’ve got.
But it’s always seemed to me that if what you’re interested in is what will happen over the 30+ years of someone’s earning life (more if you’re not in academia!), you’ve basically got an N of 1 to 4 here. I mean, sure, this may be a reasonable guess, but I don’t think there’s any strong reason to believe that the next 100 years are likely to look very similar to the last 100. Odds are better if you’re just interested in the next 30, but even then, I’m always surprised by just how confident the conventional wisdom is that the market’s always coming out ahead over a 25- or 30-year period—going ALL THE WAY BACK TO 1929—is rock-solid evidence that it will do so in the future.
Of course, there are lots of people who don’t believe this, too, as evidenced by what happened to gold prices after the financial crisis. Or by, you know, survivalists.
Anyway, I think this overconfidence in the lessons of the recent past is something we as social scientists tend to be susceptible to. The study that comes most immediately to mind here is the Raj Chetty study on value-added estimates of teachers (paper 1, paper 2, NYT article).
The gist of the argument is that teachers’ effects on student test scores, net of student characteristics (their value added), predict students’ eventual income at age 28. Now, there’s a lot that could be discussed about this study (latest round of critique, media coverage thereof).
But I just want to point to it—or rather, broader interpretations of it—as illustrating a similar overconfidence in the ability of the past to predict the future.
Here we have a study based on a massive (2.5 million students) dataset over a twenty-year period (1989-2009). Just thinking about the scale of the study and taking its results at face value, it’s hard to imagine how much more certain one could be in social science than at the end of such an endeavor.
And much of the media coverage takes that certainty and projects it into the future (see the NYT article again). If you replace a low value-added teacher with an average one, the classroom’s lifetime earnings will increase by more than $250,000.
And yet to make such a leap, you have to be willing to assume so many things about the future will be like the past: not only that incentivizing teachers differently and making tests more important won’t change their predictive effects (which the papers acknowledge), but, just as importantly, that the effect of education on earnings—or, more specifically, of teacher value-added on earnings—will be similar in future 20-year periods to what it was from 1989-2009. And that nothing else meaningful about teachers, students, schools, or earnings will evolve over the next 20 years in ways that mess with that relationship in a significant way.
I think we do this a lot—project into the future based on our understanding of a past that is, really, quite recent. Of course knowledge about the (relatively) recent past still should inform the decisions we make about the future. But rather a lot of modesty is called for when making blanket claims that assume the future is going to look just like the past. Maybe it’s human nature. But I think that modesty is often missing.
When people discuss affirmative action, they often have a mistaken view that higher education is filled with legions of under-qualified minorities. From the inside, we have the opposite view. The higher up you go, the less likely you will find folks from under-represented groups. So, what gives?
In addition to plain ideological differences, I think people are selectively looking at the academic pipeline. Basically, at some points in the career, affirmative action is indeed at work and some folks, including myself no doubt, will receive extra consideration. But most of the time, privilege is the rule. People will disproportionately focus on the parts of the pipeline where affirmative action is a modest benefit for some people.
To grasp the argument, it helps to break down what needs to happen in order for anyone to become a tenured professor:
- Getting a high college GPA.
- Applying to the “right” grad schools.
- Admission to the “right” grad schools.
- Passing courses.
- Passing exams.
- Getting the “right” adviser.
- Getting published in the “right” places.
- Writing the dissertation.
- Applying to tenure-track positions.
- Getting an offer from a school.
- Developing strong teaching skills.
- Continuing to publish in the “right” places.
- Getting elites in the profession to vouch for you.
- Getting the department and college to sign off on your tenure case.
As you can see, academia is this insanely long career track with a long list of interdependent parts.
Now let’s get back to affirmative action. Where does that policy work? In my scheme, it shows up mainly in step #3. Most schools will look askance at graduate school cohorts that lack ethnic or gender diversity. Some may even provide funds for recruitment and fellowships. But that’s it. After step #3, affirmative action is rare. Perhaps the exception is when deans or departments look to diversify the faculty at the junior level and approve a hire on that basis.
This helps explain the perceptions of the policy. Admissions is high profile, and people are openly competing for spots. Faculty hiring is also highly visible. In contrast, getting published in a journal or joining the “right” research groups is largely invisible to most observers until after the fact. And these are structured as homophilic networks, which might work against diversifying the faculty.
So, when it comes to diversity in academia, you can’t look at just one link in the chain. You have to look at the whole thing.
A few days ago, we got into a fruitful discussion of college admissions. Steven Pinker wrote a widely discussed article condemning the Ivy League for using non-academic criteria in admissions. I concurred with the basic point, but noted that it is all for naught because Pinker doesn’t discuss why college admissions is set up the way it is. Basically, current admissions policies are designed to generate income, political legitimacy, academic respect, and other goods. People simply wouldn’t stand for an admissions policy that would turn Harvard into Berkeley or Cal Tech, where Asians are the majority and Latinos and African Americans are underrepresented, not to mention all the influential people whose above-average kids can’t get into Harvard without the legacy program.
In the comments, Chris Martin suggested that if Cal Tech and Berkeley could do it, it wouldn’t be so bad. I think Chris underestimates the issue. To see why, let’s review Berkeley and Cal Tech:
- Berkeley: Berkeley had a policy of assigning students an index that combined a number of factors, such as GPA, SAT scores, race, extracurriculars, and so forth. This system was not changed internally; race was dropped only because of a ballot initiative and various judicial battles.
- Cal Tech: Even though Cal Tech is probably a more elite school than Harvard, it is very different in that the political pressures on engineering and science schools are much weaker. Roughly speaking, every smart kid in America dreams of the Ivy League, but only the nerdiest kids want to go to Cal Tech. In other words, I’ve never heard of wealthy senators intensely lobbying Cal Tech to make sure their C+ son makes it in.
Bottom line: These two cases are not exemplars of internally driven change. Instead, they highlight how constrained college admissions policies are.