orgtheory.net

is harvard worth it? what exactly does dale/krueger say?

One of the most widely discussed research papers in higher education from the 2000s was “Estimating the Payoff to Attending a More Selective College: An Application of Selection on Observables and Unobservables” by Stacy Dale and Alan Krueger. The standard interpretation is that the paper shows there is no link between the college attended and future income. In other words, the specific college you go to doesn’t matter much. A number of people, including Robin Hanson and Shamus Khan, have argued that this is an incorrect reading of the paper.

So what does the paper say? First, Dale and Krueger start with a discussion of biases in wage/education regression models. The issue is that the match between colleges and students is highly non-random: smart students apply to competitive colleges, financial aid creates additional sorting, and so on. So simply tossing a variable for the college attended into a regression can produce biased estimates.
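To see the problem concretely, here is a toy simulation (my own illustration, not D&K’s model; all the numbers are made up). Unobserved ability drives both where students enroll and what they later earn, so a naive regression of wages on selectivity finds a “payoff” even when the true school effect is zero:

```python
# Toy omitted-variable bias demo (invented numbers, not D&K's data).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 10_000

ability = rng.normal(size=n)                       # unobserved by the researcher
selectivity = 0.8 * ability + rng.normal(size=n)   # abler students sort into selective schools
log_wage = 0.5 * ability + rng.normal(size=n)      # true effect of selectivity on wages: zero

# Naive model: wages on selectivity alone -> biased upward.
naive = sm.OLS(log_wage, sm.add_constant(selectivity)).fit()

# "Oracle" model with ability observed -> coefficient near the true zero.
oracle = sm.OLS(log_wage, sm.add_constant(np.column_stack([selectivity, ability]))).fit()

print(naive.params[1])   # ~0.24: a spurious "payoff" to selectivity
print(oracle.params[1])  # ~0.00: no payoff once ability is controlled
```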

Their solution is to find a data set where you know that people have similar academic skills and opportunities but chose different colleges. There is such a data set, College and Beyond. It tells you where people were accepted to college and where they actually went. So you can compare people who were accepted to an elite school and attended it vs. people who were accepted to an elite school but went to a non-elite school.

The answer is to be found in Table III on page 1507. In the models without matching, there is a correlation between school selectivity and income. This is what Robin Hanson and others point out. But these estimates quickly shrink when you account for matching. The OLS estimate of the effect of school selectivity on log wages drops from .07 to about .03. Then, when you account for similar college application patterns, the effect becomes negative! In discussing these models, D&K state: “The effect of the school-average SAT score in these models is close to zero and more precisely estimated than in the matched-applicant models.” Further, on page 1511: “The coefficient (and standard error) on school average-SAT score was a robust .065 (.012) in the basic model, but fell to -.016 (.023) in the matched-applicant model and .010 (.012) in the self-revelation model.” So I say “1 point” for the standard reading of the paper and “0 points” for the critics. The correlation between school quality and income is not robust. It is clearly tied to unobserved variables.
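The self-revelation logic can be illustrated the same way (again a toy sketch with invented numbers, not D&K’s actual specification). Because students apply to schools near their own ability level, the average selectivity of the schools they applied to is a proxy for unobserved ability, and adding it to the model soaks up the spurious coefficient on the school actually attended:

```python
# Toy "self-revelation" model (assumed setup, not D&K's specification).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 10_000

ability = rng.normal(size=n)
# The application set's average selectivity is a noisy signal of ability.
avg_applied = ability + 0.3 * rng.normal(size=n)
# The attended school's selectivity scatters around the application set.
attended = avg_applied + 0.7 * rng.normal(size=n)
log_wage = 0.5 * ability + rng.normal(size=n)   # true effect of attended school: zero

basic = sm.OLS(log_wage, sm.add_constant(attended)).fit()
self_rev = sm.OLS(log_wage, sm.add_constant(np.column_stack([attended, avg_applied]))).fit()

print(basic.params[1])     # clearly positive: the "basic model" correlation
print(self_rev.params[1])  # near zero: application behavior absorbs the bias
```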

Now, there is a lot more to the paper, and much of it supports Robin, Shamus, and others. For example, D&K note that schools can be measured in ways other than SAT scores. If you toss in school dummies and then account for matching, there do appear to be some schools that affect later-life income. Also, as I’ve always stressed, D&K point out that the paper’s main finding, the non-robustness of the college SAT-income correlation, does not hold for particular subsamples, like students from poor families.

What is the take-home message? It’s actually simple: school effects often disappear when you account for unobserved heterogeneity, though colleges matter for some students and particular colleges may have income effects. But don’t take my word for it. This is how D&K put it in the conclusion of the paper:

These results are consistent with the conclusion of Hunt’s [1963, p. 56] seminal research: “The C student from Princeton earns more than the A student from Podunk not mainly because he has the prestige of a Princeton degree, but merely because he is abler. The golden touch is possessed not by the Ivy League College, but by its students.”

But our results would still suggest that there is not a “one-size-fits-all” ranking of schools, in which students are always better off in terms of their expected future earnings by attending the most selective school that admits them. This sentiment was expressed clearly by Stephen R. Lewis, Jr., president of Carleton College, who responded to the U.S. News & World Report college rankings (which ranked his school sixth among liberal arts colleges) by saying, “The question should not be, what are the best colleges? The real question should be, best for whom?”

Read the original yourself.

Adverts: From Black Power/Grad Skool Rulz


Written by fabiorojas

June 6, 2012 at 12:06 am

Posted in economics, education, fabio

6 Responses


  1. Yes… I agree, read the paper, and the new paper, which expands the set of colleges. http://dataspace.princeton.edu/jspui/bitstream/88435/dsp01gf06g265z/1/563.pdf

    What is incredibly frustrating, though, is that you cannot read the original working paper. Why is this so frustrating? Well, look at the difference in the abstracts. The original paper reads:

    “We find that students who attended colleges with higher average SAT scores do not earn more than other students who were accepted and rejected by comparable schools but attended a college with a lower average SAT score. However the Barron’s rating of school selectivity and the tuition charged by the school are significantly related to the students’ subsequent earnings.”

    The revised version reads:

    “We find that students who attended more selective colleges do not earn more than other students who were accepted and rejected by comparable schools but attended less selective colleges. However, the average tuition charged by the school is significantly related to the students’ subsequent earnings.”

    Presto! Selectivity finding gone! Caveat that selectivity is defined by SAT score alone, vanished!

    In the original version, they report, on page 24, “Based on the straightforward regression results in column 1, men who attend the most competitive colleges [according to Barron’s 1982 ratings] earn 23 percent more than men who attend very competitive colleges, other variables in the equation being equal.”

    This table is nowhere to be found in the revised version or the published paper.

    Why? Because in the published paper, Table VI (p. 1517) shows that their constructed SAT selectivity measure and the Barron’s selectivity measure produce similar results across their variables of interest.

    But this is not the same as demonstrating that prestige, independent of SAT score, actually has a considerable impact on earnings (I’ll take a 23 percent raise, thank you very much). It seems that Dale and Krueger simply decided to remove the table that complicated the otherwise sexy story from their paper, and not to reproduce this analysis in later versions.

    But before I sound too smug about these folks, let me also say this: their decision to publish their papers online so that you can actually follow the development of their analysis is incredibly admirable. Something we don’t do in sociology.


    shakha

    June 6, 2012 at 2:01 am

  2. I agree: having a major finding shift from paper to paper is frustrating. Perhaps there was a desire to make the paper sexier. Or maybe the original finding was flawed; people make mistakes in their Stata code all the time. Or maybe the reviewers asked for some revision that undermined the 23% finding.

    As advocates of replication, you and I might agree on the solution. One could obtain the C&B data, rerun the analyses, and see whether the original result can be reproduced and whether there’s any decent reason for the shift between papers.


    fabiorojas

    June 6, 2012 at 5:21 am

  3. I can certainly agree. And I think it would make a great master’s thesis. The data are available. The analysis is clearly laid out. I guess I could do it… But my technical skills are so weak/rusty that it would take me 10x what it would take someone else. At least that’s my excuse.


    shakha

    June 6, 2012 at 5:41 am

  4. If D&K have Stata code, it’s not hard at all. Just download C&B and execute the script.


    fabiorojas

    June 6, 2012 at 6:11 am

  5. It has been a while since I’ve read the D+K paper, but I think there are good grounds to be skeptical about their results. Caroline Hoxby and Mark Long have raised the issue that D+K’s attempt to match kids may have confined their sample to weird kids who are unrepresentative on unobservables. We can wonder about the kids who are driving D+K’s effects: those who enrolled in less selective colleges but were admitted to more selective ones. I also wonder about kids who enrolled in selective colleges but applied to substantially less selective ones; that could reflect unobserved characteristics that would hurt them in labor markets.

    I think the results published by Hoekstra in 2009 in the Review of Economics and Statistics are pretty persuasive, using a discontinuity design to show that kids who just barely made a state flagship university’s admission threshold had a substantial bump in their wages compared to kids who just barely fell short of the threshold. Granted, Hoekstra is only comparing enrollees to nonadmits, who may have gone on to another college or no college at all, so the counterfactual is murky.
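    Here is a minimal simulated sketch of that kind of discontinuity comparison (the cutoff, effect size, and bandwidth are illustrative assumptions; nothing here is Hoekstra’s data):

    ```python
    # Regression-discontinuity sketch in the spirit of Hoekstra (2009).
    # All numbers are simulated for illustration only.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(2)
    n = 5_000

    score = rng.normal(size=n)            # admission index, cutoff normalized to 0
    admitted = (score >= 0).astype(float)
    # Wages rise smoothly with the score, plus a jump at the cutoff.
    log_wage = 0.3 * score + 0.2 * admitted + rng.normal(scale=0.5, size=n)

    # Local linear regression in a window around the cutoff, with separate
    # slopes on each side; the coefficient on `admitted` estimates the jump.
    h = 0.5
    win = np.abs(score) <= h
    X = np.column_stack([admitted[win], score[win], admitted[win] * score[win]])
    rd = sm.OLS(log_wage[win], sm.add_constant(X)).fit()
    print(rd.params[1])  # ~0.2: the discontinuity at the admission threshold
    ```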


    joshtk76

    June 6, 2012 at 5:22 pm

  6. […] is harvard worth it? what exactly does dale/krueger say? « orgtheory.net […]



