Archive for the ‘education’ Category
A common theme in the history of education is that outside reformers run into a brick wall when they try to work in the K-12 arena. There is something incredibly sticky about K-12 that makes it impervious to outsiders. Mark Zuckerberg learned this when he gave $100m to Newark schools and got nothing in return. The LA Times recently described how Bill and Melinda Gates learned the same lesson and have dialed back their donations to K-12:
In 2009, it pledged a gift of up to $100 million to the Hillsborough County, Fla., schools to fund bonuses for high-performing teachers, to revamp teacher evaluations and to fire the lowest-performing 5%. In return, the school district promised to match the funds. But, according to reports in the Tampa Bay Times, the Gates Foundation changed its mind about the value of bonuses and stopped short of giving the last $20 million; costs ballooned beyond expectations, the schools were left with too big a tab and the least-experienced teachers still ended up at low-income schools. The program, evaluation system and all, was dumped.
It’s not just one grant, but many that failed. There are multiple reasons – stakeholders who can’t be bought, resistance to reform of any type, and the decentralization of the system. Also, education is an area where there is an aversion to simple ideas that work and an emphasis on consultants and technology. Add your own theories of philanthropic failure in the comments.
I wanted to start this post with a dramatic question about whether some knowledge is too dangerous to pursue. The H-bomb is probably the archetypal example of this dilemma, and brings to mind Oppenheimer’s quotation of the Bhagavad Gita upon the detonation of Trinity: “Now I am become Death, the destroyer of worlds.”
But really, that’s way too melodramatic for the example I have in mind, which is much more mundane. Much more bureaucratic. It’s less about knowledge that is too dangerous to pursue and more about blindness to the unanticipated — but not unanticipatable — consequences of some kinds of knowledge.
The knowledge I have in mind is the student-unit record. See? I told you it was boring.
The student-unit record is simply a government record that tracks a specific student across multiple educational institutions and into the workforce. Right now, this does not exist for all college students.
There are records of students who apply for federal aid, and those can be tied to tax data down the road. This is what the Department of Education’s College Scorecard is based on: earnings 6-10 years after entry into a particular college. But this leaves out the 30% of students who don’t receive federal aid.
There are states with unit-record systems. Virginia’s is particularly strong: it follows students from Virginia high schools through enrollment in any not-for-profit Virginia college and then into the workforce as reflected in unemployment insurance records. But it loses students who enter or leave Virginia, which is presumably a considerable number.
But there’s currently no comprehensive federal student-unit record system. In fact, at the moment, creating one is actually illegal. It was banned in an amendment to the Higher Education Act reauthorization in 2008, largely because the higher ed lobby hates the idea.
Having student-unit records available would open up all kinds of research possibilities. It would help us see the payoffs not just to college in general, but to specific colleges, or specific majors. It would help us disentangle the effects of the multiple institutions attended by the typical college student. It would allow us to think more precisely about when student loans do, and don’t, pay off. Academics and policy wonks have argued for it on just these grounds.
In fact, basically every social scientist I know would love to see student-unit records become available. And I get it. I really do. I’d like to know the answers to those questions, too.
But I’m really leery of student-unit records. Maybe not quite enough to stand up and say, “This is a terrible idea and I totally oppose it.” Because I also see the potential benefits. But leery enough to want to point out the consequences that seem likely to follow a student-unit record system. Because I think some of the same people who really love the idea of having this data available would be less enthused about the kind of world it might help, in some marginal way, create.
So, with that as background, here are three things I’d like to see data enthusiasts really think about before jumping on this bandwagon.
First, it is a short path from data to governance. For researchers, the point of student-level data is to provide new insights into what’s working and what isn’t: to better understand what the effects of higher education, and the financial aid that makes it possible, actually are.
But for policy types, the main point is accountability. The main point of collecting student-level data is to force colleges to take responsibility for the eventual labor market outcomes of their students.
Sometimes, that’s phrased more neutrally as “transparency”. But then it’s quickly tied to proposals to “directly tie financial aid availability to institutional performance” and called “an essential tool in quality assurance.”
Now, I am not suggesting that higher education institutions should be free to just take all the federal money they can get and do whatever the heck they want with it. But I am very skeptical that the net effect of accountability schemes is generally positive. They add bureaucracy, they create new measures to game, and the behaviors they actually encourage tend to be remote from the behaviors they are intended to encourage.
Could there be some positive value in cutting off aid to institutions with truly terrible outcomes? Absolutely. But what makes us think that we’ll end up with that system, versus, say, one that incentivizes schools to maximize students’ earnings, with all the bad behavior that might entail? Anyone who seriously thinks that we would use more comprehensive data to actually improve governance of higher ed should take a long hard look at what’s going on in the UK these days.
Second, student-unit records will intensify our already strong focus on the economic return to college, and further devalue other benefits. Education does many things for people. Helping them earn more money is an important one of those things. It is not, however, the only one.
Education expands people’s minds. It gives them tools for taking advantage of opportunities that present themselves. It gives them options. It helps them to find work they find meaningful, in workplaces where they are treated with respect. And yes, maybe it’s selection effects — or maybe it’s just that they’re richer — but college graduates are happier and healthier than nongraduates.
The thing is, all these noneconomic benefits are difficult to measure. We have no administrative data that tracks people’s happiness, or their health, let alone whether higher education has expanded their internal life.
What we’ve got is the big two: death and taxes. And while it might be nice to know whether today’s 30-year-old graduates are outliving their nongraduate peers in 50 years, in reality it’s tax data we’ll focus on. What’s the economic return to college, by type of student, by institution, by major? And that will drive the conversation even more than it already does. Which to my mind is already too much.
Third, social scientists are occupationally prone to overestimate the practical benefit of more data. Are there things we would learn from student-unit records that we don’t know? Of course. There are all kinds of natural experiments, regression discontinuities, and instrumental variables that could be exploited, particularly around financial aid questions. And it would be great to be able to distinguish between the effects of “college” and the effects of that major at this college.
But we all realize that a lot of the benefit of “college” isn’t a treatment effect. It’s either selection — you were a better student going in, or from a better-off family — or signal — you’re the kind of person who can make it through college; what you did there is really secondary.
Proposals to use income data to understand the effects of college assume that we can adjust for the selection effects, at least, through some kind of value-added model, for example. But this is pretty sketchy. I mean, it might provide some insights for us to think about. But using it as a basis for concluding that schools like Caltech, Colgate, MIT, and Rose-Hulman Institute of Technology (at the top of Brookings’ list) provide the most value — versus that they have select students who are distinctive in ways that aren’t reflected by adjusting for race, gender, age, financial aid status, and SAT scores — is a little ridiculous.
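For concreteness, here’s a minimal sketch of the value-added logic I’m objecting to — entirely made-up data and variable names, and certainly not Brookings’ actual model: regress earnings on observed student characteristics, then rank colleges by their graduates’ mean residual.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: one row per graduate. None of these numbers are real.
n = 1000
sat = rng.normal(1100, 150, n)        # SAT score
aid = rng.integers(0, 2, n)           # received financial aid (0/1)
college = rng.integers(0, 5, n)       # college attended (0..4)
earnings = 30_000 + 25 * sat + 5_000 * rng.normal(size=n)

# Step 1: regress earnings on the observed student characteristics.
X = np.column_stack([np.ones(n), sat, aid])
beta, *_ = np.linalg.lstsq(X, earnings, rcond=None)
residual = earnings - X @ beta

# Step 2: a college's "value added" is its graduates' mean residual --
# earnings above what their observed characteristics predict.
value_added = {c: residual[college == c].mean() for c in range(5)}
ranking = sorted(value_added, key=value_added.get, reverse=True)
```

Note that in this simulated data the colleges contribute nothing at all to earnings, so whatever ranking comes out is pure noise plus whatever selection the covariates fail to capture — which is roughly the worry.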
So, yeah. I want more information about the real impact of college, too. But I just don’t see the evidence out there that having more information is going to lead to policy improvements.
If there weren’t such clear potential negative consequences, I’d say sure, try, it’s worth learning more even if we can’t figure out how to use it effectively. But in a case where there are very clear paths to using this kind of information in ways that are detrimental to higher education, I’d like to see a little more careful thinking about the real likely impacts of student-unit records versus the ones in our technocratic fantasies.
Thad Domina, Andrew Penner and Emily Penner have a really interesting new paper out in Sociological Science, AKA the official journal of people on my Twitter feed.
So it turns out that some crazy Californians had the great idea to assign high school students color-coded IDs based on their standardized test results. Everyone got a white, gold, or platinum card based on their level of proficiency on the California state tests. The students had to display their ID card whenever they were on campus, and gold and platinum card holders got certain perks, including discounts on school events, but most visibly an “express” line in the cafeteria. What could possibly go wrong?
Well, this raised some eyebrows once word traveled beyond the schools themselves. The article doesn’t name the two schools, so I won’t either, but Google exists, and it sounds pretty clear that creating these visible new categories had real status effects. You can find quotes about how kids in honors classes with platinum cards told other kids in honors classes who only had a gold card that they shouldn’t even be there, and a principal apparently told some girls they should try to find platinum prom dates.
The program was in place for two years, then was shut down after quite a bit of unfavorable press. But not before our intrepid sociologists managed to collect some data.
The article looks at two main things: first, whether the rewards affected test scores and other academic outcomes, and second, whether creating status groups had effects of its own.
The answer is yes on both counts. Scores went up significantly on both math and English language arts (ELA) exams. As predicted, effects on other indicators of achievement (grades and scores on exit exams, a different test) were less consistently positive. Moreover, the effects were greater for students near the threshold (i.e. if you just barely missed the gold card one year, your scores increased more). Effects were larger for Asian students than white students, and larger for white students than Hispanic students.
So far, this is consistent (well, with the exception of the racial/ethnic variation) with behavior of the “adolescent econometricians” (p. 266, quoting Manski) assumed by the program. But here’s where it gets interesting.
The authors also use the card cutoffs to perform a regression discontinuity analysis. The students just above the gold card threshold basically look like those just below it. But it turns out they do much better. Or more accurately, kids who got the low-status white card do worse – to the tune of 0.35 SDs on the ELA test, and 0.10 SDs on the math test (both significant at the p = 0.01 level).
Interestingly, the impact on grades is even greater: receiving a low-status white card reduces math grades an amount equivalent to moving from a C- down to a D, and ELA grades all the way from a C+ to a D. That’s a big drop. Domina, Penner & Penner speculate that this may be because the status categories are particularly salient for teachers, who may then treat or grade low-status students differently.
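The regression discontinuity logic can be sketched in a few lines — this is a toy simulation with invented numbers, not the authors’ code or data: compare students in a narrow band on either side of the cutoff, fitting a separate line on each side and taking the gap between the two lines’ predictions at the cutoff itself.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: a prior-year test score (the running variable) and the
# cutoff separating a white card from a gold card. All numbers invented.
n = 4000
score = rng.uniform(200, 600, n)
cutoff = 400
gold = score >= cutoff

# Simulate an outcome with a smooth trend plus a 0.35 SD penalty for
# landing below the cutoff -- mimicking the paper's headline estimate.
outcome = 0.005 * score - 0.35 * (~gold) + rng.normal(0, 0.5, n)

# Local linear RD: fit a line within a bandwidth on each side of the
# cutoff and compare their predicted values at the cutoff itself.
def fit_at_cutoff(x, y):
    X = np.column_stack([np.ones(len(x)), x - cutoff])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[0]  # predicted outcome exactly at the cutoff

bw = 50
left = (score >= cutoff - bw) & (score < cutoff)
right = (score >= cutoff) & (score < cutoff + bw)
effect = fit_at_cutoff(score[right], outcome[right]) - fit_at_cutoff(
    score[left], outcome[left]
)
```

The identifying assumption is the one the post describes: students just above and just below the threshold are essentially interchangeable, so the jump at the cutoff isolates the effect of the card itself.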
So as usual, the lesson here is that while yes, people respond to incentives, they do so in social contexts. You can’t just assume incentives are going to have similar effects on all groups of people, or ignore the effects of new status groups that are produced.
Two other thoughts here. The one interpretation I might quibble with is the authors’ attribution of the racial/ethnic differences to stereotype threat. While that’s one possibility, it’s also certainly plausible that the different response reflects cultural difference (e.g. in the importance attached to test results).
The other is about generalizability. Admirably, the authors don’t really try to generalize at all beyond saying that status categories have effects. But I can imagine people looking at an experiment like this and assuming that whatever the results were, they’d hold up across schools.
But status systems vary from place to place, of course. At the small rural high school I went to, wearing around a card advertising your high test scores would have been a fast ticket to social exclusion. Fryer and Torelli’s “Acting White” paper is hotly debated (and I haven’t followed the whole debate), but they argue that there are racial differences in whether academic achievement is associated with higher social status, and that this gap is greater in schools with more interracial contact. Clearly not every school mirrors the relatively wealthy, high-achieving schools that implemented this program, and not every kid associates high academic achievement with high social status.
That’s just a caution against assuming too much uniformity across social settings, though, not a criticism of the actual paper, which is a great use of a natural (?) experiment. And just think, they didn’t have to suffer through multiple R&Rs to publish it.
I just discovered an Economist article from last year showing that, once again, which college you go to is a lot less important than what you do at college. Using NCES data, PayScale estimated return on investment for students from selective and non-selective colleges. Then, they separated STEM majors from arts/humanities. Each dot represents a college major and its estimated rate of return:
Some obvious points:
- In the world of STEM, it really doesn’t matter where you go to school.
- High prestige arts majors do worse, probably because they go into low paying careers, like being a professional painter (e.g., a Yale drama grad will actually try Broadway, while others may not get that far). [Ed. Fabio read the graph backwards when he wrote this.]
- A fair number of arts/humanities majors have *negative* rates of return.
- None of the STEM majors have a negative rate of return.
- The big message – college matters less than major.
There is also a big message for people who care about schools and inequality. If you want minorities and women to have equal pay, one of the very first things to do is to get more of them into STEM fields. All other policies are small in comparison.
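To see how a major can have a *negative* rate of return, here’s a stylized version of the calculation behind charts like this — not PayScale’s actual methodology, and every number is invented: treat tuition plus forgone wages as the investment, the earnings premium over a high school graduate as the payoff, and annualize.

```python
# Stylized ROI of a degree: compare the lifetime earnings premium over
# a high-school-only path against the total cost of getting the degree.
# All figures are made up for illustration.

def college_roi(cost, grad_salary, hs_salary, years=20, growth=0.02):
    """Annualized return on the investment in a degree."""
    invest = cost + 4 * hs_salary  # tuition plus four years of forgone wages
    premium = sum(
        (grad_salary - hs_salary) * (1 + growth) ** t for t in range(years)
    )
    return (premium / invest) ** (1 / years) - 1  # annualized rate

stem = college_roi(cost=120_000, grad_salary=70_000, hs_salary=35_000)
arts = college_roi(cost=120_000, grad_salary=40_000, hs_salary=35_000)
```

With these invented salaries the STEM figure comes out positive and the arts figure negative: a small earnings premium simply never pays back the tuition and forgone wages.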
I’m pretty sure that Atlantic-NYT-New Yorker-etc. readers have a near-bottomless appetite for articles on the elite college admissions process. I kind of do too, although I always feel a little gross afterwards, like I had one too many pieces of my kids’ Halloween candy. Not that I would ever eat their Halloween candy.
Anyway, the latest offering in this genre is a three-part series in the Atlantic (part one, part two, part three) by Alia Wong. Mostly it’s about how the process became so competitive, and what elite colleges are doing to try to make it less insane — in particular, how to stop incentivizing a certain kind of self-obsessed self-discipline among the young demographic aiming at these schools.
The piece is well-researched, although a lot of this ground has already been covered exhaustively. But I did think the article overlooked a couple of key things.
First: Although the article is clear at the outset that this is a problem of “the 3%”, as Derek Thompson put it, it folds broader college-admissions developments in with elite college-admissions developments in ways that don’t always work. In particular, the first two parts point to enrollment management as a factor driving change in the elite admissions process.
I think enrollment management — the emergence of rationalized practices for selecting and aiding students to achieve some institutional objectives (financial, rankings, whatever) — is incredibly important in understanding changes in higher ed as a whole. But I suspect it doesn’t explain much of what’s changed in elite admissions in particular.
Enrollment management started as a financial tool for mid-level private colleges that were having trouble making ends meet (gated link). Elites have never embraced it quite so much, though they have adopted some of its practices. Where enrollment management matters is in understanding the rise of merit aid at less-elite colleges, for example, or in explaining the shift toward out-of-state students at public universities.
Second: So far, at least, the article overlooks the extent to which the elite-admissions frenzy is driven by an increasingly winner-take-all society. If the returns to being in the 1% are increasing, then the returns to an Ivy degree are likely increasing as well. Conversely, as the middle class is hollowed out, upper-middle-class parents see the cost of not getting their kids on the academic fast track as dooming them to lifelong economic instability, rather than pointing them toward a nondescript but perfectly pleasant middle-class life.
Finally: There’s a simple solution to this problem: a lottery. Set some baseline GPA and SAT criteria, then admit people at random. Tweak the criteria to ensure racial and socioeconomic diversity, if you like. Even favor alums, if you absolutely must. But ditch the whole magical individualized selection process.
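The mechanism is simple enough to sketch — thresholds, names, and field labels below are all hypothetical, not anyone’s actual admissions policy:

```python
import random

def lottery_admit(applicants, n_slots, min_gpa=3.5, min_sat=1400, seed=None):
    """Admit a uniform random sample of applicants who clear the baseline.

    `min_gpa` and `min_sat` are illustrative cutoffs; in practice the
    criteria could be tweaked to ensure diversity, favor alums, etc.
    """
    eligible = [
        a for a in applicants if a["gpa"] >= min_gpa and a["sat"] >= min_sat
    ]
    rng = random.Random(seed)
    return rng.sample(eligible, min(n_slots, len(eligible)))

pool = [
    {"name": "A", "gpa": 3.9, "sat": 1510},
    {"name": "B", "gpa": 3.2, "sat": 1550},  # below the GPA floor
    {"name": "C", "gpa": 3.8, "sat": 1450},
    {"name": "D", "gpa": 3.6, "sat": 1380},  # below the SAT floor
]
admitted = lottery_admit(pool, n_slots=1, seed=42)
```

The whole point is what the code *doesn’t* contain: no essays, no holistic review, no magical individualized selection.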
Of course, that’s pretty much a nonstarter. The whole point of Harvard admissions is, as Karabel suggests, to get to choose the next generation of elites. A lottery would undermine the very purpose.
[Edited to add link to part three, which doesn’t really affect my overall take. If anything, it further conflates general commercialization of higher ed with the elite admissions frenzy in an unhelpful way. Both important, but not closely related.
Also, ’cause I haven’t done nearly enough promoting — though still planning to — see Craig Tutterow and James Evans’ piece modeling how rankings might drive the development of enrollment management here in our new volume of RSO on “The University Under Pressure“. (Links gated, email me for copies.)]
A recent Atlantic article by Victoria Clayton makes the case, based on some new research, that the GRE should be ditched. The case for the GRE rests on the following:
- The GRE does actually, if modestly, predict early graduate school grades and you need to do well in courses to get the degree.
- Many other methods of evaluating graduate school applications are garbage. For example, nearly all research on letters of recommendation shows that they do not predict performance.
To reiterate: nobody says GRE scores are a perfect predictor. I also believe that their predictive ability is lower for some groups. But the point is not perfection. The point is that the GRE sorta, kinda works and the alternatives do not.
So what is the new evidence? Actually, the evidence is lame in some cases. For example, Clayton cites a 1997 Cornell study claiming that GREs don’t correlate with success. True, but if you actually read the research on GREs, there have been meta-analyses that compile data from multiple studies and find that the GRE does actually predict performance. This study compiles data from over 1,700 samples and shows that, yes, the GRE does predict performance. Sorry, it just does, test haters.
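To make “sorta, kinda works” concrete, here’s a toy simulation — invented numbers, not the meta-analysis data — of why even a modest correlation matters for screening: selecting the top of the pool on a noisy predictor still beats selecting at random.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical cohort: GRE is a noisy signal of underlying ability,
# and first-year grades are another noisy signal of the same thing.
n = 5000
ability = rng.normal(size=n)
gre = ability + rng.normal(0, 2, n)      # weak signal of ability
grades = ability + rng.normal(0, 1, n)

# The GRE-grades correlation is modest here, around 0.3.
r = np.corrcoef(gre, grades)[0, 1]

# Even a modest predictor shifts the admitted pool: compare mean grades
# of the top 20% by GRE against a randomly chosen 20%.
k = n // 5
by_gre = grades[np.argsort(gre)[-k:]].mean()
random_pick = grades[rng.choice(n, k, replace=False)].mean()
```

In this setup the GRE-screened group ends up with noticeably better average grades than the random group, despite the weak correlation — which is the whole argument for keeping an imperfect screen over no screen.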
Clayton also cites a Nature column by Miller and Stassun that correctly laments the fact that standardized tests sometimes miss good students, especially minorities. As I pointed out above, no one claims the GRE makes perfect predictions. Only that the correlation is there and that is better than the alternatives that simply don’t predict performance. But at least Miller and Stassun offer a new alternative – in-depth interviews. Miller and Stassun cite a study of 67 graduate students at Fisk and Vanderbilt selected via this method and note that their projected (not actual) completion rate is 80% – much better than the typical 50% of most grad programs.
Two comments: 1. I am intrigued. If the results can be replicated in other places, I would be thrilled. But so far, we have one (promising) study of a single program. Let’s see more. 2. I am still not about to ditch GREs because I am not persuaded that academia is ready to implement a very intensive in-depth interview admissions system as its primary selection mechanism. The Miller and Stassun column refers to a study of physics graduate students – small numbers. What is realistic for grad programs with many applicants is that you need to screen people for interviews, and that screen will include, you guessed it, standardized tests.
Bottom line: The GRE is far from perfect, but it is usable. There is no evidence that systematically undermines that claim. Some alternatives don’t work, and the new proposed method, in-depth interviews, will probably need to be coupled with GREs.
A quick post to announce the publication of “The University Under Pressure“, a new volume of Research in the Sociology of Organizations. Edited by Catherine Paradeise and myself, contributors include Mikaila Mariel Lemonik Arthur, Julien Barrier, Sondra Barringer, Manuelito Biag, Amy Binder, George Breslauer, Daniel Davis, Irwin Feller, Joseph Hermanowicz, James Evans, Ghislaine Filliatreau, Otto Hüther, Daniel Kleinman, George Krücken, Séverine Louvel, Christine Musselin, Robert Osley-Thomas, Richard Scott, Abby Stivers, Pedro Teixeira, and Craig Tutterow. A short description:
Universities are under pressure. All over the world, their resource environment is evolving, demands for accountability have increased, and competition has become more intense. At the same time, emerging countries have become more important in the global system, demographic shifts are changing educational needs, and new technologies threaten, or promise, to disrupt higher education. This volume includes cutting-edge research on the causes and consequences of such pressures on universities as organizations, particularly in the U.S. and Europe. It provides an empirical overview of pressures on universities in the Western world and insight into what globalization means for universities, and it looks at specific changes in the university environment and how organizations have responded. The volume examines changes internal to the university that have followed these pressures, from the evolving role of unions to new pathways followed by students, and finally asks about the future of the university as a public good in light of a transformation of student roles and university identities.
I’m sure I’ll write some more substantive posts about the volume and some of the papers in it — as well as why I think developing an organizational sociology of higher ed is important — in the days to come. But in the meanwhile, take a look. And drop me a line for access to pdfs.