orgtheory.net

i am so sorry, the GRE is still a valid tool in graduate admissions

A recent Atlantic article by Victoria Clayton makes the case that the GRE should be ditched based on some new research. The case for the GRE rests on the following:

  1. The GRE does actually, if modestly, predict early graduate school grades, and you need to do well in courses to get the degree.
  2. Many other methods of evaluating graduate school applications are garbage. For example, nearly all research on letters of recommendation shows that they do not predict performance.

To reiterate: nobody says GRE scores are a perfect predictor. I also believe that their predictive ability is lower for some groups. But the point is not perfection. The point is that the GRE sorta, kinda works and the alternatives do not work.

So what is the new evidence? Actually, the evidence is lame in some cases. For example, Clayton cites a 1997 Cornell study claiming that GREs don’t correlate with success. True, but if you actually read the research on GREs, there have been meta-analyses that compile data from multiple studies and find that the GRE does actually predict performance. One such study compiles data from over 1,700 samples and shows that, yes, the GRE does predict performance. Sorry, it just does, test haters.

Clayton also cites a Nature column by Miller and Stassun that correctly laments the fact that standardized tests sometimes miss good students, especially minorities. As I pointed out above, no one claims the GRE makes perfect predictions — only that the correlation is there, and that is better than the alternatives that simply don’t predict performance. But at least Miller and Stassun offer a new alternative: in-depth interviews. Miller and Stassun cite a study of 67 graduate students at Fisk and Vanderbilt selected via this method and note that their projected (not actual) completion rate is 80% – much better than the typical 50% of most grad programs.

Two comments: 1. I am intrigued. If the results can be replicated in other places, I would be thrilled. But so far, we have one (promising) study of a single program. Let’s see more. 2. I am still not about to ditch GREs, because I am not persuaded that academia is ready to implement a very intensive in-depth interview admissions system as its primary selection mechanism. The Miller and Stassun column refers to a study of physics graduate students – small numbers. What is realistic for grad programs with many applicants is that you need to screen people for interviews, and that screen will include, you guessed it, standardized tests.

Bottom line: The GRE is far from perfect but it is usable. There is no evidence to systematically undermine that claim. Some alternatives don’t work, and the newly proposed method, in-depth interviews, will probably need to be coupled with GREs.

50+ chapters of grad skool advice goodness: Grad Skool Rulz ($2!!!!)/From Black Power/Party in the Street

Written by fabiorojas

March 16, 2016 at 12:01 am

7 Responses


  1. Is the GRE’s predictive power supposed to be linear? My anecdotal sense is that anyone with scores below, say, the 70th percentile is unlikely to succeed in a good program. But there is very little difference between someone with scores in the 70s and someone with scores in the 90s. I don’t think any of the studies you cite above deal with this problem.

    Peter G. Klein

    March 16, 2016 at 4:12 am

  2. Peter, we are talking about average effects across a wide range of programs. Econ programs are now like engineering programs: only those with very strong math scores have a chance. But across the university, lots of programs can deal with more variation.

    fabiorojas

    March 16, 2016 at 4:14 am

  3. Also, your point about thresholds reinforces the basic point of the post. In econ programs, GREs are probably a decent rough screen.

    fabiorojas

    March 16, 2016 at 4:16 am

  4. This argument convinces me that GREs are reasonable when applied to programs in which success is primarily based on coursework. But we know that most PhD programs have substantial dropout rates–and, as the Fisk and Vanderbilt study points out–the current system is very poor at predicting which students will stick around. Perhaps this is anecdotal, but I’ve known quite a number of people who left PhD programs, and I can tell you with complete confidence that not one of them left because “they couldn’t do well in the courses”–they left because they couldn’t or didn’t want to start a dissertation, couldn’t or didn’t want to finish a dissertation, couldn’t manage to find their way away from a terrible advisor, or discovered while in the PhD program that academe was not for them. The debate here is not, as you point out, whether GREs are a poor predictor. It is whether predicting grades in coursework is even useful for PhD admissions.

    I think the interview idea sounds phenomenal. Obviously the larger programs are unlikely to adopt this, but I’d like to see some smaller programs with a real serious commitment to diversity and to bringing disadvantaged students into the fold try it out. Not all applicants need be interviewed, of course–I think such an experiment could clearly discard those with low GPAs, no prior research experience, etc. And perhaps there is even a utility in a really de minimis GRE cutoff depending on discipline (for example, students must be in the top third on at least one of the sections).

    The point to me is that graduate departments have a responsibility to solve this problem. Rather than relying on a tool which might work in certain limited circumstances, they need to find tools that actually work–unless, of course, Marc Bousquet is right and graduate departments have a real vested interest in accepting and then discarding students (see http://worksanddays.net/2003/File09.Bousquet_File09.Bousquet.pdf).

    Mikaila

    March 16, 2016 at 1:52 pm

  5. Mikaila,
    I definitely saw fellow grad students leave the program because they couldn’t do the course work. Very few left the program after the course work was finished (I can only think of one person). No one seemed to leave because they didn’t want to write a dissertation or thought academe wasn’t for them. Of course, I am in economics, and math ability drives success in the course work and a PhD doesn’t mean you are headed for academe.

    Tulip

    March 16, 2016 at 4:01 pm

  6. The GRE predicts problem-solving skills and most items under the big umbrella of academic aptitude, but it does not predict one’s social skills, ambition, motivation, and what not. Net of other factors, it is a great predictor, but unfortunately, we cannot parcel out other factors. As in many other areas, who are most likely to be successful in academia? Those who have good social skills with medium to upper-level intellect.

    Joe

    March 16, 2016 at 5:08 pm

  7. We could compile data from our old admissions process that could address some of these issues. An old study in my department found that “committee rating” had a positive effect on success net of test scores and grades, which is presumably capturing the “other” factors that the committee was noticing when they rated. (Test scores and grades also had independent effects in that old analysis.)

    There is also the problem of disentangling highly inter-correlated factors: family SES, test scores, grades. Test scores are very highly correlated with parents’ education and children of highly educated parents do enter school with a large head start in the human capital race. Test scores net of parents’ background probably are capturing differences in ability, but in a graduate admissions pool you mostly don’t have the data to distinguish the two effects.

    High test scores and low grades are almost always a bad bet: these tend to be smart people who just cannot get the motivation or self-discipline to get anything done.

    I personally know a number of very successful sociologists who had low test scores at admission, and know lots of students with high scores who were washouts. Which is not to say the correlation isn’t there, but there’s a lot of error around that coefficient.

    Agree that letters of reference are mostly useless.

    It was my feeling reading files that the writing samples were the best way to tell who really had some intellectual spark versus who was just following a template, especially if what they submitted was a major paper.

    olderwoman

    March 17, 2016 at 1:15 am
