
my ref prediction

I’m kind of obsessed with the REF, considering that it has zero direct impact on my life. It’s sort of like watching a train wreck in progress, and every time there’s big REF news I thank my lucky stars I’m in the U.S. and not the U.K.

For those who might not have been paying attention, the REF is the Research Excellence Framework, Britain’s homegrown academic homage to governmentality. Apologies in advance for any incorrect details here; to an outsider, the system’s a bit complex.

Every six years or so, U.K. universities have to submit samples of faculty members’ work – four “research outputs” per person – to a panel of disciplinary experts for evaluation. The panel rates the outputs from 4* (world-leading) to 1* (nationally recognized), although work can also be given no stars at all. Universities submit the work of most, but not all, of their faculty members; not being submitted to the REF is not, shall we say, a good sign for your career. “Impact” and “environment,” as well as research outputs, are also evaluated at the department level. Oh, and there’s £2 billion of research funding riding on the thing.

The whole system is arcane, and every academic I’ve talked to seems to hate it. Of course, it’s not meant to make academics happy, but to “provide…accountability for public investment in research and produce…evidence of the benefits of this investment.” Well, I don’t know that it’s doing that, but it’s certainly changing the culture of academia. I’d actually be very interested to hear a solid defense of the REF from someone who’s sympathetic to universities, so if you have one, by all means share.

Anyway, 2014 REF results were announced on Friday, to the usual hoopla. (If you’re curious but haven’t been following this, here are the results by field, including Sociology and Business and Management Studies; here are a few pieces of commentary.)

In its current form, outputs are “reviewed” by a panel of scholars in the relevant discipline. Academics fought hard for this on the grounds that only expert review could be a legitimate way to evaluate research. This peer review, however, has become something of a farce, as panelists are expected to “review” massive quantities of research. (I can’t find the figure now, but I think it’s on the order of 1,000 articles per person.)

At the same time, the peer-review element of the process (along with the complex case-study measurement of “impact”) has helped to create an increasingly elaborate, expensive, and energy-consuming infrastructure within universities around the management of the REF process. For example, universities conduct their own large-scale internal review of outputs to try to guess how REF panels will assess them, and to determine which faculty will be included in the REF submission.

All this has led to a renewed conversation about using metrics to distribute the money instead. The LSE Impact of Social Sciences blog has been particularly articulate on this front. The general argument is, “Sure, metrics aren’t great, but neither is the current system, and metrics are a lot simpler and cheaper.”

If I had to place money on it, I would bet that this metrics approach, despite all its limitations, will actually win out in the not-too-distant future. Which is awful, but no more awful than the current version of the REF. Of course metrics can be valuable tools. But as folks who know a thing or two about metrics have pointed out, they’re useful for “facilitating deliberation,” not “substitut[ing] for judgment.” It seems unlikely that any conceivable version of the REF would use metrics as anything other than a substitute for judgment.

In the U.S., this kind of extreme disciplining of the research process does not appear to be just around the corner, although Australia has partially copied the British model. But it is worth paying attention to nonetheless. The current British system took nearly thirty years to evolve into its present shape. One is reminded of the old story about the frog placed in a pot of cool water who, not noticing that it was heating up until too late, found himself boiled.

 

Written by epopp

December 22, 2014 at 4:07 pm

3 Responses


  1. Objective measures (e.g. citations) are far superior to the subjective judgments of a panel. Until, that is, those objective measures are used to incentivize. Then they will become as distorted as share prices as a measure of managerial performance, hospital waiting times, or standardised test scores as a measure of teaching. For example, the Head of Department will direct faculty to cite all the other members of their department in each publication; individuals will commission meaningless articles in open-access journals that cite them. More effort will go into such “fraud” (it really is fraud, though of course not illegal) than into research.


    Michael

    December 22, 2014 at 4:47 pm

  2. A qualified defence of the REF and its Australasian cousins: I taught in a NZ university, where they have their own version of the REF, the PBRF (Performance-Based Research Fund).

    One thing that’s really important to understand is that these systems only make sense given the employment law and employment contracts that exist in those countries, with pay scales that are more standardized (sometimes union-negotiated, sometimes across disciplines) than in North America, and the limited ability to introduce a proper tenure system. For all its problems, the great thing about North American tenure models is that they make a very clear link between individual productivity and individual jobs. If you could introduce tenure and promotion (with the risk of not getting tenure a key part of this) and let salaries differ by discipline, the need for the REF/PBRF would largely vanish.

    Pre-PBRF, in the NZ context there was a bit of a culture of low teaching loads without the research productivity one would expect to accompany them. I think similar stories can be told about parts of the UK and Australian systems. The REF/PBRF has incentivized departments to encourage productivity (or the appearance of productivity), and where I taught in NZ people felt it had made them more likely to do research, and to publish the research they were doing. Good things. The previous system encouraged, or at least didn’t penalize, the notorious academic problem of waiting until the manuscript was perfect before sending it off.

    Now, the collective assessment of departments and then institutions is problematic. In both NZ and Britain it’s pretty clear the PBRF/REF has changed, and distorted, the job market: there’s a very active market to get senior stars in place in time for them to count, and hesitancy about taking on promising but low-publication junior faculty. And the time and effort involved is largely additional to all the regular effort that goes into promotion and review within the university.

    The REF/PBRF framework is also probably hard for Americans to understand because of the strong role of central government in it. The funding of higher-education research is just so different in the US compared to the Commonwealth countries. At least in NZ, prior to the PBRF the allocation of basic research money to universities was done on historical/political/lobbying grounds; I think it was similar in Britain. So the REF/PBRF is an improvement in some ways there.

    All the negative things are true: it confuses collective and individual incentives, it’s terribly time-consuming, and there are weird discontinuities in the assessment of individual productivity. But compared to what it replaced there are some positives, and in the absence of a proper tenure/promotion system it might be the best way to financially motivate research outputs.


    Evan Roberts

    December 22, 2014 at 5:47 pm

  3. Thanks for the comments. And Evan, that does put it into much better context: it helps explain why the system might have made sense to some within, as well as outside, academia. I will take some convincing, though, that it’s a net positive at this point, at least in its UK version. Also, Crooked Timber had a nice post a few days ago apropos of your point about hiring senior stars (at 0.2 fractions): http://crookedtimber.org/2014/12/18/research-excellence-framework-the-denouement/


    epopp

    December 23, 2014 at 4:41 am

