Archive for the ‘research’ Category
Grants, grants, grants. Every academic likes getting them. And every administrator wants more of them. In some fields, you can’t be an active researcher without them. In all fields, the search for grants grows ever more competitive.
PLOS recently published the results of a survey of the grant-writing process in the U.S. Drawing on a nonrandom sample of astronomers and social & personality psychologists, and asking about their applications to NASA (astronomers), NIH (psychologists), and NSF (both), the authors found that respondents had applied for four grants over the past four years, on average, and received one. Proposals took an average of 116 hours to write.
While respondents did note some non-pecuniary benefits to grant-writing, comments suggested that many researchers simply gave up after repeated failures, judging it a poor use of their time. In the words of one respondent:
I applied for grants from the NSF in 2004, 2005, 2006 and 2007…. Most of the reasons given for not funding were that funds were too tight that particular year and that I should reapply the next year since the proposal had merit… I finally just gave up.
The authors, who start from the certainly-debatable-but-not-totally-baseless assumption that funding is fairly random (see these cites), make some rough calculations and conclude that at a 20% funding rate, half of grant applicants will be forced to abandon their efforts after three failed attempts, with the figure closer to 2/3 for previously unfunded researchers.
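To make the back-of-the-envelope arithmetic concrete: if funding decisions really were independent draws at a 20% success rate, the chance of going 0-for-3 is 0.8^3, or about 51%. Here is a minimal sketch of that calculation; the 13% rate used for previously unfunded researchers below is just an illustrative guess on my part, not a number from the paper.

```python
# Back-of-the-envelope check: probability that every one of k proposals is rejected,
# assuming each funding decision is an independent draw at a fixed success rate.

def prob_all_rejected(success_rate: float, attempts: int) -> float:
    """Chance that all `attempts` applications come back unfunded."""
    return (1 - success_rate) ** attempts

print(round(prob_all_rejected(0.20, 3), 2))  # 0.51 -> about half strike out three times at 20%
print(round(prob_all_rejected(0.13, 3), 2))  # 0.66 -> roughly 2/3 at a lower, illustrative rate
```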
Now, this is a limited study and there is plenty of room to argue about the data it uses and the assumptions it makes. But few would disagree with its conclusion: “20% funding rates impose a substantial opportunity cost on researchers by wasting a large fraction of the available research time for at least half of our scientists, reducing national scientific output.”
And 20% is the good scenario these days. The National Institute on Aging, one of NIH’s major funders of sociology research, recently announced paylines at the 7th (for under-$500k grants) and 4th (for over-$500k grants) percentiles. That’s incredibly competitive.
Yet at the same time, universities, desperate for revenues, are pushing harder and harder for researchers to bring in grants, with those sweet, sweet indirect costs. I know mine is. When I arrived at SUNY Albany seven years ago, I was happy that the university seemed to treat grants as a bonus—something nice to have, not a requirement. This was a contrast with at least one other place I interviewed.
But that’s less the case now. I won’t go into details here, but suffice it to say that the pressure on the department to bring in grants is steady and increasing. And while this may not be happening everywhere, it’s certainly common, particularly at cash-strapped public universities. It’s even worse, of course, in the soft-money parts of higher ed, like med schools.
The thing is, it’s simply not a viable strategy. Unless the pool of grant funding is massively increased at the federal level—a remote possibility—this is a zero-sum game. And so we’re left with what is basically a variation on the tragedy of the commons problem. Rational individuals (or individuals responding to their employers’ rational demands) will write more grant applications, since doing so still probably increases one’s chances of being funded. And if everyone else is working harder and harder to secure grant funding, maintaining constant effort will likely result in a decreasing payoff.
But what a collective waste of resources. The PLOS article cites research suggesting that university faculty spend an average of 10.7 (public universities) to 14.5 (private universities) hours per week on research during the semester. Even assuming grant-writers are finding somewhat more time for research, grant-writing still takes half a semester’s research time or so. Yes, there’s some intellectual benefit to the grant-writing process. But probably not so much to writing multiple unfunded grants.
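The arithmetic behind that estimate, assuming a 15-week semester (the semester length is my assumption; the 116 hours per proposal and the weekly research hours are the article’s figures):

```python
# Rough arithmetic: what share of a semester's research time one proposal consumes.
# The 15-week semester is my assumption; the other figures come from the PLOS article.

HOURS_PER_PROPOSAL = 116
RESEARCH_HOURS_PER_WEEK = {"public": 10.7, "private": 14.5}
WEEKS_PER_SEMESTER = 15

for sector, hours_per_week in RESEARCH_HOURS_PER_WEEK.items():
    semester_hours = hours_per_week * WEEKS_PER_SEMESTER
    share = HOURS_PER_PROPOSAL / semester_hours
    print(f"{sector}: one proposal = {share:.0%} of a semester's research time")
# public: 72%, private: 53% -- i.e., half a semester or more per application
```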
My modest proposal is to cap the number of applications—either at the individual level, or the institutional level. Or perhaps establish a lottery for the right to submit in a given round. I can think of disadvantages to that—why should the best (or at least most fundable) researchers not be able to apply as often as they want? But the goal here isn’t to share the wealth—it’s to waste less of our collective time.
In the meanwhile, the ever-growing pressure for external funding in an environment of flat-at-best resources is itself throwing money out the window. At a time of intense pressure to do more with less, this is the biggest irony of all.
Heads turn whenever accusations are made about academic impropriety, but it is especially provocative when one of the researchers involved in a study makes accusations about his/her own research. A forthcoming article in the Journal of Management Inquiry written by an anonymous senior scholar in organizational behavior does exactly this. The author, who remains anonymous to protect the identity of his/her coauthors, claims that research in organizational behavior routinely violates norms of scientific hypothesis testing.
I want to be clear: I never fudged data, but it did seem like I was fudging the framing of the work, by playing a little fast and loose with the rules of the game—as I thought I understood how the game should be played according to the rules of the scientific method. So, I must admit, it was not unusual for me to discover unforeseen results in the analysis phase, and I often would then create post hoc hypotheses in my mind to describe these unanticipated results. And yes, I would then write up the paper as if these hypotheses were truly a priori. In one way of thinking, the theory espoused (about the proper way of doing research) became the theory-in-practice (about how organizational research actually gets done).
I’m certain that some people reading this will say, “Big deal, people reformulate hypotheses all the time as they figure out what their analysis is telling them.” The author recognizes this is the case and, I believe, relates his/her experience as a warning of how the field’s standards for writing are evolving in detrimental ways. For example, the author says, “there is a commonly understood structure or ‘script’ for an article, and authors ask for trouble when they violate this structure. If you violate the script, you give the impression that you are inexperienced. Sometimes, even the editor requires an author to create false framing.” Sadly, this is true.
All too often reviewers feel that it is their role to tell the authors of a paper how to structure hypotheses, rewrite hypotheses, and explain analysis results. Some reviewers, especially inexperienced ones, may do this because they feel that they are doing the author(s) a favor – they’re helping make the paper cleaner and more understandable. But the unintended consequence of this highly structured way of writing and presenting results is that it forces authors into a form of mild academic dishonesty in which they do not allow the exploratory part of the analytical process to be transparent.
Some journals have a much stronger ethos about hypothesis testing than others. AJS is looser and allows authors more freedom in this regard. But some social psych journals (like JPSP) have become extremely rigid in wanting to see hypotheses stated a priori and then tested systematically. I would love to see more journals encourage variability in reporting of results and allow for the possibility that many of our results were, in fact, unexpected. I would love it if more editors chastised reviewers who want to force authors into a highly formulaic style of hypothesizing and writing results. It simply doesn’t reflect how most of us do research.
Perhaps the anonymous author’s tale will ignite a much needed discussion about how we write about social scientific analysis.
Last year, Nicholas Christakis argued that the social sciences were stuck. Rather than fully embrace the massive tidal wave of theory and data from the biological and physical sciences, the social sciences are content to just redo the same analysis over and over. Christakis used the example of racial bias. How many social scientists would be truly shocked to find that people have racial biases? If we already know that (and we do, by the way), then why not move on to new problems?
Christakis was recently covered in the media for his views and for attending a conference that tries to push this idea. To further promote this view, I would like to introduce Christakis’ Query, which every researcher should ask:
Think about the major question that you are working on and what you think the answer is. Estimate the confidence in your answer. If you already know the answer with more than 50% confidence, then why are you working on it? Why not move on?
Try it out.