Archive for the ‘research’ Category
A specific brand of high-protein chocolate milk improved the cognitive function of high school football players with concussions. At least that’s what a press release from the University of Maryland claimed a few weeks ago. It also quoted the superintendent of the Washington County Public Schools as saying, “Now that we understand the findings of this study, we are determined to provide Fifth Quarter Fresh [the milk brand] to all of our athletes.”
The problem is that the “study” was not only funded in part by the milk producer, but is unpublished, unavailable to the public and, based on the press release — all the info we’ve got — raises immediate methodological questions. Certainly there are no grounds for making claims about this milk in particular, since the control group was given no milk at all.
The summary also raises questions about the sample size. The total sample included 474 high school football players, both concussed and non-concussed. How many of them got concussions during the season? I would hope not enough to provide much statistical power — this NAS report suggests high schoolers suffer about 11 concussions per 10,000 football games and practices.
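A rough sketch of what that rate implies for this sample. The NAS figure is per athlete-exposure (a game or practice); the number of exposures per player per season below is my assumption, not a figure from the report:

```python
# Back-of-the-envelope estimate of concussions in the study sample.
# ASSUMPTION: exposures_per_season (games + practices per player) is a
# rough guess, not a sourced number; the NAS rate is ~11 per 10,000.
players = 474
rate_per_exposure = 11 / 10_000
exposures_per_season = 100  # hypothetical; adjust for a real schedule

expected_concussions = players * rate_per_exposure * exposures_per_season
print(f"expected concussions in the sample: {expected_concussions:.0f}")
```

Even under generous assumptions, that's a few dozen concussed players at most — a thin base for subgroup comparisons across many outcome measures.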
And even if the sample size is sufficient, it’s not clear that the results are meaningful. The press release suggests concussed athletes who drank the milk did significantly better on four of thirty-six possible measures — anyone want to take bets on the p-value cutoff?
Maryland put out the press release nearly four weeks ago. Since then there's been a slow build of attention, starting with a takedown by Health News Review on January 5, before the story was picked up by a handful of news outlets and, this weekend, by Vox. In the meantime, the university says in fairly vague terms that it's launched a review of the study, but the press release is still on the university website, and similarly questionable releases ("The magic formula for the ultimate sports recovery drink starts with cows, runs through the University of Maryland and ends with capitalism" — you can't make this stuff up!) are up as well.
Whoever at the university decided to put out this press release should face consequences, and I’m really glad there are journalists out there holding the university’s feet to the fire. But while the university certainly bears responsibility for the poor decision to go out there and shill for a sponsor in the name of science, it’s worth noting that this is only half of the story.
There’s a lot of talk in academia these days about the status of scientific knowledge — about replicability, bias, and bad incentives, and how much we know that “just ain’t so.” And there’s plenty of blame to go around.
But in our focus on universities' challenges in producing scientific knowledge, sometimes we underplay the role of another set of institutions: the media. Yes, there's a literature on science communication that looks at the media as an intermediary between science and the public. But a lot of it takes a cognitive angle on audience reception, and it's got a heavy bent toward controversial science, like climate change or fracking.
More attention to media as a field, though, with rapidly changing conditions of production, professional norms and pathways, and career incentives, could really shed some light on the dynamics of knowledge production more generally. It would be a mistake to look back to some idealized era in which unbiased but hard-hitting reporters left no stone unturned in their pursuit of the public interest. But the acceleration of the news cycle, the decline of journalism as a viable career, the impact of social media on news production, and the instant feedback on pageviews and clickthroughs all tend to reinforce a certain breathless attention to the latest overhyped university press release.
It’s not the best research that gets picked up, but the sexy, the counterintuitive, and the clickbait-ish. Female-named hurricanes kill more than male hurricanes. (No.) Talking to a gay canvasser makes people support gay marriage. (Really no.) Around the world, children in religious households are less altruistic than children of atheists. (No idea, but I have my doubts.)
This kind of coverage not only shapes what the public believes, but it shapes incentives in academia as well. After all, the University of Maryland is putting out these press releases because it perceives it will benefit, either from the perception it is having a public impact, or from the goodwill the attention generates with Fifth Quarter Fresh and other donors. Researchers, in turn, will be similarly incentivized to focus on the sexy topic, or at least the sexy framing of the ordinary topic. And none of this contributes to the cumulative production of knowledge that we are, in theory, still pursuing.
None of this is meant to shift the blame for the challenges faced by science from the academic ecosystem to the realm of media. But if you really want to understand why it’s so hard to make scientific institutions work, you can’t ignore the role of media in producing acceptance of knowledge, or the rapidity with which that role is changing.
Happy new year. Guess what my New Year’s resolution is. To that end, a few quick thoughts on universities and the grant economy to dip a toe back in the water.
We all know that American universities (well, not only American universities) are increasingly hungry for grants. When state funding stagnates, and tuition revenues are limited by politics or discounting, universities look to their faculty to bring in money through grants. Although this may be a zero-sum game across universities (assuming total funding is fixed), it is unsurprising that administrations would intensify grant-seeking when faced with tight budgets.
Of course, it’s only unsurprising if grants actually make money for the university. But a variety of observers, from the critical to the self-interested, have argued that the indirect costs that many grants bring in – the part that pays not for the direct cost of research, but for overhead expenses like keeping the network running, the library open, and the heat and electricity on – don’t actually cover the full expense of conducting research.
Instead, they suggest that every grant the university brings in costs it another 9% or so in unreimbursed overhead. In addition, about 12% of total research spending consists of universities spending their own money on research. While some of this goes to support work unlikely to receive external funding (e.g. research in the humanities), I think it’s safe to assume that most of it is related to the search for external grants – it’s seed funding for projects with the potential for external funding, or bridge funding for lab faculty between grants. (These numbers come from the Council on Government Relations, a lobbying organization of research universities.)
If that’s the case, it means that when faculty bring in grants, even federal grants that come with an extra 50% or so to pay for overhead costs, it costs the university money. Money that could be spent on instruction, or facility maintenance, or even on research itself. So how can we make sense of the fact that universities are intensifying their search for grants, even as the numbers suggest that grants cost universities more than they gain from them?
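As a hedged illustration, here is that arithmetic in one place. The dollar figures are hypothetical; whether the "9% or so" applies to the award total or only to direct costs is ambiguous in the sources, and I apply it to the award total here:

```python
# Illustrative arithmetic only, using the post's rough figures; the
# underlying accounting conventions are contested (see the COGR numbers).
direct = 100_000           # hypothetical direct research costs on a grant
indirect_rate = 0.50       # federal overhead rate applied to direct costs
unreimbursed_rate = 0.09   # post's estimate of unreimbursed overhead

award = direct * (1 + indirect_rate)         # what the university receives
indirect_recovered = direct * indirect_rate  # overhead the grant pays for
unreimbursed = award * unreimbursed_rate     # overhead the award leaves uncovered

print(f"total award:          ${award:,.0f}")
print(f"overhead recovered:   ${indirect_recovered:,.0f}")
print(f"unreimbursed overhead: ${unreimbursed:,.0f}")
```

On these numbers, a $150,000 award still leaves the university out of pocket by five figures — which is the puzzle the rest of this post tries to explain.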
I can think of at least three reasons this might be the case:
1. The numbers are wrong.
It is notoriously difficult to estimate the “real” indirect costs of research. How much of the library should your grant pay for? How much of the heat, if it’s basically supporting a grad student who would be sitting in the same shared office with or without the grant? There are conventions here, but they are just that – conventions. And maybe universities have a better sense of the “real” costs, which might be lower than standard accounting would suggest. COGR has an interest in making research look expensive, so that the government is generous about covering indirect costs. And critics of the university (with whom I sympathize) have a different interest in highlighting the costs of research, since they see a heavy grants focus as coming at the cost of education and of the humanities and social sciences. (See e.g. this recent piece by Chris Newfield, which inspired the line of thought behind this post.)
Certainly the numbers are squishy, and the evidence that grant-seeking costs universities more than it gains them isn’t airtight. But I haven’t seen anyone make a strong case that universities are actually making money from indirect costs. So I’m skeptical that these numbers are out-and-out wrong, although open to better evidence.
2. It’s basically political and/or symbolic, not financial.
A second possibility is that the additional dollars aren’t really the point. The point is that universities exist in a status economy in which having a large research enterprise is integral to many forms of success, from attracting desirable faculty and students, to appearing in a positive light to politicians (more relevant for public than private universities), to attracting donations from those who want to give to an institution that is among the “best”. Or, in a slight variation, maybe the perceived political benefits of having a large grant apparatus – of being on the cutting edge of science, of being seen as economically valuable – are seen as outweighing any extra costs. After all, what’s an extra 10% per grant if it makes the difference between the state increasing or cutting your appropriations over the next decade? (Again, most relevant for publics.)
These dynamics are real, but they don’t explain the intensification of the search for grants in response to tight budgets, except insofar as tight budgets also intensify the status competition. But it really seems to me that administrators see grants as a direct financial solution, not an indirect one. So I think that symbolic politics is a piece of the puzzle, but not the only one.
3. Not all dollars are created equal.
Different dollars have different values to different people. Academic scientists often like industry grants because they tend to be more flexible than government money. Administrators, on the other hand, don’t, since such grants typically don’t cover overhead expenses.
Perhaps something related is going on with the broader search for grants. Maybe, even if grants really do cost more than they bring in for universities, administrators don’t perceive the revenues and the expenses in parallel ways. After all, those indirect costs provide identifiable extra dollars the university wouldn’t have seen otherwise. But the “excess” expenses are sort of invisible. The university is going to pay for the heat and the library either way; even if you know the research infrastructure has to be supported, you might assume that the marginal overhead cost of an additional grant doesn’t make that much difference. (Maybe you’d even be right.) And people might not see some costs – like university seed funding for potentially fundable research – as an expense of grant-seeking, even if that’s why they exist.
I think this is probably a big part of the explanation. The extra revenues of grants are visible and salient; the extra costs are hidden and easy to discount. So, rightly or wrongly, administrators turn to grant-seeking in tight times despite the fact that it actually costs universities money.
There are some other possibilities I’m not considering here. For example, maybe this is about the interests of different specific groups within the organization – e.g. about competitions among deans, or between upper administration and trustees. But I think #2 and #3 capture a lot of what’s going on.
So, if you think this dynamic (the intensification of grant-seeking) is kind of dysfunctional, what do you do? Well, pointing out how much research really costs the university – loudly and repeatedly – is probably a good idea. Make those “extra” costs as visible and salient as the revenues. (Though it would be SO NICE if the numbers were better.)
But don’t discount #2 – even if any extra costs of grants are made clear, universities aren’t going to give up the search for them. Because while the money grants bring in matters, they also have value as status capital, and that outweighs any unreimbursed costs they incur. Grants may not quite cover those pesky infrastructure costs. But the legitimacy they collectively confer is, quite literally, priceless.
New book: Handbook of Qualitative Organizational Research: Innovative Pathways and Methods (2015, Routledge) now available
At orgtheory, we’ve had on-going discussions about how to undertake research. For example, I’ve shared my own take on dealing with the IRB, gaining access to organizations, undertaking ethnography, timing and pacing research, writing for wider audiences, and what is ethnography good for? Guest blogger Ellen Berrey elaborated her thoughts on how to get access to organizations, and we’ve had at least three discussions about the challenges of anonymizing names and identities of persons and organizations, including guest blogger Victor Tan Chen’s post, guest blogger Ellen Berrey’s post, and Fabio’s most recent post here.
Looking for more viewpoints about how to undertake organizational research? Preparing a research proposal? Need a new guide for a methods or organizations class? Rod Kramer and Kim Elsbach have co-edited the Handbook of Qualitative Organizational Research: Innovative Pathways and Methods (2015, Routledge).
In the introduction, Kramer and Elsbach describe the impetus for the volume:
There were several sources of inspiration that motivated this volume. First and foremost was a thoughtful and provocative article by Jean Bartunek, Sara Rynes, and Duane Ireland that appeared in the Academy of Management Journal in 2006. This article published a list of the 17 most interesting organizational papers published in the last 100 years. These papers were identified by Academy of Management Journal board members—all of whom are leading organizational scholars cognizant of the best work being done in their respective areas. A total of 67 board members nominated 160 articles as exceptionally interesting; those articles that received two or more nominations were deemed the most interesting. Of these exceptional articles, 12 (71%) involved qualitative methods.
This result strongly mirrors our own experience as organizational researchers. Although both of us have used a variety of methods in our organizational research (ranging from experimental lab studies and surveys to computer-based, agent simulations), our favorite studies by far have been our qualitative studies (including those we have done together). One of the qualities we have come to most appreciate, even cherish, about qualitative research is the sense of discovery and the opportunity for genuine intellectual surprise. Rather than merely seeking to confirm a preordained hypothesis or “nail down” an extrapolation drawn from the extant literature, our inductive studies, we found, invariably opened up exciting, unexpected intellectual doors and pointed us toward fruitful empirical paths for further investigation. In short, if life is largely all about the journey rather than destination, as the adage asserts, we’ve found qualitative research most often gave us a road we wanted to follow.
Here’s the list (so far):
Some people might want to hand-wave the problem away or jump to the conclusion that science is broken. There’s a more intuitive explanation – science is “brittle.” That is, once you get past some basic and important findings, you get to findings that are small in size, require many technical assumptions, or rely on very specific laboratory/data collection conditions.
There should be two responses. First, editors should reject submissions that depend on “local conditions” or report very small effects, or send them to lower-tier journals. Second, other researchers should feel free to try to replicate research. This is appropriate work for early-career academics who need to learn how work is done. Of course, people who publish in top journals, or obtain famous results, should expect replication requests.
Science just published a piece showing that only about a third of articles from major psychology journals can be replicated. That is, if you reran the experiments, only about a third would produce statistically significant results. The details of the studies matter as well: the higher the original p-value, the less likely the result was to replicate, and “flashy” results were less likely to replicate.
Inside Higher Ed spoke to me and other sociologists about the replication issue in our discipline. A major issue is that there is no incentive to actually assess research, since it seems to be nearly impossible to publish replications and statistical criticisms in our major journals:
Recent research controversies in sociology also have brought replication concerns to the fore. Andrew Gelman, a professor of statistics and political science at Columbia University, for example, recently published a paper about the difficulty of pointing out possible statistical errors in a study published in the American Sociological Review. A field experiment at Stanford University suggested that only 15 of 53 authors contacted were able or willing to provide a replication package for their research. And the recent controversy over the star sociologist Alice Goffman, now an assistant professor at the University of Wisconsin at Madison, regarding the validity of her research studying youths in inner-city Philadelphia lingers — in part because she said she destroyed some of her research to protect her subjects.
Philip Cohen, a professor of sociology at the University of Maryland, recently wrote a personal blog post similar to Gelman’s, saying how hard it is to publish articles that question other research. (Cohen was trying to respond to Goffman’s work in the American Sociological Review.)
“Goffman included a survey with her ethnographic study, which in theory could have been replicable,” Cohen said via email. “If we could compare her research site to other populations by using her survey data, we could have learned something more about how common the problems and situations she discussed actually are. That would help evaluate the veracity of her research. But the survey was not reported in such a way as to permit a meaningful interpretation or replication. As a result, her research has much less reach or generalizability, because we don’t know how unique her experience was.”
Readers can judge whether Gelman’s or Cohen’s critiques are correct. But the broader issue is serious. Sociology journals simply aren’t publishing error corrections or replications, with the honorable exception of Sociological Science, which published a replication/critique of the Brooks/Manza (2006) ASR article. For now, debate on the technical merits of particular research seems to be the purview of blog posts and book reviews that are quickly forgotten. That’s not good.
Cristobal Young is an assistant professor at Stanford’s Department of Sociology. He works on quantitative methods, stratification, and economic sociology. In this post co-authored with Aaron Horvath, he reports on the attempt to replicate 53 sociological studies. Spoiler: we need to do better.
Do Sociologists Release Their Data and Code? Disappointing Results from a Field Experiment on Replication.
Replication packages – releasing the complete data and code for a published article – are a growing currency in 21st century social science, and for good reasons. Replication packages help to spread methodological innovations, facilitate understanding of methods, and show confidence in findings. Yet, we found that few sociologists are willing or able to share the exact details of their analysis.
We conducted a small field experiment as part of a graduate course in statistical analysis. Students selected sociological articles that they admired and wanted to learn from, and asked the authors for a replication package.
Out of the 53 sociologists contacted, only 15 of the authors (28 percent) provided a replication package. This is a missed opportunity for the learning and development of new sociologists, as well as an unfortunate marker of the state of open science within our field.
Some 19 percent of authors never replied to repeated requests, or first replied but never provided a package. More than half (56 percent) directly refused to release their data and code. Sometimes there were good reasons. Twelve authors (23 percent) cited legal or IRB limitations on their ability to share their data. But only one of these authors provided the statistical code to show how the confidential data were analyzed.
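For the arithmetic-minded, a quick sanity check of the reported shares against the counts in Table 1 below (percentages in the post are rounded to the nearest point):

```python
# Recompute each reported percentage from the raw counts (n = 53 requests).
total = 53
counts = {
    "released data and code": 15,
    "did not release": 38,
    "IRB / legal / confidentiality": 12,
    "no response / no follow-up": 10,
    "don't have data": 6,
    "no time / too complicated": 6,
    "still using the data": 2,
    "'figure it out yourself'": 2,
}
for label, n in counts.items():
    print(f"{label:32s} {n:2d}  {100 * n / total:.0f}%")
```

The rounded shares match the table: 15/53 ≈ 28% released, 38/53 ≈ 72% did not.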
Why So Little Response?
A common reason for not releasing a replication package was that the author had lost the data – often due to reported computer or hard-drive malfunctions. As well, many authors said they were too busy or felt that providing a replication package would be too complicated. One author said they had never heard of a replication package. The solution here is simple: compiling a replication package should be part of a journal article’s final copy-editing and page-proofing process.
More troubling is that a few authors openly rejected the principle of replication, saying in effect, “read the paper and figure it out yourself.” One articulated a deep opposition, on the grounds that replication packages break down the “barriers to entry” that protect researchers from scrutiny and intellectual competition from others.
The Case for Higher Standards
Methodology sections of research articles are, by necessity, broad and abstract descriptions of their procedures. However, in most quantitative analyses, the exact methods and code are on the author’s computer. Readers should be able to download and run replication packages as easily as they can download and read published articles. The methodology section should not be a “barrier to entry,” but rather an on-ramp to an open and shared scholarly enterprise.
When authors released replication packages, it was enlightening for students to look “under the hood” on research they admired, and see exactly how results were produced. Students finished the process with deeper understanding of – and greater confidence in – the research. Replication packages also serve as a research accelerator: their transparency instills practical insight and confidence – bridging the gap between chalkboard statistics and actual cutting-edge research – and invites younger scholars to build on the shoulders of success. As Gary King has emphasized, replications have become first publications for many students, and helped launch many careers – all while ramping up citations to the original articles.
In our small sample, little more than a quarter of sociologists released their data and code. Top journals in political science and economics now require on-line replication packages. Transparency is no less crucial in sociology for the accumulation of knowledge, methods, and capabilities among young scholars. Sociologists – and ultimately, sociology journals – should embrace replication packages as part of the lasting contribution of their research.
Table 1. Response to Replication Request
| Response | N | % |
|---|---:|---:|
| Yes: Released data and code for paper | 15 | 28% |
| No: Did not release | 38 | 72% |
| *Reasons for “No”:* | | |
| IRB / legal / confidentiality issue | 12 | 23% |
| No response / no follow-up | 10 | 19% |
| Don’t have data | 6 | 11% |
| Don’t have time / too complicated | 6 | 11% |
| Still using the data | 2 | 4% |
| ‘See the article and figure it out’ | 2 | 4% |
Note: For replication and transparency, a blinded copy of the data is available on-line. Each author’s identity is blinded, but the journal name, year of publication, and response code is available. Half of the requests addressed articles in the top three journals, and more than half were published in the last three years.
Figure 1: Illustrative Quotes from Student Correspondence with Authors:
- “Here is the data file and Stata .do file to reproduce [the] Tables…. Let me know if you have any questions.”
- “[Attached are] data and R code that does all regression models in the paper. Assuming that you know R, you could literally redo the entire paper in a few minutes.”
- “While I applaud your efforts to replicate my research, the best guidance I can offer is that the details about the data and analysis strategies are in the paper.”
- “I don’t keep or produce ‘replication packages’… Data takes a significant amount of human capital and financial resources, and serves as a barrier-to-entry against other researchers… they can do it themselves.”