orgtheory.net

Archive for the ‘research’ Category

why do universities salivate over money-losing grants?

Happy new year. Guess what my New Year’s resolution is. To that end, a few quick thoughts on universities and the grant economy to dip a toe back in the water.

We all know that American universities (well, not only American universities) are increasingly hungry for grants. When state funding stagnates, and tuition revenues are limited by politics or discounting, universities look to their faculty to bring in money through grants. Although this may be a zero-sum game across universities (assuming total funding is fixed), it is unsurprising that administrations would intensify grant-seeking when faced with tight budgets.

Of course, it’s only unsurprising if grants actually make money for the university. But a variety of observers, from the critical to the self-interested, have argued that the indirect costs that many grants bring in – the part that pays not for the direct cost of research, but for overhead expenses like keeping the network running, the library open, and the heat and electricity on – don’t actually cover the full expense of conducting research.

Instead, they suggest that every grant the university brings in costs it another 9% or so in unreimbursed overhead. In addition, about 12% of total research spending consists of universities spending their own money on research. While some of this goes to support work unlikely to receive external funding (e.g. research in the humanities), I think it’s safe to assume that most of it is related to the search for external grants – it’s seed funding for projects with the potential for external funding, or bridge funding for lab faculty between grants. (These numbers come from the Council on Government Relations, a lobbying organization of research universities.)

If that’s the case, it means that when faculty bring in grants, even federal grants that come with an extra 50% or so to pay for overhead costs, it costs the university money. Money that could be spent on instruction, or facility maintenance, or even on research itself. So how can we make sense of the fact that universities are intensifying their search for grants, even as the numbers suggest that grants cost universities more than they gain them?
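To make that arithmetic concrete, here is a minimal back-of-the-envelope sketch in Python. Everything in it is illustrative: the grant size is hypothetical, and since the figures cited above don’t pin down the baseline, I’m assuming the ~9% unreimbursed share is measured against the grant’s direct costs.

    # Back-of-the-envelope version of the grant arithmetic described above.
    # All figures are illustrative; the 50% indirect rate and the ~9%
    # unreimbursed share come from the estimates cited in the text, and are
    # (by assumption) measured against direct costs.

    direct_costs = 1_000_000   # hypothetical grant: $1M in direct research costs
    indirect_rate = 0.50       # overhead rate the federal grant actually pays
    unreimbursed_rate = 0.09   # overhead the grant does not cover

    recovered = direct_costs * indirect_rate                       # $500,000 in
    incurred = direct_costs * (indirect_rate + unreimbursed_rate)  # $590,000 out

    print(f"Overhead recovered: ${recovered:,.0f}")
    print(f"Overhead incurred:  ${incurred:,.0f}")
    print(f"Net to university: -${incurred - recovered:,.0f}")     # ~$90,000 loss

On those assumptions, the university ends up $90,000 behind on a $1M grant – which is exactly the puzzle the rest of this post tries to explain.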

I can think of at least three reasons this might be the case:

1.  The numbers are wrong.

It is notoriously difficult to estimate the “real” indirect costs of research. How much of the library should your grant pay for? How much of the heat, if the grant is basically supporting a grad student who would be sitting in the same shared office with or without it? There are conventions here, but they are just that – conventions. And maybe universities have a better sense of the “real” costs, which might be lower than standard accounting would suggest. COGR has an interest in making research look expensive, so that government will be generous about covering indirect costs. And critics of the university (with whom I sympathize) have a different interest in highlighting the costs of research, since they see a heavy grants focus as coming at the expense of education and of the humanities and social sciences. (See e.g. this recent piece by Chris Newfield, which inspired the line of thought behind this post.)

Certainly the numbers are squishy, and the evidence that grant-seeking costs universities more than it gains them isn’t airtight. But I haven’t seen anyone make a strong case that universities are actually making money from indirect costs. So I’m skeptical that these numbers are out-and-out wrong, although open to better evidence.

2. It’s basically political and/or symbolic, not financial.

A second possibility is that the additional dollars aren’t really the point. The point is that universities exist in a status economy in which having a large research enterprise is integral to many forms of success, from attracting desirable faculty and students, to appearing in a positive light to politicians (more relevant for public than private universities), to attracting donations from those who want to give to an institution that is among the “best”. Or, in a slight variation, maybe the perceived political benefits of having a large grant apparatus – of being on the cutting edge of science, of being seen as economically valuable – are seen as outweighing any extra costs. After all, what’s an extra 10% per grant if it makes the difference between the state increasing or cutting your appropriations over the next decade? (Again, most relevant for publics.)

These dynamics are real, but they don’t explain the intensification of the search for grants in response to tight budgets, except insofar as tight budgets also intensify the status competition. But it really seems to me that administrators see grants as a direct financial solution, not an indirect one. So I think that symbolic politics is a piece of the puzzle, but not the only one.

3.  Not all dollars are created equal.

Different dollars have different values to different people. Academic scientists often like industry grants because they tend to be more flexible than government money. Administrators, on the other hand, don’t, since such grants typically don’t cover overhead expenses.

Perhaps something related is going on with the broader search for grants. Maybe, even if grants really do cost more than they bring in for universities, administrators don’t perceive the revenues and the expenses in parallel ways. After all, those indirect costs provide identifiable extra dollars the university wouldn’t have seen otherwise. But the “excess” expenses are sort of invisible. The university is going to pay for the heat and the library either way; even if you know the research infrastructure has to be supported, you might assume that the marginal overhead cost of an additional grant doesn’t make that much difference. (Maybe you’d even be right.) And people might not see some costs – like university seed funding for potentially fundable research – as an expense of grant-seeking, even if that’s why they exist.

I think this is probably a big part of the explanation. The extra revenues of grants are visible and salient; the extra costs are hidden and easy to discount. So, rightly or wrongly, administrators turn to grant-seeking in tight times despite the fact that it actually costs universities money.

There are some other possibilities I’m not considering here. For example, maybe this is about the interests of different specific groups within the organization – e.g. about competitions among deans, or between upper administration and trustees. But I think #2 and #3 capture a lot of what’s going on.

So, if you think this dynamic (the intensification of grant-seeking) is kind of dysfunctional, what do you do? Well, pointing out how much research really costs the university – loudly and repeatedly – is probably a good idea. Make those “extra” costs as visible and salient as the revenues. (Though it would be SO NICE if the numbers were better.)

But don’t discount #2 – even if any extra costs of grants are made clear, universities aren’t going to give up the search for them. Because while the money grants bring in matters, they also have value as status capital, and that outweighs any unreimbursed costs they incur. Grants may not quite cover those pesky infrastructure costs. But the legitimacy they collectively confer is, quite literally, priceless.

Written by epopp

January 4, 2016 at 1:54 pm

new book Handbook of Qualitative Organizational Research: Innovative Pathways and Methods (2015, Routledge) now available

At orgtheory, we’ve had on-going discussions about how to undertake research.  For example, I’ve shared my own take on dealing with the IRB, gaining access to organizations, undertaking ethnography, timing and pacing research, writing for wider audiences, and what ethnography is good for.  Guest blogger Ellen Berrey elaborated her thoughts on how to get access to organizations, and we’ve had at least three discussions about the challenges of anonymizing names and identities of persons and organizations, including guest blogger Victor Tan Chen’s post, guest blogger Ellen Berrey’s post, and Fabio’s most recent post here.

Looking for more viewpoints about how to undertake organizational research?  Preparing a research proposal?  Need a new guide for a methods or organizations class?  Rod Kramer and Kim Elsbach have co-edited the Handbook of Qualitative Organizational Research: Innovative Pathways and Methods (2015, Routledge).

[Cover image: Handbook of Qualitative Organizational Research]

In the introduction, Kramer and Elsbach describe the impetus for the volume:

There were several sources of inspiration that motivated this volume. First and foremost was a thoughtful and provocative article by Jean Bartunek, Sara Rynes, and Duane Ireland that appeared in the Academy of Management Journal in 2006. This article published a list of the 17 most interesting organizational papers published in the last 100 years. These papers were identified by Academy of Management Journal board members—all of whom are leading organizational scholars cognizant of  the best work being done in their respective areas. A total of 67 board members nominated 160 articles as exceptionally interesting; those articles that received two or more nominations were deemed the most interesting. Of these exceptional articles, 12 (71%) involved qualitative methods.

This result strongly mirrors our own experience as organizational researchers. Although both of us have used a variety of methods in our organizational research (ranging from experimental lab studies and surveys to computer-based, agent simulations), our favorite studies by far have been our qualitative studies (including those we have done together). One of the qualities we have come to most appreciate, even cherish, about qualitative research is the sense of discovery and the opportunity for genuine intellectual surprise. Rather than merely seeking to confirm a preordained hypothesis or “nail down” an extrapolation drawn from the extant literature, our inductive studies, we found, invariably opened up exciting, unexpected intellectual doors and pointed us toward fruitful empirical paths for further investigation. In short, if life is largely all about the journey rather than destination, as the adage asserts, we’ve found qualitative research most often gave us a road we wanted to follow.


Written by katherinechen

December 18, 2015 at 5:27 pm

stuff that doesn’t replicate

Here’s the list (so far):

Some people might want to hand-wave the problem away or jump to the conclusion that science is broken. There’s a more intuitive explanation – science is “brittle.” That is, once you get past some basic and important findings, you get to findings that are small in size, require many technical assumptions, or rely on very specific laboratory/data collection conditions.

There should be two responses. First, editors should reject submissions that might depend on “local conditions” or rest on very small effects, or send them to lower-tier journals. Second, other researchers should feel free to try to replicate research. This is appropriate work for early-career academics who need to learn how work is done. Of course, people who publish in top journals, or obtain famous results, should expect replication requests.


Written by fabiorojas

October 13, 2015 at 12:01 am

movements and inhabited institutions: the case of latino student groups

A key insight from research on student activism is that the college environment has a strong influence on how that activism expresses itself. We saw that in Amy Binder and Kate Wood’s study of conservative groups. Daisy Reyes has an article in Sociology of Education that explores this issue with Latino groups and links it to institutional theory:

To comply with ideals of multiculturalism and diversity, postsecondary institutions incorporate Latino students into distinct campus cultures. These cultures influence how students interact with one another, the university community at large, and communities outside of campus, ultimately shaping how students inhabit Latino politics. Drawing on data from 20 months of ethnographic fieldwork with six student organizations and 60 in-depth interviews, I compare Latino student organizations in a liberal arts college, a research university, and a regional public university. Building on inhabited institutional theory, I identify dimensions of campus cultures that work in interaction with students to produce three divergent forms of ethnic political expression: deliberative, divisive, and contentious. Inhabited institutionalism helps explain why Latino politics takes distinct forms in specific academic contexts and suggests that strong collegiate incorporation may paradoxically serve to suppress Latino student engagement in political activism outside the campus gates.

Read the entire article here. Recommended.


Written by fabiorojas

September 29, 2015 at 12:01 am

inside higher education discusses replication in psychology and sociology

Science just published a piece showing that only about a third of findings from major psychology journals could be replicated. That is, if you reran the experiments, only a third would produce statistically significant results. The details of the studies matter as well: the higher the original p-value, the less likely a result was to replicate, and “flashy” results were less likely to replicate.

Inside Higher Ed spoke to me and other sociologists about the replication issue in our discipline. A major issue is that there is no incentive to actually assess research, since it seems to be nearly impossible to publish replications and statistical criticisms in our major journals:

Recent research controversies in sociology also have brought replication concerns to the fore. Andrew Gelman, a professor of statistics and political science at Columbia University, for example, recently published a paper about the difficulty of pointing out possible statistical errors in a study published in the American Sociological Review. A field experiment at Stanford University suggested that only 15 of 53 authors contacted were able or willing to provide a replication package for their research. And the recent controversy over the star sociologist Alice Goffman, now an assistant professor at the University of Wisconsin at Madison, regarding the validity of her research studying youths in inner-city Philadelphia lingers — in part because she said she destroyed some of her research to protect her subjects.

Philip Cohen, a professor of sociology at the University of Maryland, recently wrote a personal blog post similar to Gelman’s, saying how hard it is to publish articles that question other research. (Cohen was trying to respond to Goffman’s work in the American Sociological Review.)

“Goffman included a survey with her ethnographic study, which in theory could have been replicable,” Cohen said via email. “If we could compare her research site to other populations by using her survey data, we could have learned something more about how common the problems and situations she discussed actually are. That would help evaluate the veracity of her research. But the survey was not reported in such a way as to permit a meaningful interpretation or replication. As a result, her research has much less reach or generalizability, because we don’t know how unique her experience was.”

Readers can judge whether Gelman’s or Cohen’s critiques are correct. But the broader issue is serious. Sociology journals simply aren’t publishing error corrections or replications, with the honorable exception of Sociological Science, which published a replication/critique of the Brooks/Manza (2006) ASR article. For now, debate on the technical merits of particular research seems to be the purview of blog posts and book reviews that are quickly forgotten. That’s not good.


Written by fabiorojas

August 31, 2015 at 12:01 am

sociologists need to be better at replication – a guest post by cristobal young

Cristobal Young is an assistant professor in Stanford’s Department of Sociology. He works on quantitative methods, stratification, and economic sociology. In this post, co-authored with Aaron Horvath, he reports on an attempt to obtain replication packages for 53 published sociological studies. Spoiler: we need to do better.

Do Sociologists Release Their Data and Code? Disappointing Results from a Field Experiment on Replication.


Replication packages – releasing the complete data and code for a published article – are a growing currency in 21st century social science, and for good reasons. Replication packages help to spread methodological innovations, facilitate understanding of methods, and show confidence in findings. Yet, we found that few sociologists are willing or able to share the exact details of their analysis.

We conducted a small field experiment as part of a graduate course in statistical analysis. Students selected sociological articles that they admired and wanted to learn from, and asked the authors for a replication package.

Out of the 53 sociologists contacted, only 15 of the authors (28 percent) provided a replication package. This is a missed opportunity for the learning and development of new sociologists, as well as an unfortunate marker of the state of open science within our field.

Some 19 percent of authors never replied to repeated requests, or first replied but never provided a package. More than half (56 percent) directly refused to release their data and code. Sometimes there were good reasons. Twelve authors (23 percent) cited legal or IRB limitations on their ability to share their data. But only one of these authors provided the statistical code to show how the confidential data were analyzed.

Why So Little Response?

A common reason for not releasing a replication package was that the author had lost the data – often, reportedly, to computer or hard-drive malfunctions. Many authors also said they were too busy or felt that providing a replication package would be too complicated. One author said they had never heard of a replication package. The solution here is simple: compiling a replication package should be part of a journal article’s final copy-editing and page-proofing process.

More troubling is that a few authors openly rejected the principle of replication, saying in effect, “read the paper and figure it out yourself.” One articulated a deep opposition, on the grounds that replication packages break down the “barriers to entry” that protect researchers from scrutiny and intellectual competition from others.

The Case for Higher Standards

Methodology sections of research articles are, by necessity, broad and abstract descriptions of their procedures. However, in most quantitative analyses, the exact methods and code are on the author’s computer. Readers should be able to download and run replication packages as easily as they can download and read published articles. The methodology section should not be a “barrier to entry,” but rather an on-ramp to an open and shared scholarly enterprise.
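As a concrete illustration of what “download and run” can mean, here is a minimal, hypothetical sketch of the code side of a replication package. Every file and variable name below is invented for the example; it simply shows how a released script plus released data can regenerate a published result.

    # run.py – hypothetical script shipped alongside a released data file
    # (data.csv); rerunning it should regenerate the paper's main table.
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("data.csv")  # the released, de-identified data

    # Refit the paper's (hypothetical) main model from the released data.
    model = smf.ols("outcome ~ treatment + control", data=df).fit()
    print(model.summary())  # output should match the published table

A package like this is what the “positive” quotes in Figure 1 below describe: with the data and code in hand, a student can redo the entire analysis in minutes.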

When authors released replication packages, it was enlightening for students to look “under the hood” of research they admired and see exactly how results were produced. Students finished the process with a deeper understanding of – and greater confidence in – the research. Replication packages also serve as a research accelerator: their transparency instills practical insight and confidence – bridging the gap between chalkboard statistics and actual cutting-edge research – and invites younger scholars to stand on the shoulders of success. As Gary King has emphasized, replications have become first publications for many students and have helped launch many careers – all while ramping up citations to the original articles.

In our small sample, little more than a quarter of sociologists released their data and code. Top journals in political science and economics now require on-line replication packages. Transparency is no less crucial in sociology for the accumulation of knowledge, methods, and capabilities among young scholars. Sociologists – and ultimately, sociology journals – should embrace replication packages as part of the lasting contribution of their research.

Table 1. Response to Replication Request

Response Frequency Percent
Yes:   Released data and code for paper 15 28%
No: Did not release 38 72%
Reasons for “No”
    IRB / legal / confidentiality issue 12 23%
    No response / no follow up 10 19%
    Don’t have data 6 11%
    Don’t have time / too complicated 6 11%
    Still using the data 2 4%
    ‘See the article and figure it out’ 2 4%
Total 53 100%

Note: For replication and transparency, a blinded copy of the data is available on-line. Each author’s identity is blinded, but the journal name, year of publication, and response code is available. Half of the requests addressed articles in the top three journals, and more than half were published in the last three years.

Figure 1. Illustrative Quotes from Student Correspondence with Authors

Positive:

  1. “Here is the data file and Stata .do file to reproduce [the] Tables….  Let me know if you have any questions.”
  2. “[Attached are] data and R code that does all regression models in the paper. Assuming that you know R, you could literally redo the entire paper in a few minutes.”

Negative:

  1. “While I applaud your efforts to replicate my research, the best guidance I can offer is that the details about the data and analysis strategies are in the paper.”
  2. “I don’t keep or produce ‘replication packages’… Data takes a significant amount of human capital and financial resources, and serves as a barrier-to-entry against other researchers… they can do it themselves.”


Written by fabiorojas

August 11, 2015 at 12:01 am

party in the street: response to econlog commenters

Last week, Bryan Caplan wrote two lengthy posts about Party in the Street (here and here). He focuses on a few issues: the differences between Republican and Democratic administrations on war policy and the exaggeration of differences by activists. Bryan also argues that the arguments typically made by peace activists aren’t those he would make. Rather than condemn specific politicians or make blanket statements about war, he focuses on the death of innocents and war’s unpredictability (e.g., it is hard to judge if wars work ex ante).

The commenters raised a number of questions and issues. Here are a few:

  • Jacob Geller asks whether the collapse of the peace movement is spurious and could be attributed to other factors (e.g., the economy). Answer: There are multiple ways to assess this claim – the movement began its slide pre-recession (true), partisans were more likely than non-partisans to disappear during the recession (true), and the movement did not revive post-recession (true – e.g., few Democrats have protested Obama’s war policies). Movements rise and fall for many reasons, but in this case, partisanship is almost certainly a factor.
  • Michael suggested that there was a Democratic war policy difference in that Al Gore would not have fought Iraq. One can’t establish anything with certainty using counterfactual history, but Frank Harvey suggested that President Gore would likely have fought Iraq, given the long-standing enmity and low-level armed conflict between Iraq and the Clinton administration (including Gore).
  • Also, a few people raised the issue of voting and whether the antiwar issue was salient for Democrats. A few comments – one is that in data about activists, Democrats tended to view Obama’s management of the war in better terms than non-partisans did. Another point is that opinions on the war affected vote choice in multiple elections. The issue, though, isn’t whether Democrats were motivated by their attitudes on the Iraq War. The issue is how that is linked to movement participation and how that changes over time, given electoral events. All evidence suggests that the Democratic Party and the antiwar movement dissociated over time, leading to the peace movement’s collapse.

Thanks for the comments!


Written by fabiorojas

July 22, 2015 at 12:01 am