orgtheory.net

Archive for the ‘research’ Category

why do some states have higher mass incarceration than others? comment on pamela oliver’s blog

Over at Race, Politics, and Justice, Pamela Oliver asks why her home state of Wisconsin has such high rates of Black imprisonment in comparison to other states, even in times when rates are falling:

Wisconsin has stayed at the top of the pile in Black incarceration even though its Black incarceration rate has been declining. How can this be? The answer is that all the other states have been declining faster. By putting a scatter plot of state imprisonment rates on consistent axes, I've been able to produce a really cool animation effect. The data source is the National Corrections Reporting Program public "in custody" file. Rates are calculated on the entire population (all ages). States voluntarily participate in this data collection program and appear and disappear from the plot depending on whether they reported for the appropriate year. States are also eliminated if more than 10% of their inmates are recorded as having unknown race. You'll see, if you watch long enough, that the relative positions of most states stay the same, but the whole distribution starts moving downward (lower Black incarceration rates) and to the left (lower White incarceration rates) in the last few years. You may download both these images and explanatory material in PDF format using this link.

Interesting. This is a classic example of the "dog that didn't bark." What happened in other states that did not happen in Wisconsin? A few hypotheses: Wisconsin reflects particularly bad conditions in segregated places like Milwaukee; fixed effects of prosecutors – Wisconsin district attorneys are notoriously bad; police enforcement is unusually harsh. Add your hypotheses or explanations in the comments.
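
For readers who want to tinker: Oliver's description is detailed enough to reproduce the basic graphic. Below is a minimal Python/matplotlib sketch of the same idea, one frame per year on fixed axes so the downward drift shows up when the frames are flipped through. The file name and column names are placeholders of my own, not her actual NCRP extract.

import pandas as pd
import matplotlib.pyplot as plt

# Placeholder input: one row per state-year with imprisonment rates,
# plus the share of inmates with unknown race (column names are assumptions).
df = pd.read_csv("ncrp_state_rates.csv")
df = df[df["pct_race_unknown"] <= 10]  # drop state-years with >10% unknown race

# Fixed axis limits across all years produce the "animation" effect Oliver describes.
xmax = df["white_rate"].max() * 1.05
ymax = df["black_rate"].max() * 1.05

for year, grp in df.groupby("year"):
    fig, ax = plt.subplots()
    ax.scatter(grp["white_rate"], grp["black_rate"], s=10)
    for _, row in grp.iterrows():
        ax.annotate(row["state"], (row["white_rate"], row["black_rate"]), fontsize=7)
    ax.set_xlim(0, xmax)
    ax.set_ylim(0, ymax)
    ax.set_xlabel("White imprisonment rate")
    ax.set_ylabel("Black imprisonment rate")
    ax.set_title(f"State imprisonment rates by race, {year}")
    fig.savefig(f"rates_{year}.png")
    plt.close(fig)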

50+ chapters of grad skool advice goodness: Grad Skool Rulz ($2!!!!)/From Black Power/Party in the Street 

Written by fabiorojas

October 6, 2016 at 12:29 am

bad reporting on bad science

This Guardian piece about bad incentives in science was getting a lot of Twitter mileage yesterday. “Cut-throat academia leads to natural selection of bad science,” the headline screams.

The article is reporting on a new paper by Paul Smaldino and Richard McElreath, and features quotes from the authors like, “As long as the incentives are in place that reward publishing novel, surprising results, often and in high-visibility journals above other, more nuanced aspects of science, shoddy practices that maximise one’s ability to do so will run rampant.”

Well. Can’t disagree with that.

But when I clicked through to read the journal article, the case didn't seem nearly so strong. The article has two parts. The first is a review of review pieces published between 1962 and 2013 that examined the levels of statistical power reported in studies in a variety of academic fields. The second is a formal model of an evolutionary process through which incentives for publication quantity will drive the spread of low-quality methods (such as underpowered studies) that increase both productivity and the likelihood of false positives.

The formal model is kind of interesting, but just shows that the dynamics are plausible — something I (and everyone else in academia) was already pretty much convinced of. The headlines are really based on the first part of the paper, which purports to show that statistical power in the social and behavioral sciences hasn’t increased over the last fifty-plus years, despite repeated calls for it to do so.
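
For intuition, the logic of that evolutionary argument can be caricatured in a short simulation: labs that cut methodological corners run more studies, more of those studies produce publishable "positive" results (some of them false), and productive labs get imitated. The toy model below is my own illustration of that selection dynamic, not the authors' actual model, and every parameter in it is made up.

import random

def mean_rigor_after_selection(n_labs=100, generations=200, true_effect_rate=0.1):
    # Each lab has a "rigor" level in [0.1, 1.0]: higher rigor means higher power
    # per study but fewer studies per generation (each study takes more effort).
    rigor = [random.uniform(0.1, 1.0) for _ in range(n_labs)]
    for _ in range(generations):
        pubs = []
        for r in rigor:
            n_studies = round(10 * (1.1 - r))  # sloppier labs run more studies
            hits = 0
            for _ in range(n_studies):
                if random.random() < true_effect_rate:   # a real effect exists
                    hits += random.random() < r          # detected with probability ~ rigor
                else:                                    # no real effect
                    hits += random.random() < 0.05 * (1.5 - r)  # more false positives when sloppy
            pubs.append(hits)
        # Selection: labs are imitated in proportion to publication counts, with noise.
        rigor = random.choices(rigor, weights=[p + 0.01 for p in pubs], k=n_labs)
        rigor = [min(1.0, max(0.1, r + random.gauss(0, 0.02))) for r in rigor]
    return sum(rigor) / n_labs

print(round(mean_rigor_after_selection(), 2))  # average rigor drifts toward the low end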

Well, that part of the paper basically looks at all the papers that reviewed levels of statistical power in studies in a particular field, focusing especially on papers that reported small effect sizes. (The logic is that such small effects are not only most common in these fields, but also more likely to be false positives resulting from inadequate power.) There were 44 such reviews. The key point is that average reported statistical power has stayed stubbornly flat. The conclusion the authors draw is that bad methods are crowding out good ones, even though we know better, through some combination of poor incentives and selection that rewards researcher ignorance.
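
To give a sense of what "adequate power" would even mean for the small effects these reviews focus on, here is a quick sketch of my own using statsmodels; the specific numbers are illustrative, not drawn from the paper.

from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Power of a two-group comparison with 50 subjects per group and a "small" effect (d = 0.2).
power = analysis.solve_power(effect_size=0.2, nobs1=50, alpha=0.05)
print(round(power, 2))  # about 0.17: most true effects of this size would be missed

# Sample size per group needed to reach the conventional 0.8 power for the same effect.
n_needed = analysis.solve_power(effect_size=0.2, power=0.8, alpha=0.05)
print(round(n_needed))  # roughly 390 to 400 per group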

[Figure from the paper: average statistical power reported in each review, plotted over time; sociology's 1974 review is the positive outlier at 0.55.]

The problem is that the evidence presented in the paper is hardly strong support for this claim. This is not a random sample of papers in these fields, or anything like it. Nor is there other evidence to show that the reviewed papers are representative of papers in their fields more generally.

More damningly, though, the fields that are reviewed change rather dramatically over time. Nine of the first eleven studies (those before 1975) review papers from education or communications. The last eleven (those after 1995) include four from aviation, two from neuroscience, and one each from health psychology, software engineering, behavioral ecology, international business, and social and personality psychology. Why would we think that underpowering in the latter fields at all reflects what’s going on in the former fields in the last two decades? Maybe they’ve remained underpowered, maybe they haven’t. But statistical cultures across disciplines are wildly different. You just can’t generalize like that.

The news article goes on to paraphrase one of the authors as saying that “[s]ociology, economics, climate science and ecology” (in addition to psychology and biomedical science) are “other areas likely to be vulnerable to the propagation of bad practice.” But while these fields are singled out as particularly bad news, not one of the reviews covers the latter three fields (perhaps that’s why the phrasing is “other areas likely”?). And sociology, which had a single review in 1974, looks, ironically, surprisingly good — it’s that positive outlier in the graph above at 0.55. Guess that’s one benefit of using lots of secondary data and few experiments.

The killer is, I think the authors are pointing to a real and important problem here. I absolutely buy that the incentives are there to publish more — and equally important, cheaply — and that this undermines the quality of academic work. And I think that reviewing the reviews of statistical power, as this paper does, is worth doing, even if the fields being reviewed aren’t consistent over time. It’s also hard to untangle whether the authors actually said things that oversold the research or if the Guardian just reported it that way.

But at least in the way it’s covered here, this looks like a model of bad scientific practice, all right. Just not the kind of model that was intended.

[Edited: Smaldino points on Twitter to another paper that offers additional support for the claim that power hasn’t increased in psychology and cognitive neuroscience, at least.]

Written by epopp

September 22, 2016 at 12:28 pm

Appetite for Innovation: Creativity & Change at elBulli (To be published by Columbia University Press on July 12, 2016)

How is it possible for an organization to systematically enact changes in the larger system of which it is part? Using Ferran Adrià's iconic restaurant "elBulli" as an example of organizational creativity and radical innovation, Appetite for Innovation examines how Adrià's organization was able to systematically produce breakthroughs of knowledge within its field and, ultimately, to stabilize a new genre or paradigm in cuisine – what is often called the "experimental," "molecular," or "techno-emotional" culinary movement.

Recognized as the most influential restaurant in the world, elBulli has been at the forefront of the revolution that has inspired the gastronomic avant-garde worldwide. With a voracious appetite for innovation, year after year, Adrià and his team have broken through with new ingredients, combinations, culinary concepts and techniques that have transformed our way of understanding food and the development of creativity in haute cuisine.

Appetite for Innovation is an organizational study of the system of innovation behind Adrià's successful organization. It reveals key mechanisms that explain the organization's ability to continuously devise, implement, and legitimate innovative ideas within its field and beyond. Based on exclusive access to meetings, observations, and interviews with renowned professionals of the contemporary gastronomic field, the book reveals how a culture for change was developed within the organization, how new communities were attracted to the organization's work and helped to perpetuate its practice, and how the organization's and its leader's charisma and reputation were built and maintained over time. The book draws on examples from other fields, including art, science, music, theatre, and literature, to explore the research's potential to inform practices of innovation and creativity in multiple kinds of organizations and industries.

The research for Appetite for Innovation was conducted while Adrià's organization was undergoing its most profound transformation, from a restaurant to a research center for innovation, the "elBulli foundation." The book, therefore, takes advantage of this unique moment in time to retrace the story of a restaurant that became a legend and to explore the underlying factors that led to its reinvention in 2011 into a seemingly unparalleled organizational model.

Appetite for Innovation is primarily intended to reach and be used by academics and professionals in the fields of innovation and organization studies. It is also directed towards a non-specialist readership interested in the topics of innovation and creativity in general. In order to engage a wider audience and show the fascinating world of chefs and the inner workings of high-end restaurants, the book is filled with photographs of dishes, creative processes, and team dynamics within haute cuisine kitchens and culinary labs. It also includes numerous diagrams and graphs that illustrate the practices enacted by the elBulli organization to sustain innovation, and the networks of relationships that it developed over time. Each chapter opens with an iconic recipe created by elBulli as a way of illustrating the book's central arguments and the key turning points that enabled the organization to gain a strategic position within its field and become successful.

For a detailed description of the book, please go to: http://cup.columbia.edu/book/appetite-for-innovation/9780231176781

Also, Forbes.com included Appetite for Innovation in its list of 17 books recommended for “creative leaders” to read this summer:  http://www.forbes.com/sites/berlinschoolofcreativeleadership/2016/05/15/17-summer-books-creative-leaders-can-read-at-the-beach/#7ac430985cef

 


Written by M. Pilar Opazo

June 8, 2016 at 4:46 pm

tying our own noose with data? higher ed edition

I wanted to start this post with a dramatic question about whether some knowledge is too dangerous to pursue. The H-bomb is probably the archetypal example of this dilemma, and brings to mind Oppenheimer's quotation of the Bhagavad Gita upon the detonation of Trinity: "Now I am become Death, the destroyer of worlds."

But really, that’s way too melodramatic for the example I have in mind, which is much more mundane. Much more bureaucratic. It’s less about knowledge that is too dangerous to pursue and more about blindness to the unanticipated — but not unanticipatable — consequences of some kinds of knowledge.

[Photo. Caption: Maybe this wasn't such a good idea.]

The knowledge I have in mind is the student-unit record. See? I told you it was boring.

The student-unit record is simply a government record that tracks a specific student across multiple educational institutions and into the workforce. Right now, this does not exist for all college students.

There are records of students who apply for federal aid, and those can be tied to tax data down the road. This is what the Department of Education’s College Scorecard is based on: earnings 6-10 years after entry into a particular college. But this leaves out the 30% of students who don’t receive federal aid.

There are states with unit-record systems. Virginia’s is particularly strong: it follows students from Virginia high schools through enrollment in any not-for-profit Virginia college and then into the workforce as reflected in unemployment insurance records. But it loses students who enter or leave Virginia, which is presumably a considerable number.

But there’s currently no comprehensive federal student-unit record system. In fact at the moment creating one is actually illegal. It was banned in an amendment to the Higher Education Act reauthorization in 2008, largely because the higher ed lobby hates the idea.

Having student-unit records available would open up all kinds of research possibilities. It would help us see the payoffs not just to college in general, but to specific colleges, or specific majors. It would help us disentangle the effects of the multiple institutions attended by the typical college student. It would allow us to think more precisely about when student loans do, and don't, pay off. Academics and policy wonks have argued for it on just these grounds.

In fact, basically every social scientist I know would love to see student-unit records become available. And I get it. I really do. I’d like to know the answers to those questions, too.

But I’m really leery of student-unit records. Maybe not quite enough to stand up and say, This is a terrible idea and I totally oppose it. Because I also see the potential benefits. But leery enough to want to point out the consequences that seem likely to follow a student-unit record system. Because I think some of the same people who really love the idea of having this data available would be less enthused about the kind of world it might help, in some marginal way, create.

So, with that as background, here are three things I’d like to see data enthusiasts really think about before jumping on this bandwagon.

First, it is a short path from data to governance. For researchers, the point of student-level data is to provide new insights into what’s working and what isn’t: to better understand what the effects of higher education, and the financial aid that makes it possible, actually are.

But for policy types, the main point is accountability. The main point of collecting student-level data is to force colleges to take responsibility for the eventual labor market outcomes of their students.

Sometimes, that’s phrased more neutrally as “transparency”. But then it’s quickly tied to proposals to “directly tie financial aid availability to institutional performance” and called “an essential tool in quality assurance.”

Now, I am not suggesting that higher education institutions should be free to just take all the federal money they can get and do whatever the heck they want with it. But I am very skeptical that the net effect of accountability schemes is generally positive. They add bureaucracy, they create new measures to game, and the behaviors they actually encourage tend to be remote from the behaviors they are intended to encourage.

Could there be some positive value in cutting off aid to institutions with truly terrible outcomes? Absolutely. But what makes us think that we’ll end up with that system, versus, say, one that incentivizes schools to maximize students’ earnings, with all the bad behavior that might entail? Anyone who seriously thinks that we would use more comprehensive data to actually improve governance of higher ed should take a long hard look at what’s going on in the UK these days.

Second, student-unit records will intensify our already strong focus on the economic return to college, and further devalue other benefits. Education does many things for people. Helping them earn more money is an important one of those things. It is not, however, the only one.

Education expands people's minds. It gives them tools for taking advantage of opportunities that present themselves. It gives them options. It helps them to find work they find meaningful, in workplaces where they are treated with respect. And yes, maybe it's selection effects — or maybe it's just because they're richer — but college graduates are happier and healthier than nongraduates.

The thing is, all these noneconomic benefits are difficult to measure. We have no administrative data that tracks people’s happiness, or their health, let alone whether higher education has expanded their internal life.

What we’ve got is the big two: death and taxes. And while it might be nice to know whether today’s 30-year-old graduates are outliving their nongraduate peers in 50 years, in reality it’s tax data we’ll focus on. What’s the economic return to college, by type of student, by institution, by major? And that will drive the conversation even more than it already does. Which to my mind is already too much.

Third, social scientists are occupationally prone to overestimate the practical benefit of more data. Are there things we would learn from student-unit records that we don’t know? Of course. There are all kinds of natural experiments, regression discontinuities, and instrumental variables that could be exploited, particularly around financial aid questions. And it would be great to be able to distinguish between the effects of “college” and the effects of that major at this college.

But we all realize that a lot of the benefit of “college” isn’t a treatment effect. It’s either selection — you were a better student going in, or from a better-off family — or signal — you’re the kind of person who can make it through college; what you did there is really secondary.

Proposals to use income data to understand the effects of college assume that we can adjust for the selection effects, at least, through some kind of value-added model, for example. But this is pretty sketchy. I mean, it might provide some insights for us to think about. But using it as a basis for concluding that Caltech, Colgate, MIT, and Rose-Hulman Institute of Technology (at the top of Brookings' list) provide the most value — versus that they have select students who are distinctive in ways that aren't reflected by adjusting for race, gender, age, financial aid status, and SAT scores — is a little ridiculous.
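
For concreteness, the value-added adjustment being criticized amounts to something like the regression below, where institution coefficients are read as "value" once observable student characteristics are controlled. This is my own toy version with simulated data and made-up variable names, not the Brookings model.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 5000
df = pd.DataFrame({
    "institution": rng.choice(["A", "B", "C", "D"], n),
    "sat": rng.normal(1100, 150, n),
    "age": rng.integers(17, 25, n),
    "female": rng.integers(0, 2, n),
    "pell": rng.integers(0, 2, n),
})
# Simulated earnings driven by student traits, not by institution.
df["log_earnings"] = 8 + 0.001 * df["sat"] - 0.05 * df["pell"] + rng.normal(0, 0.5, n)

# "Value-added": institution fixed effects after adjusting for observables.
model = smf.ols("log_earnings ~ C(institution) + sat + age + female + pell", data=df).fit()
print(model.params.filter(like="institution"))
# The institution coefficients only measure causal "value" if selection on
# unobservables (ambition, family resources, etc.) is negligible, which is the sketchy part.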

So, yeah. I want more information about the real impact of college, too. But I just don’t see the evidence out there that having more information is going to lead to policy improvements.

If there weren’t such clear potential negative consequences, I’d say sure, try, it’s worth learning more even if we can’t figure out how to use it effectively. But in a case where there are very clear paths to using this kind of information in ways that are detrimental to higher education, I’d like to see a little more careful thinking about the real likely impacts of student-unit records versus the ones in our technocratic fantasies.

Written by epopp

June 3, 2016 at 2:06 pm

design-focused review: a guest post by samuel r. lucas

Samuel R. Lucas is professor of sociology at the University of California, Berkeley. He works on education, social mobility, and research methods. This guest post proposes a reform of the journal review process.

On-going discussion about the journal publication process is laudable. I support many of the changes that have been suggested, such as the proposal to move to triple-blind review, and implemented, such as the rise of new journals that reject "dictatorial revi"–oops, I mean "developmental review." I suggest, however, that part of the problem is that reviewers are encouraged to weigh in on anything–literally anything! I've reviewed papers and later received others' reviews only to find a reviewer ignored almost all of the paper, weighing in on such issues as punctuation and choice of abbreviations for some technical terms. Although such off-point reviews are rare, they indicate that reviewers perceive it as legitimate to weigh in on anything and everything. But a system allowing unlimited bases of review is part of the problem with peer review, for it shifts too much power to reviewers while at the same time providing insufficient guidance on what will be helpful in peer review. I contend that we need to dispense with our kitchen-sink reviewing system by removing from reviewer consideration two aspects of papers: framing and findings.

Framing is a matter of taste and, as there is no accounting for taste, framing offers fertile ground for years of delay. Framing is an easy way to hold a paper hostage, because most solid papers could be framed in any one of several ways, and often multiple frames are equally valid. Authors should be allowed to frame their work as they see fit, not be forced to alter the frame because a reviewer reads the paper differently than the author. A reviewer who feels a paper should be framed differently should wait for its publication and then submit a paper that notes that the paper addressed Z but missed its connection to Q. Such an approach would make any worthwhile debate on framing public while freeing authors to place their ideas into the dialogue as well.

As for findings, peer review should be built on the following premise: if you accept the methods, then you accept the findings enough for the paper to enter the peer-reviewed literature. Thus, reviewers should assess whether the paper's (statistical, experimental, qualitative) research design can answer the paper's research question, but not the findings produced by the solid research design. Allowing reviewers to evaluate findings allows reviewers to (perhaps inadvertently) scrutinize papers differently depending on the findings. To prevent such possibilities, journals should allow authors to request a findings-embargoed review, for which the journal would remove the findings section of the paper, as well as the findings from (1) the abstract and (2) the discussion/conclusion section, before delivering the paper for review. As some reviewers may regard reading soon-to-be-published work early as a benefit of reviewing, reviewers could be sent full manuscripts if the paper is accepted for publication.

A review system in which reviewers do not review framing and findings is a design-focused review system. Once a paper passes a design-focused review, editors can conduct an in-house assessment to ensure that findings are accurately conveyed and the framing is coherent. The editors, unlike reviewers, see the population of submissions, and thus, unlike reviewers, are well-placed to fairly and consistently assess any other issues. Editors will be even better positioned to make such calls if they make them only for the papers reviewers have determined satisfy the basic criterion of having a design solid enough to answer the question the paper poses.

The current kitchen-sink review system has become increasingly time-consuming and perhaps capricious, hardly positive features for effective peer review. If findings were embargoed and reviewers were discouraged from treating their preferred frame as essential to a quality paper, review times could be chopped dramatically and revise and resubmit processes would be focused on solidifying design. As a result, design-focused review could lower our collective workload by reducing the number of taste-driven rounds of review we experience as authors and reviewers, while simultaneously reducing authors’ potentially paralyzing concern that mere matters of taste will block their research from timely publication. Design-focused review may thus make peer review work better for everyone.

50+ chapters of grad skool advice goodness: Grad Skool Rulz ($2!!!!)/From Black Power/Party in the Street

Written by fabiorojas

April 28, 2016 at 12:02 am

how scientists can help us avoid the next flint

In a story full of neglect and willful ignorance, there are a few heroes. One is Mona Hanna-Attisha, the Flint pediatrician and Michigan State professor who raised the alarm with data on kids’ blood-lead levels from the local hospital. Another is Marc Edwards, the Virginia Tech environmental engineer who took on the Michigan Department of Environmental Quality after a Flint resident sent him a lead-rich water sample for testing.

Hanna-Attisha and Edwards provide shining examples of how academics can use science to hold the powers-that-be accountable and make meaningful change.

Taking on the status quo is hard. But as Edwards discusses in the Chronicle, it's becoming ever harder to do that from within universities:

I am very concerned about the culture of academia in this country and the perverse incentives that are given to young faculty. The pressures to get funding are just extraordinary. We’re all on this hedonistic treadmill — pursuing funding, pursuing fame, pursuing h-index — and the idea of science as a public good is being lost….What faculty person out there is going to take on their state, the Michigan Department of Environmental Quality, and the U.S. Environmental Protection Agency?…When was the last time you heard anyone in academia publicly criticize a funding agency, no matter how outrageous their behavior? We just don’t do these things….Everyone’s invested in just cranking out more crap papers.

When faculty defend academic freedom, tenure is often the focus. And certainly tenure provides one kind of protection for scientists like Hanna-Attisha (though she doesn’t yet have it) or Edwards who want to piss off the powerful.

But as this interview — and you should really read the whole thing — makes clear, tenure isn’t the only element of the academic ecosystem that allows people to speak out. Scientists can’t do their work without research funding, or access to data. When funders have interests — whether directly economic, as when oil and gas companies fund research on the environmental impacts of fracking, or more organizational, as when environmental agencies just don’t want to rock the boat — that affects what scientists can do.

So in addition to tenure, a funding ecosystem that includes multiple potential sources and that excludes the most egregiously self-interested will encourage independent science.

But beyond that, we need to defend strong professional cultures. Hanna-Attisha emphasizes how the values of medicine both motivated her (“[T]his is what matters. This is what we do … This is why we’re here”) and prompted her boss’s support (“Kids’ health comes first”), despite the “politically messy situation” that might have encouraged the hospital’s silence. Edwards lectures his colleagues about “their obligation as civil engineers to protect the public” and says, “I didn’t get in this field to stand by and let science be used to poison little kids.”

Intense economic pressures, though, make it hard to protect this kind of idealism. As market and financial logics come to dominate institutions like hospitals and universities, professional values gradually erode. It takes a concerted effort to defend them when everything else encourages you to keep your head down and leave well enough alone.

Promoting academic independence isn’t without its downsides. Scientists can become solipsistic, valuing internal status over real-world impact and complacently expecting government support as their due. The balance between preserving a robust and independent academic sector and ensuring scientists remain accountable to the public is a delicate one.

But if I have to choose between two risks—that science might be a bit insular and too focused on internal incentives, or that the only supporters of science have a one-sided interest in how the results turn out—I’ll take the first one every time.

Written by epopp

February 12, 2016 at 4:54 pm

that chocolate milk study: can we blame the media?

A specific brand of high-protein chocolate milk improved the cognitive function of high school football players with concussions. At least that’s what a press release from the University of Maryland claimed a few weeks ago. It also quoted the superintendent of the Washington County Public Schools as saying, “Now that we understand the findings of this study, we are determined to provide Fifth Quarter Fresh [the milk brand] to all of our athletes.”

The problem is that the “study” was not only funded in part by the milk producer, but is unpublished, unavailable to the public and, based on the press release — all the info we’ve got — raises immediate methodological questions. Certainly there are no grounds for making claims about this milk in particular, since the control group was given no milk at all.

The summary also raises questions about the sample size. The total sample included 474 high school football players, both concussed and non-concussed. How many of them got concussions during one season? I would hope not enough to provide statistical power — this NAS report suggests high schoolers get 11 concussions per 10,000 football games and practices.

And even if the sample size is sufficient, it’s not clear that the results are meaningful. The press release suggests concussed athletes who drank the milk did significantly better on four of thirty-six possible measures — anyone want to take bets on the p-value cutoff?
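
A quick back-of-envelope check of that suspicion: if none of the thirty-six measures were truly affected and each were tested independently at p < .05 (my assumption, since the release gives no details), four or more "significant" results would still turn up by chance about one time in ten.

from math import comb

n_tests, alpha = 36, 0.05
# Probability of 4 or more false positives among 36 independent tests with no true effects.
p_at_least_4 = sum(
    comb(n_tests, k) * alpha**k * (1 - alpha) ** (n_tests - k) for k in range(4, n_tests + 1)
)
print(round(p_at_least_4, 3))  # ~0.10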

Maryland put out the press release nearly four weeks ago. Since then there’s been a slow build of attention, starting with a takedown by Health News Review on January 5, before the story was picked up by a handful of news outlets and, this weekend, by Vox. In the meanwhile, the university says in fairly vague terms that it’s launched a review of the study, but the press release is still on the university website, and similarly questionable releases (“The magic formula for the ultimate sports recovery drink starts with cows, runs through the University of Maryland and ends with capitalism” — you can’t make this stuff up!) are up as well.

Whoever at the university decided to put out this press release should face consequences, and I’m really glad there are journalists out there holding the university’s feet to the fire. But while the university certainly bears responsibility for the poor decision to go out there and shill for a sponsor in the name of science, it’s worth noting that this is only half of the story.

There’s a lot of talk in academia these days about the status of scientific knowledge — about replicability, bias, and bad incentives, and how much we know that “just ain’t so.” And there’s plenty of blame to go around.

But in our focus on universities' challenges in producing scientific knowledge, sometimes we underplay the role of another set of institutions: the media. Yes, there's a literature on science communication that looks at the media as an intermediary between science and the public. But a lot of it takes a cognitive angle on audience reception, and it's got a heavy bent toward controversial science, like climate change or fracking.

More attention to media as a field, though, with rapidly changing conditions of production, professional norms and pathways, and career incentives, could really shed some light on the dynamics of knowledge production more generally. It would be a mistake to look back to some idealized era in which unbiased but hard-hitting reporters left no stone unturned in their pursuit of the public interest. But the acceleration of the news cycle, the decline of journalism as a viable career, the impact of social media on news production, and the instant feedback on pageviews and clickthroughs all tend to reinforce a certain breathless attention to the latest overhyped university press release.

It’s not the best research that gets picked up, but the sexy, the counterintuitive, and the clickbait-ish. Female-named hurricanes kill more than male hurricanes. (No.) Talking to a gay canvasser makes people support gay marriage. (Really no.) Around the world, children in religious households are less altruistic than children of atheists. (No idea, but I have my doubts.)

This kind of coverage not only shapes what the public believes, but it shapes incentives in academia as well. After all, the University of Maryland is putting out these press releases because it perceives it will benefit, either from the perception it is having a public impact, or from the goodwill the attention generates with Fifth Quarter Fresh and other donors. Researchers, in turn, will be similarly incentivized to focus on the sexy topic, or at least the sexy framing of the ordinary topic. And none of this contributes to the cumulative production of knowledge that we are, in theory, still pursuing.

None of this is meant to shift the blame for the challenges faced by science from the academic ecosystem to the realm of media. But if you really want to understand why it’s so hard to make scientific institutions work, you can’t ignore the role of media in producing acceptance of knowledge, or the rapidity with which that role is changing.

After all, if academics themselves can’t resist the urge to favor the counterintuitive over the mundane, we can hardly blame journalists for doing the same.

Written by epopp

January 18, 2016 at 1:23 pm