Forrest Briscoe and Abhinav Gupta have a new article “Social Activism in and Around Organizations” in The Academy of Management Annals that reviews recent work on movements inside organizations. The abstract:
Organizations are frequent targets for social activists aiming to influence society by first altering organizational policies and practices. Reflecting a steady rise in research on this topic, we review recent literature and advance an insider-outsider framework to help explicate the diverse mechanisms and pathways involved. Our framework distinguishes between different types of activists based on their relationship with targeted organizations. For example, “insider” activists who are employees of the target organization have certain advantages and disadvantages when compared with “outsider” activists who are members of independent social movement organizations. We also distinguish between the direct and indirect (or spillover) effects of social activism. Much research has focused on the direct effects of activism on targeted organizations, but often the effects on non-targeted organizations matter more for activists’ goals of achieving widespread change. Drawing on this framework, we identify and discuss eight specific areas that are in need of further scholarly attention.
It’s really a wonderful article that gets into the subtleties of this area of work. Highly recommended!
This Guardian piece about bad incentives in science was getting a lot of Twitter mileage yesterday. “Cut-throat academia leads to natural selection of bad science,” the headline screams.
The article is reporting on a new paper by Paul Smaldino and Richard McElreath, and features quotes from the authors like, “As long as the incentives are in place that reward publishing novel, surprising results, often and in high-visibility journals above other, more nuanced aspects of science, shoddy practices that maximise one’s ability to do so will run rampant.”
Well. Can’t disagree with that.
But when I clicked through to read the journal article, the case didn’t seem nearly so strong. The article has two parts. The first is a review of review pieces published between 1962 and 2013 that examined the levels of statistical power reported in studies in a variety of academic fields. The second is a formal model of an evolutionary process through which incentives for publication quantity will drive the spread of low-quality methods (such as underpowered studies) that increase both productivity and the likelihood of false positives.
The formal model is kind of interesting, but just shows that the dynamics are plausible — something I (and everyone else in academia) was already pretty much convinced of. The headlines are really based on the first part of the paper, which purports to show that statistical power in the social and behavioral sciences hasn’t increased over the last fifty-plus years, despite repeated calls for it to do so.
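To see the flavor of that evolutionary argument, here is a toy replicator-style simulation of my own devising — not the Smaldino–McElreath model itself, and every parameter in it is invented for illustration. Labs that skimp on power run more studies, and a positive result is publishable whether or not it is true, so shoddy labs rack up more publications and get imitated more often.

```python
import random

def simulate(generations=1000, n_labs=100):
    """Toy selection dynamic: labs vary in methodological effort
    (statistical power). Low-power labs run more, cheaper studies,
    and positive results are publishable either way, so they
    accumulate more publications. Each generation, one random lab
    imitates the methods of a publication-weighted role model."""
    random.seed(1)  # fixed seed so the illustration is reproducible
    powers = [random.uniform(0.1, 0.9) for _ in range(n_labs)]
    start = sum(powers) / n_labs
    for _ in range(generations):
        payoffs = []
        for p in powers:
            studies = 1.0 / p            # low power => more studies per year
            hit_rate = 0.1 * p + 0.05    # true positives plus false positives
            payoffs.append(studies * hit_rate)
        imitator = random.randrange(n_labs)
        model = random.choices(range(n_labs), weights=payoffs)[0]
        powers[imitator] = powers[model]
    return start, sum(powers) / n_labs

start, end = simulate()
print(f"mean power: {start:.2f} -> {end:.2f}")
```

Because prolific-but-shoddy methods earn more payoff-weighted imitation, mean power drifts downward even though no lab ever deliberately chooses bad science — which is the dynamic the model is meant to make plausible.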
Well, that part of the paper basically looks at all the papers that reviewed levels of statistical power in studies in a particular field, focusing especially on papers that reported small effect sizes. (The logic is that such small effects are not only most common in these fields, but also more likely to be false positives resulting from inadequate power.) There were 44 such reviews. The key point is that average reported statistical power has stayed stubbornly flat. The conclusion the authors draw is that bad methods are crowding out good ones, even though we know better, through some combination of poor incentives and selection that rewards researcher ignorance.
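To see why small effects are where underpowering bites, here is a quick back-of-the-envelope power calculation. This is a standard normal-approximation formula, not anything computed in the paper; the sample size of 30 per group is my own illustrative choice.

```python
from math import erf, sqrt

def normal_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def two_sample_power(d, n_per_group):
    """Approximate power of a two-sided, alpha = 0.05 two-sample test
    for standardized effect size d (normal approximation)."""
    z_crit = 1.96                       # critical value for alpha = 0.05
    shift = d * sqrt(n_per_group / 2)   # noncentrality of the test statistic
    return (1 - normal_cdf(z_crit - shift)) + normal_cdf(-z_crit - shift)

# A "small" effect (d = 0.2) with 30 subjects per group:
print(round(two_sample_power(0.2, 30), 2))  # prints 0.12
```

With power that low, the large majority of true small effects go undetected, and a worrying share of the “significant” small-effect findings that do get published will be false positives.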
The problem is that the evidence presented in the paper is hardly strong support for this claim. This is not a random sample of papers in these fields, or anything like it. Nor is there other evidence to show that the reviewed papers are representative of papers in their fields more generally.
More damningly, though, the fields that are reviewed change rather dramatically over time. Nine of the first eleven studies (those before 1975) review papers from education or communications. The last eleven (those after 1995) include four from aviation, two from neuroscience, and one each from health psychology, software engineering, behavioral ecology, international business, and social and personality psychology. Why would we think that underpowering in the latter fields at all reflects what’s going on in the former fields in the last two decades? Maybe they’ve remained underpowered, maybe they haven’t. But statistical cultures across disciplines are wildly different. You just can’t generalize like that.
The news article goes on to paraphrase one of the authors as saying that “[s]ociology, economics, climate science and ecology” (in addition to psychology and biomedical science) are “other areas likely to be vulnerable to the propagation of bad practice.” But while these fields are singled out as particularly bad news, not one of the reviews covers the latter three fields (perhaps that’s why the phrasing is “other areas likely”?). And sociology, which had a single review in 1974, looks, ironically, surprisingly good — it’s that positive outlier in the graph above at 0.55. Guess that’s one benefit of using lots of secondary data and few experiments.
The killer is, I think the authors are pointing to a real and important problem here. I absolutely buy that the incentives are there to publish more — and equally important, cheaply — and that this undermines the quality of academic work. And I think that reviewing the reviews of statistical power, as this paper does, is worth doing, even if the fields being reviewed aren’t consistent over time. It’s also hard to untangle whether the authors actually said things that oversold the research or whether the Guardian just reported it that way.
But at least in the way it’s covered here, this looks like a model of bad scientific practice, all right. Just not the kind of model that was intended.
[Edited: Smaldino points on Twitter to another paper that offers additional support for the claim that power hasn’t increased in psychology and cognitive neuroscience, at least.]
One of the most striking arguments of Gary Becker’s theory of discrimination is that racial discrimination carries a cost. If you hire people based on personal taste rather than job skills, your competitors can hire these better workers and you operate at a disadvantage. I think the strong version of the argument isn’t right. Markets do not instantly weed out discriminators. But the weak version has a lot of merit. If you truly avoid workers based on race or gender, you are giving away a huge advantage to the competition.
Well, turns out that Becker was right, at least in one data set. Devah Pager has a new paper in Sociological Science showing that discrimination is indeed associated with lower firm performance:
Economic theory has long maintained that employers pay a price for engaging in racial discrimination. According to Gary Becker’s seminal work on this topic and the rich literature that followed, racial preferences unrelated to productivity are costly and, in a competitive market, should drive discriminatory employers out of business. Though a dominant theoretical proposition in the field of economics, this argument has never before been subjected to direct empirical scrutiny. This research pairs an experimental audit study of racial discrimination in employment with an employer database capturing information on establishment survival, examining the relationship between observed discrimination and firm longevity. Results suggest that employers who engage in hiring discrimination are less likely to remain in business six years later.
Commentary: I have always found it ironic that sociologists and non-economists have resisted the implications of taste-based discrimination theory. If discrimination in markets is truly not based on performance or productivity, there must be *some* consequence. However, a lot of sociologists have a strong distrust of markets that draws their attention away from this rather simple implication of price theory. I don’t know the entire literature on taste-based discrimination, but it’s good to see this appear.
Recently, the topic of quality advising has come up in conversation. The question is: are there actually good advisers? Or is it mainly a selection effect? A related question: do star advisers make star students? I’d be interested if readers know if there is a literature on this question. Here are some hypotheses:
- It is easy to be a good adviser by simply not being a bad adviser. A lot of advisers undermine students by being either negligent/non-responsive or overly aggressive, stressing students out. Even if you don’t have any special advice for students, you can probably improve your students’ outcomes by actually meeting with them, not being a jerk, and doing paperwork on time.
- Having a star adviser is probably neutral on average. My hypothesis is that some academics are very good at multitasking. When they become famous, they can keep up the work and help students. Others disappear into a world of committees and administrative posts and abandon their students.
- Advisers with a long string of strong placements tend to be at top schools, which suggests a selection effect. Most employ the “reliable/stable/nice model” and attract good students. A few employ “survival of the fittest” – they only take students who are capable of high quality work and weed out the rest. The Indiana model (support students at all skill levels) seems to be very rare.
Add your own ideas in the comments.
A few months ago, we discussed the general shift from blogs to social media and anonymous boards. But a question remains: if that’s true, why bother with blogs at all? In fact, our evil twin blog surrendered and admitted defeat, while retreating into Facebook. Why continue?
Answer: Only a blog does what a blog does well. In other words, blogs are good at specific things and social media is good at other things.
- Searchable – orgtheory is completely searchable going back to the first post in 2006. Twitter only allows searches of the last 3k tweets (which is, like, 5 minutes for some Tweeters like Tressie Mc). Facebook is basically unsearchable for content.
- Accountability and identity – Blogs are good for creating an identity, which means accountability. Even if we used pseudonyms, we’d still create an identity that would help you assess the quality of the post.
- Quality – I’m sorry, but most social media simply isn’t good at producing high quality content. Twitter may be fun, but it won’t replace a sustained argument. Facebook allows length, but it is often buried deep inside a walled garden. A lot of social media is good for “in the moment discussion” rather than sustained truth seeking.
I love social media and I have accounts on Twitter, Facebook, and other platforms. But make no mistake: if you care about writing, blogs are a good format, much better than social media, which favors snark and anonymous sniping. So for now, I’m still McBloggin’.