orgtheory.net

putting limits on the academic workday

with one comment

Today, amid various administrative tasks, such as scheduling meetings with students, I decided to RSVP yes to an upcoming evening talk. I didn't make this decision lightly, as it involved coordinating schedules with another party (i.e., a fellow dual-career parent).

With technology such as email, increasing job precarity, and the belief that face time signals productivity and commitment, the workday in the US has lengthened, blurring boundaries to the point that work can crowd out responsibilities to family, community, hobbies, and self-care. However, one Ivy League institution is exhorting its members to rethink making evening events and meetings the norm.

In this nuanced statement issued to department chairs, Brown University’s provost outlines the stakes and consequences of an elongated workday:

This burden [of juggling work and family commitments] may disproportionately affect female faculty members. Although data on Brown’s faculty is not available, national statistics indicate that male faculty members (of every rank) are more likely than female faculty members (of every rank) to have a spouse or partner whose comparably flexible work schedule allows that spouse or partner to handle the bulk of evening-time household responsibilities. Put differently, male faculty members are more likely than female faculty members to have the household support to attend campus events after 5:30. We must be attuned to issues of gender equity when we think about program scheduling. We must also take into consideration the particular challenges faced by single parents on the faculty when required to attend events outside the regular hours of childcare.


Written by katherinechen

September 28, 2016 at 7:16 pm

the pager paper, sociological science, and the journal process

with 7 comments

Last week, we discussed Devah Pager’s new paper on the correlation between discrimination in hiring and firm closure. As one would expect from Pager, it’s a simple and elegant paper using an audit study to measure the prevalence and consequences of discrimination in the labor market. In this post, I want to use the paper to talk about the journal publication process. Specifically, I want to discuss why this paper appeared in Sociological Science.

First, it may be the case that Professor Pager went directly to Sociological Science without trying another peer-reviewed journal. If so, then I congratulate both Pager and Sociological Science. By putting a high-quality paper into public access, both Professor Pager and the editors of Sociological Science have shown that we don't need the lengthy and cumbersome developmental review system to get work out there.

Second, it may be the case that Professor Pager tried another journal, probably ASR or AJS or an elite specialty journal, and it was rejected. If so, that raises an important question: what specifically was "wrong" with this paper? Whatever one thinks of the Becker theory of racial discrimination, one can't fault the paper for lacking a "framing," and its research design is simple and clean. One can't critique the statistical technique, because the analysis is a simple comparison of means. One can't critique the importance of the finding: the correlation between discrimination in hiring and firm closure is important to know and notable in size. And, of course, the paper is short and clearly written.
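For readers who want to see what "a simple comparison of means" looks like in an audit study, here is a minimal sketch. The numbers are invented for illustration and are not Pager's data; her actual design (matched testers applying to the same employers) involves more care than this.

```python
import numpy as np
from statsmodels.stats.proportion import proportions_ztest

# Hypothetical audit-study data: matched pairs of testers with identical
# resumes apply to the same employers, and we compare callback rates.
callbacks = np.array([34, 15])        # callbacks received (white, black testers)
applications = np.array([150, 150])   # applications submitted by each group

z, p = proportions_ztest(callbacks, applications)
print(f"callback rates: {callbacks / applications}")   # e.g., [0.227 0.100]
print(f"z = {z:.2f}, p = {p:.4f}")                     # test of equal rates
```

The whole analysis is a difference between two proportions; there is no exotic machinery for a reviewer to object to.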

Perhaps the only criticism I can come up with is a sort of "identification fundamentalism." Perhaps reviewers brought up the fact that discrimination was not randomly assigned to firms, so you can't infer anything from the correlation. That objection is bizarre, because it would render Becker's thesis untestable. What experimental design would let you get a random selection of firms to suddenly become racist in their hiring practices? Here, the only sensible approach is Bayesian: you collect high-quality observational data and revise your beliefs accordingly. This criticism, if it was made, doesn't hold up on reflection. I wonder what the grounds for rejection could possibly have been, aside from knee-jerk anti-rational-choice comments or discomfort with a finding that markets do have some corrective to racial discrimination.
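To make the Bayesian point concrete, here is a minimal sketch of belief revision with observational data, using a Beta-Binomial model. The counts are hypothetical, not Pager's; the point is only that a prior plus high-quality observational data yields a defensible posterior, no randomization required.

```python
from scipy.stats import beta

# Hypothetical data: suppose 18 of 50 firms caught discriminating in an
# audit later closed. Start from a flat Beta(1, 1) prior on the closure
# rate of discriminating firms and update on the observed counts.
prior_a, prior_b = 1, 1
closed, stayed_open = 18, 32

posterior = beta(prior_a + closed, prior_b + stayed_open)
lo, hi = posterior.interval(0.95)
print(f"posterior mean closure rate: {posterior.mean():.2f}")  # ~0.37
print(f"95% credible interval: ({lo:.2f}, {hi:.2f})")
```

Running the same update for non-discriminating firms and comparing the two posteriors is exactly the kind of belief revision the paper supports.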

Bottom line: Pager and the Sociological Science crew are to be commended. Maybe Pager just wanted this paper "out there," or maybe she got tired of the review process. Either way, three cheers for Pager and the Soc Sci crew.


Written by fabiorojas

September 28, 2016 at 12:10 am

the donald and the mule

with 3 comments

In the 1950s, Isaac Asimov wrote a series of books known as the Foundation series. The plot is simple and fascinating. Far in the future, civilization is collapsing, Roman Empire-style. A small group of mathematical social scientists ("psychohistorians") use all kinds of models to predict what might happen to humanity. They decide to let the Empire fall and replace it with an alternative social order originating on a marginal planet called Foundation. And of course, the psychohistorians pull the strings to make this happen.

The sequel introduces a fascinating twist. For the first hundred or so years, everything goes according to plan. Foundation becomes a strong city-state and starts to restore political order. Then, all of a sudden, a Napoleon-like leader called "the Mule" shows up and effortlessly conquers vast expanses of space. What happened? None of the social science models predicted this.

It turns out that the Mule is a mutant with the power of mind control. He conquers populations simply by adjusting their emotions and memories, so that they immediately fall into his lap. Eventually, a hidden group of psychohistorians, "the Second Foundation," defeats the Mule, he becomes normal, and social progress gets back on track.

This is the best way to explain my model of the Trump candidacy. He is a type of figure that does not normally get consideration in social theory. He isn't quite as all-powerful as the Mule, but he shares one key trait: a unique ability to appeal directly to a large group of people and bypass the normal channels of influence. The Mule had psychic powers; the Donald has the skill to manipulate the media. Weber, of course, spoke about charisma, but few have really gone into depth and integrated an account of charisma into social theory systematically. That is why so many social scientists have difficulty talking about Trump, even after the fact. It's about time we thought more carefully about these rare but important figures.


Written by fabiorojas

September 27, 2016 at 12:37 am

party in the street: why study failed movements?

leave a comment »

This is the last post responding to Professor Amenta's lengthy, supportive, and critical take on Party in the Street. We earlier discussed whether it was wise to group Afghanistan and Iraq together and whether our explanation of the anti-Vietnam War movement was valid. In the review, he asks: if the antiwar movement of the 2000s failed, what is the point of studying it?

Short answer: Don’t select on the dependent variable.

Long answer: In the social sciences, we often exhibit a bias toward success. We like to talk about Apple and Google, but not Pets.com. That's a bad thing, especially if you want to study the outcomes of social processes. Failures are just as important as successes in the social sciences. You need a random sample of events, or a sample whose selection bias you can model. So, in movements, we shouldn't study only those that succeed. We need comparisons. And detailed case studies of a single movement are one way to start.
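A toy simulation makes the point. Suppose a tactic has no effect on whether a movement succeeds. If you examine only successful movements, the tactic can still look like a winning formula, because without a comparison group you cannot see that failures used it just as often. This is my illustration, not an analysis from the book.

```python
import random

random.seed(0)

# 10,000 hypothetical movements: each flips a coin on using a tactic,
# and success is pure luck (10% chance), independent of the tactic.
movements = [{"tactic": random.random() < 0.5,
              "success": random.random() < 0.1}
             for _ in range(10_000)]

winners = [m for m in movements if m["success"]]
print(f"tactic rate among successes: "
      f"{sum(m['tactic'] for m in winners) / len(winners):.2f}")      # ~0.50
print(f"tactic rate among all cases: "
      f"{sum(m['tactic'] for m in movements) / len(movements):.2f}")  # ~0.50
```

Half of the winners used the tactic, which sounds impressive until you see that half of everyone did. Only the comparison reveals that the tactic does nothing.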

Peace movements are a class of movements that are notoriously unsuccessful, as we note in the book. By studying one in detail and comparing it with others, we can develop a sense of why that might be the case and then ask about other movements. Note: if you want a highly meritorious study that uses a random sample of successful and unsuccessful movement groups, see Kathleen Blee's award-winning Democracy in the Making.


Written by fabiorojas

September 26, 2016 at 12:06 am

just your average minimalist finnish alt-folk group

leave a comment »

MyBubba.


Written by fabiorojas

September 25, 2016 at 12:14 am

new article on insider activism by briscoe and gupta

leave a comment »

Forrest Briscoe and Abhinav Gupta have a new article, "Social Activism in and Around Organizations," in The Academy of Management Annals that reviews recent work on movements inside organizations. The abstract:

Organizations are frequent targets for social activists aiming to influence society by first altering organizational policies and practices. Reflecting a steady rise in research on this topic, we review recent literature and advance an insider-outsider framework to help explicate the diverse mechanisms and pathways involved. Our framework distinguishes between different types of activists based on their relationship with targeted organizations. For example, "insider" activists who are employees of the target organization have certain advantages and disadvantages when compared with "outsider" activists who are members of independent social movement organizations. We also distinguish between the direct and indirect (or spillover) effects of social activism. Much research has focused on the direct effects of activism on targeted organizations, but often the effects on non-targeted organizations matter more for activists' goals of achieving widespread change. Drawing on this framework, we identify and discuss eight specific areas that are in need of further scholarly attention.

It’s really a wonderful article that gets into the subtleties of this area of work. Highly recommended!


Written by fabiorojas

September 23, 2016 at 12:10 am

bad reporting on bad science

with 2 comments

This Guardian piece about bad incentives in science was getting a lot of Twitter mileage yesterday. “Cut-throat academia leads to natural selection of bad science,” the headline screams.

The article is reporting on a new paper by Paul Smaldino and Richard McElreath, and features quotes from the authors like, “As long as the incentives are in place that reward publishing novel, surprising results, often and in high-visibility journals above other, more nuanced aspects of science, shoddy practices that maximise one’s ability to do so will run rampant.”

Well. Can’t disagree with that.

But when I clicked through to read the journal article, the case didn't seem nearly so strong. The article has two parts. The first is a review of review pieces published between 1962 and 2013 that examined the levels of statistical power reported in studies in a variety of academic fields. The second is a formal model of an evolutionary process through which incentives for publication quantity drive the spread of low-quality methods (such as underpowered studies) that increase both productivity and the likelihood of false positives.

The formal model is kind of interesting, but it just shows that the dynamics are plausible — something I (and everyone else in academia) was already pretty much convinced of. The headlines are really based on the first part of the paper, which purports to show that statistical power in the social and behavioral sciences hasn't increased over the last fifty-plus years, despite repeated calls for it to do so.
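To give a flavor of the formal model (this is my own cartoon of the selection dynamic, not Smaldino and McElreath's actual specification): labs that spend less effort per study publish more, and if the most productive labs are preferentially imitated, effort gets ratcheted down even though no one intends it.

```python
import random

random.seed(1)
N_LABS, GENERATIONS = 100, 2000

# Each lab has an "effort" level in [0.1, 1.0]; lower effort means more,
# cheaper (and less reliable) studies per period.
labs = [random.uniform(0.1, 1.0) for _ in range(N_LABS)]

for _ in range(GENERATIONS):
    output = [1.0 / e for e in labs]          # papers per period
    dead = random.randrange(N_LABS)           # a random lab turns over
    best = max(range(N_LABS), key=lambda i: output[i])
    # the most productive lab is imitated, with a little mutation
    labs[dead] = min(1.0, max(0.1, labs[best] + random.gauss(0, 0.02)))

print(f"mean effort after selection: {sum(labs) / len(labs):.2f}")  # near 0.1
```

No lab ever chooses to do shoddy work; the population just drifts there because productivity is what gets copied.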

Well, that part of the paper basically looks at all the papers that reviewed levels of statistical power in studies in a particular field, focusing especially on reviews that reported small effect sizes. (The logic is that small effects are not only the most common in these fields, but also the most likely to be false positives resulting from inadequate power.) There were 44 such reviews. The key point is that average reported statistical power has stayed stubbornly flat. The conclusion the authors draw is that bad methods are crowding out good ones, even though we know better, through some combination of poor incentives and selection that rewards researcher ignorance.
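For readers who want the arithmetic behind "underpowered": with a conventionally small effect size (Cohen's d = 0.2), typical sample sizes leave a study with little chance of detecting a true effect. Here is a back-of-the-envelope normal-approximation calculation (my sketch; the reviewed papers use more careful methods):

```python
from scipy.stats import norm

def power(d, n_per_group, alpha=0.05):
    """Approximate power of a two-sample test for standardized effect d."""
    z_crit = norm.ppf(1 - alpha / 2)
    delta = d * (n_per_group / 2) ** 0.5   # noncentrality parameter
    return 1 - norm.cdf(z_crit - delta) + norm.cdf(-z_crit - delta)

print(f"d = 0.2, n =  64 per group: power = {power(0.2, 64):.2f}")   # ~0.20
print(f"d = 0.2, n = 394 per group: power = {power(0.2, 394):.2f}")  # ~0.80
```

At 64 subjects per group, a real but small effect is detected only about one time in five; reaching the conventional 0.8 standard takes roughly 400 per group. That is the gap the reviews keep finding.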

[Figure: average reported statistical power in the 44 reviews, plotted by year of publication, 1962–2013. The trend is flat; sociology's lone 1974 review is a positive outlier at 0.55.]
The problem is that the evidence presented in the paper is hardly strong support for this claim. This is not a random sample of papers in these fields, or anything like it. Nor is there other evidence to show that the reviewed papers are representative of papers in their fields more generally.

More damningly, though, the fields that are reviewed change rather dramatically over time. Nine of the first eleven studies (those before 1975) review papers from education or communications. The last eleven (those after 1995) include four from aviation, two from neuroscience, and one each from health psychology, software engineering, behavioral ecology, international business, and social and personality psychology. Why would we think that underpowering in the latter fields at all reflects what’s going on in the former fields in the last two decades? Maybe they’ve remained underpowered, maybe they haven’t. But statistical cultures across disciplines are wildly different. You just can’t generalize like that.

The news article goes on to paraphrase one of the authors as saying that “[s]ociology, economics, climate science and ecology” (in addition to psychology and biomedical science) are “other areas likely to be vulnerable to the propagation of bad practice.” But while these fields are singled out as particularly bad news, not one of the reviews covers the latter three fields (perhaps that’s why the phrasing is “other areas likely”?). And sociology, which had a single review in 1974, looks, ironically, surprisingly good — it’s that positive outlier in the graph above at 0.55. Guess that’s one benefit of using lots of secondary data and few experiments.

The kicker is that I think the authors are pointing to a real and important problem here. I absolutely buy that the incentives are there to publish more — and, equally important, cheaply — and that this undermines the quality of academic work. And I think that reviewing the reviews of statistical power, as this paper does, is worth doing, even if the fields being reviewed aren't consistent over time. It's also hard to untangle whether the authors actually said things that oversold the research or whether the Guardian just reported it that way.

But at least in the way it’s covered here, this looks like a model of bad scientific practice, all right. Just not the kind of model that was intended.

[Edited: Smaldino points on Twitter to another paper that offers additional support for the claim that power hasn’t increased in psychology and cognitive neuroscience, at least.]

Written by epopp

September 22, 2016 at 12:28 pm