Archive for the ‘academia’ Category
Over at Statistical Modelling, Andrew Gelman makes a very sensible point about peer review: it is only as good as your peers. Why do psychologists worship p-values? Because their peers approve of p-values in peer review. A few choice quotes:
In short, if an entire group of peers has a misconception, peer review can simply perpetuate error. We’ve seen this a lot in recent years, for example that paper on ovulation and voting was reviewed by peers who didn’t realize the implausibility of 20-percentage-point vote swings during the campaign, peers who also didn’t know about the garden of forking paths. That paper on beauty and sex ratio was reviewed by peers who didn’t know much about the determinants of sex ratio and didn’t know much about the difficulties of estimating tiny effects from small sample sizes.
To put it another way, peer review is conditional. Papers in the Journal of Freudian Studies will give you a good sense of what Freudians believe, papers in the Journal of Marxian Studies will give you a good sense of what Marxians believe, and so forth. This can serve a useful role. If you’re already working in one of these frameworks, or if you’re interested in how these fields operate, it can make sense to get the inside view. I’ve published (and reviewed papers for) the journal Bayesian Analysis. If you’re anti-Bayesian (not so many of these anymore), you’ll probably think all these papers are a crock of poop and you can ignore them, and that’s fine.
Read the whole thing.
The most recent DuBois Review has a really interesting article about how social movements push for legal change and how that fight changes the field of advocacy groups. Ellen Berrey’s “Making a Civil Rights Claim for Affirmative Action” is a historical review of how one University of Michigan student group fought for affirmative action and how that changed the other organizations at Michigan that were involved in racial student politics:
The politics of affirmative action are currently structured as a litigious conflict among elites taking polarized stances. Opponents call for colorblindness, and defenders champion diversity. How can marginalized activists subvert the dominant terms of legal debate? To what extent can they establish their legitimacy? This paper advances legal mobilization theory by analytically foregrounding the field of contention and the relational production of meaning among social movement organizations. The case for study is two landmark United States Supreme Court cases that contested the University of Michigan’s race-conscious admissions policies. Using ethnographic data, the paper analyzes BAMN, an activist organization, and its reception by other affirmative action supporters. BAMN had a marginalized allied-outsider status in the legal cases, as it made a radical civil rights claim for a moderate, elite-supported policy: that affirmative action corrects systemic racial discrimination. BAMN activists pursued their agenda by passionately defending and, at once, critiquing the university’s policies. However, the organization’s militancy remained a liability among university leaders, who prioritized the consistency of their diversity claims. The analysis forwards a scholarly understanding of the legacy of race-conscious policies.
Great addition to the literature on student mobilization.
A specific brand of high-protein chocolate milk improved the cognitive function of high school football players with concussions. At least that’s what a press release from the University of Maryland claimed a few weeks ago. It also quoted the superintendent of the Washington County Public Schools as saying, “Now that we understand the findings of this study, we are determined to provide Fifth Quarter Fresh [the milk brand] to all of our athletes.”
The problem is that the “study” was not only funded in part by the milk producer, but is unpublished, unavailable to the public and, based on the press release — all the info we’ve got — raises immediate methodological questions. Certainly there are no grounds for making claims about this milk in particular, since the control group was given no milk at all.
The summary also raises questions about the sample size. The total sample included 474 high school football players, both concussed and non-concussed. How many of them got concussions during the season? I would hope too few to provide statistical power — this NAS report suggests high schoolers sustain about 11 concussions per 10,000 football games and practices.
And even if the sample size is sufficient, it’s not clear that the results are meaningful. The press release suggests concussed athletes who drank the milk did significantly better on four of thirty-six possible measures — anyone want to take bets on the p-value cutoff?
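The multiple-comparisons worry here is easy to make concrete. A minimal back-of-the-envelope sketch, assuming the study used the conventional p < 0.05 cutoff and treating the 36 measures as independent tests (both assumptions on my part, since the study itself is unavailable):

```python
from math import comb

n_tests = 36   # measures reported in the press release
alpha = 0.05   # assumed significance cutoff

# Expected number of "significant" results if the milk does nothing:
expected_false_positives = n_tests * alpha  # about 1.8

# Probability of 4 or more significant results arising by chance alone,
# from the binomial distribution over 36 independent tests:
p_four_or_more = sum(
    comb(n_tests, k) * alpha**k * (1 - alpha)**(n_tests - k)
    for k in range(4, n_tests + 1)
)

print(expected_false_positives)
print(round(p_four_or_more, 3))
```

Under these assumptions, pure noise yields roughly two "significant" measures on average, and four or more about one time in ten — which is why "four of thirty-six" on its own is not impressive.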
Maryland put out the press release nearly four weeks ago. Since then there’s been a slow build of attention, starting with a takedown by Health News Review on January 5, before the story was picked up by a handful of news outlets and, this weekend, by Vox. In the meantime, the university says in fairly vague terms that it’s launched a review of the study, but the press release is still on the university website, and similarly questionable releases (“The magic formula for the ultimate sports recovery drink starts with cows, runs through the University of Maryland and ends with capitalism” — you can’t make this stuff up!) are up as well.
Whoever at the university decided to put out this press release should face consequences, and I’m really glad there are journalists out there holding the university’s feet to the fire. But while the university certainly bears responsibility for the poor decision to go out there and shill for a sponsor in the name of science, it’s worth noting that this is only half of the story.
There’s a lot of talk in academia these days about the status of scientific knowledge — about replicability, bias, and bad incentives, and how much we know that “just ain’t so.” And there’s plenty of blame to go around.
But in our focus on universities’ challenges in producing scientific knowledge, sometimes we underplay the role of another set of institutions: the media. Yes, there’s a literature on science communication that looks at the media as an intermediary between science and the public. But a lot of it takes a cognitive angle on audience reception, and it’s got a heavy bent toward controversial science, like climate change or fracking.
More attention to media as a field, though, with rapidly changing conditions of production, professional norms and pathways, and career incentives, could really shed some light on the dynamics of knowledge production more generally. It would be a mistake to look back to some idealized era in which unbiased but hard-hitting reporters left no stone unturned in their pursuit of the public interest. But the acceleration of the news cycle, the decline of journalism as a viable career, the impact of social media on news production, and the instant feedback on pageviews and clickthroughs all tend to reinforce a certain breathless attention to the latest overhyped university press release.
It’s not the best research that gets picked up, but the sexy, the counterintuitive, and the clickbait-ish. Female-named hurricanes kill more than male hurricanes. (No.) Talking to a gay canvasser makes people support gay marriage. (Really no.) Around the world, children in religious households are less altruistic than children of atheists. (No idea, but I have my doubts.)
This kind of coverage not only shapes what the public believes, but it shapes incentives in academia as well. After all, the University of Maryland is putting out these press releases because it perceives it will benefit, either from the perception it is having a public impact, or from the goodwill the attention generates with Fifth Quarter Fresh and other donors. Researchers, in turn, will be similarly incentivized to focus on the sexy topic, or at least the sexy framing of the ordinary topic. And none of this contributes to the cumulative production of knowledge that we are, in theory, still pursuing.
None of this is meant to shift the blame for the challenges faced by science from the academic ecosystem to the realm of media. But if you really want to understand why it’s so hard to make scientific institutions work, you can’t ignore the role of media in producing acceptance of knowledge, or the rapidity with which that role is changing.
Last time we discussed how to say no. But there remains the question: what should you say no to? It helps to start with a baseline – what should ALL academics do?
- Say yes to teaching at least one service course a year. Why? Most programs survive on enrollments. Unless you are a star faculty member who can opt out, reliably doing some basic teaching will earn you goodwill.
- Say yes to serving on one “heavy” committee per year. In most programs, this includes grad admissions, recruitment/evaluation, and the governing committee of your program. In teaching-intensive programs, there are various undergrad affairs that require attention.
- The tenured should say “yes” to one committee outside your program. For example, for multiple years, I served on our seed grant committee at IU. For two years, I did NSF reviews.
If you say yes to these requests, then you have earned the right to turn down other requests. But these are minimal. Here are the requests that you should say yes to *occasionally*:
- Journal/book manuscript reviews. My rule is simple: I say “yes” until I have three in the hopper. Then people have to wait or go elsewhere. By saying yes, you ensure that you are giving back, you help with quality control, and you learn new research.
- Requests for research collaboration. You only have so many hours a day, but you can say yes to really strong projects or people who have a good track record.
- Dissertation committees. Most people can handle a few students, and, to be honest, being a third or fourth reader requires little work.
- Fancy committees outside the university for non-profits or the profession.
Notice that each of these recommendations isn’t about the activity itself. It’s simply a time budget. In my experience, for example, I simply can’t do a good job reviewing if I have more than one or two papers on my desk. Either I write bad reviews or I slow down a lot. Either way, it doesn’t help anyone.
So what should you avoid in most cases?
- Remedial work with students. Most colleges have basic support for writing and math. It is your job to teach college level material. Unless your job is in the writing program, it is not your job to teach basic writing, though you can certainly offer your opinions.
- Personal therapy. This is really tough. But once again, in many cases, you probably aren’t qualified to give advice on issues like drug problems or sexual assault. Be compassionate but get them to the person who is qualified to help.
- Fluff committees. These are hard to define in the abstract. But there are committees people set up just so they can sound off, not to accomplish anything of substance. In general, the core of the university is teaching, research, and fundraising. If it isn’t one of those, you should be careful.
- Pre-tenure consulting: Unless they are paying serious cash, you should probably say no. If they pay, make sure the time commitment is limited.
- Toxic people. Even if the activity has merit, toxic people can make the experience miserable and pull you down. Avoid at all costs.
To summarize: You should always be a good citizen. Beyond that, budget for things that are central for academic work. Beyond that, just say no.
[Ha — I wrote this last night and set it to post for this morning — when I woke up saw that Fabio had beat me to it. Posting anyway for the limited additional thoughts it contains.]
Last week Fabio launched a heated discussion about whether economics is less “racially balanced” than other social sciences. Then on Friday Justin Wolfers (who has been a vocal advocate for women in economics) published an Upshot piece arguing that female economists get less credit when they collaborate with men.
The Wolfers piece covers research by Harvard economics PhD candidate Heather Sarsons, who used data on tenure decisions at top-30 economics programs in the last forty years to estimate the effects of collaboration (with men or women) on whether women get tenure, controlling for publication quantity and quality and so on. (Full paper here.) Only 52% of the women in this population received tenure, compared to 77% of the men.
The takeaway is that women got no marginal benefit (in terms of tenure decision) from coauthoring with men, while they received some benefit (but less than men did) if they coauthored with at least one other woman. Their tenure chances did, however, benefit as much as men’s from solo-authored papers. Sarsons’ interpretation (after ruling out several alternative possibilities) is that while women are given full credit when there is no question about their role in a study, their contributions are discounted when they coauthor with men.
What’s interesting from a sociologist’s perspective is that Sarsons uses a more limited data set from sociology as a comparison. Looking at a sample of 250 sociology faculty at top-20 programs, she finds no difference in tenure rates by gender, and no similar disadvantage from coauthorship.
While it would be nice to interpret this as evidence of the great enlightenment of sociology around gender issues, that is probably premature. Nevertheless, Sarsons points to one key difference between sociology and economics (other than differing assumptions about women’s contributions) that could potentially explain the divergence.
Sociology, as most of you probably know, has a convention of putting the author who made the largest contribution first in the authorship list. Economics uses alphabetical order. Other disciplines have their own conventions — lab sciences, for example, put the senior author last. This means that sociologists can infer a little bit more than economists about who played the biggest role in a paper from authorship order — information Sarsons suggests might contribute to women receiving more credit for their collaborative work.
This sounds plausible to me, although I also wouldn’t be surprised if the two disciplines made different assumptions, ceteris paribus, about women’s contributions. It might be worth looking at sociology articles with the relatively common footnote “Authors contributed equally; names are listed in alphabetical order” (or reverse alphabetical order, or by coin toss, or whatever). Of course such a note still provides information about relative contribution — 50-50, at least in theory — so it’s not an ideal comparison. But I would bet that readers mentally give one author more credit than the other for these papers.
That may just be the first author, due to the disciplinary convention. But one could imagine that a male contributor (or a senior contributor) would reap greater rewards for these kinds of collaborations. It wouldn’t say much about the hypothesis if that were not the case, but if men received more advantage from papers with explicitly equal coauthors, that would certainly be consistent with the hypothesis that first-author naming conventions help women get credit.
Okay, maybe that’s a stretch. Sarsons closes by noting that she plans to expand the sociology sample and add disciplines with different authorship conventions. It will be challenging to tease out whether authorship conventions really help women get due credit for their work, and I’m skeptical that that’s 100% of the story. But even if it could fix part of the problem, what a simple solution to ensure credit where credit is due.
Academics have a problem saying “no” to people. At the Get a Life, PhD blog, Vilna Treitler has an interesting solution. Have a group of friends give you advice on when to say no:
Forming a “No Committee” helped me get perspective on my limits. Let me tell you about my No Committee. On it, I have two friends who are both professors and the third person is my life partner. Their qualifications: they care about me, they know the academy well enough to know what challenges are there for me, and they keep up with me so that they know how much is too much for me to handle.
How do I use them? When an opportunity comes to me, I send them an email with the subject line “Here’s one for the No Committee” and ask them for their advice. In the email I describe the opportunity, what information I have about what it entails (and whether I can trust the information I have), and further, I normally list all my reasons for saying yes to this thing plus whatever doubts I might have, and I hit “send.”
You still have sixteen hours left (and counting) to submit a paper for ASA 2016 — barring the common-but-not-enough-to-count-on deadline extension. Choosing what sessions to submit to, though, is more a matter of lore than of evidence (though olderwoman once posted some limited but very welcome data). If you’re planning to submit but haven’t yet decided what session to target, here are four tips I give graduate students.
- Go with the organizer over the topic. As a grad student, the gut instinct is often to look for the session whose name sounds like it best fits your topic. But if the person organizing that session does work very different from yours, you might think twice. Instead, look for organizers who seem simpatico. This doesn’t mean you have to know them, or that you should submit to a session that’s not a reasonable subject fit. But if you look at the organizer’s work, and think, “Yeah, that’s my kind of sociology!,” chances are better they’ll think the same of yours.
- Section sessions on current events often get few submissions. When sections choose topics, they sometimes go for “timely.” So there are three section sessions this year, for example, that reference Black Lives Matter. The year after the financial crisis, there were a bunch of Great Recession panels. The problem is that academia is slow, and not that many people have a paper ready to go about something that happened last year. So if you do, or even if you can frame a related paper in a way that speaks to the topic (“What the Civil Rights Movement Can Tell Us About #blacklivesmatter”), your odds get better. Variation: Sections also frequently pick weirdly specific topics, mostly because somebody in a business meeting said “Hey, wouldn’t it be cool if we did a panel about the intersection between X, Y, and Z,” and nobody said no. If you can make the case that your paper fits in such a panel, your odds may go up. Caveat: sometimes organizers recruit submissions for such panels, so some of the slots may be semi-promised already. Still, I think these are good bets, if they fit your paper.
- While section sessions produce only one panel, regular session topics can produce several. Without knowing this about regular sessions, you might think that you should submit to the narrower regular session on “Inequality at the Intersection of Race, Class and Gender in the Global South” (okay I made that up) rather than the related, but much broader, “Inequality.” But the narrow topic will probably just result in one session. If “Inequality” gets 50 submissions, on the other hand, the organizer can make the case to ASA that they should really have five or six panels, and may well get them. Section sessions are limited; regular sessions can expand.
- Be aware that your backup choice is probably irrelevant. Unless things have changed fairly recently, because of the way the submission system works, almost all panels are filled with papers whose authors listed the session as their first choice. Chances are slim that there will be any space in your second choice panel by the time the organizer sees your paper. So unless you pick a really unpopular session for your second choice, your paper will probably go from your first choice to a roundtable with only the briefest of stops in between.
Finally, an advanced tip: The down side of successfully picking a session that’s going to get few submissions is that you increase your chances of ending up in one that also has a small audience. Sure, that astrosociology session may accept all four of the papers it receives. But that can also mean there just aren’t that many people interested in astrosociology. The current events panel can be an exception to this, as the lack of submissions reflects the newness of the topic, not its unpopularity.
All this is useful to know. But in the end, if you’ve got a paper you’ve put a lot of work into, and you’re pretty confident in, just go for the session you want. Even the most competitive session won’t be top-journal selective.