orgtheory.net

Archive for the ‘academia’ Category

bad reporting on bad science

with 2 comments

This Guardian piece about bad incentives in science was getting a lot of Twitter mileage yesterday. “Cut-throat academia leads to natural selection of bad science,” the headline screams.

The article is reporting on a new paper by Paul Smaldino and Richard McElreath, and features quotes from the authors like, “As long as the incentives are in place that reward publishing novel, surprising results, often and in high-visibility journals above other, more nuanced aspects of science, shoddy practices that maximise one’s ability to do so will run rampant.”

Well. Can’t disagree with that.

But when I clicked through to read the journal article, the case didn't seem nearly so strong. The article has two parts. The first is a review of review pieces published between 1962 and 2013 that examined the levels of statistical power reported in studies in a variety of academic fields. The second is a formal model of an evolutionary process through which incentives for publication quantity drive the spread of low-quality methods (such as underpowered studies) that increase both productivity and the likelihood of false positives.

The formal model is kind of interesting, but just shows that the dynamics are plausible — something I (and everyone else in academia) was already pretty much convinced of. The headlines are really based on the first part of the paper, which purports to show that statistical power in the social and behavioral sciences hasn’t increased over the last fifty-plus years, despite repeated calls for it to do so.
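(For the curious: the flavor of the model is easy to convey in a few lines of code. To be clear, this is a toy sketch of the general selection dynamic, not Smaldino and McElreath's actual model; the population size, payoff function, and mutation rate below are all invented for illustration.)

```python
# Toy replicator-style dynamic: labs that cut methodological corners
# publish more, and productive labs get imitated. NOT the paper's model.
import random

N_LABS = 100        # hypothetical population of labs
GENERATIONS = 2000
MUTATION = 0.05     # chance an imitating lab tweaks the inherited effort

def payoff(effort):
    """Publications per generation: lower methodological effort means
    cheaper papers, so output rises as effort falls (an assumption)."""
    return 10.0 / (1.0 + 9.0 * effort)   # effort ranges over [0, 1]

labs = [random.random() for _ in range(N_LABS)]   # initial effort levels

for _ in range(GENERATIONS):
    # Selection: compare two random labs; the more productive one's
    # methods get copied into a randomly chosen slot in the population.
    a, b = random.sample(labs, 2)
    winner = a if payoff(a) >= payoff(b) else b
    if random.random() < MUTATION:               # occasional imperfect copy
        winner = min(1.0, max(0.0, winner + random.gauss(0, 0.05)))
    labs[random.randrange(N_LABS)] = winner

print(f"mean methodological effort: {sum(labs) / N_LABS:.2f}")
# Drifts toward 0: low-effort, high-output methods spread through the field.
```

The point is just that if productivity is what gets copied, methodological effort erodes; nobody has to behave badly on purpose.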

Well, that part of the paper basically looks at all the papers that reviewed levels of statistical power in studies in a particular field, focusing especially on the power those studies had to detect small effect sizes. (The logic is that small effects are not only the most common in these fields, but also the most likely to be false positives resulting from inadequate power.) There were 44 such reviews. The key point is that average reported statistical power has stayed stubbornly flat. The conclusion the authors draw is that bad methods are crowding out good ones, even though we know better, through some combination of poor incentives and selection that rewards researcher ignorance.
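That parenthetical logic is worth making concrete. Here's a back-of-the-envelope calculation of the share of statistically significant results that are actually true (the "positive predictive value"); the assumed fraction of tested hypotheses that are true is my invention, not a number from the paper.

```python
# Why low power inflates false positives among significant findings.
ALPHA = 0.05   # conventional per-test false-positive rate

def ppv(power, prior):
    """Share of significant results that are true positives (PPV)."""
    true_pos = power * prior          # true effects correctly detected
    false_pos = ALPHA * (1 - prior)   # null effects wrongly "detected"
    return true_pos / (true_pos + false_pos)

for power in (0.2, 0.5, 0.8):
    print(f"power={power:.1f}: PPV={ppv(power, prior=0.1):.2f}")
# power=0.2: PPV=0.31  -> most significant findings are false positives
# power=0.5: PPV=0.53
# power=0.8: PPV=0.64
```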

[Figure from the paper: average reported statistical power in the 44 reviews, plotted by year; the trend is essentially flat, with sociology's single 1974 review the positive outlier at 0.55.]

The problem is that the evidence presented in the paper is hardly strong support for this claim. This is not a random sample of papers in these fields, or anything like it. Nor is there other evidence to show that the reviewed papers are representative of papers in their fields more generally.

More damningly, though, the fields that are reviewed change rather dramatically over time. Nine of the first eleven studies (those before 1975) review papers from education or communications. The last eleven (those after 1995) include four from aviation, two from neuroscience, and one each from health psychology, software engineering, behavioral ecology, international business, and social and personality psychology. Why would we think that underpowering in the latter fields at all reflects what’s going on in the former fields in the last two decades? Maybe they’ve remained underpowered, maybe they haven’t. But statistical cultures across disciplines are wildly different. You just can’t generalize like that.

The news article goes on to paraphrase one of the authors as saying that “[s]ociology, economics, climate science and ecology” (in addition to psychology and biomedical science) are “other areas likely to be vulnerable to the propagation of bad practice.” But while these fields are singled out as particularly bad news, not one of the reviews covers the latter three fields (perhaps that’s why the phrasing is “other areas likely”?). And sociology, which had a single review in 1974, looks, ironically, surprisingly good — it’s that positive outlier in the graph above at 0.55. Guess that’s one benefit of using lots of secondary data and few experiments.
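For a rough sense of what an average power of 0.55 means in practice, here's a quick calculation using the conventional benchmark of a "small" effect (Cohen's d = 0.2) in a two-sided two-sample t-test; those conventions are my assumptions, not the paper's.

```python
# Sample sizes implied by different power targets for a small effect.
from statsmodels.stats.power import TTestIndPower

solver = TTestIndPower()
for target in (0.55, 0.80):
    n = solver.solve_power(effect_size=0.2, alpha=0.05, power=target)
    print(f"n per group for power {target:.2f}: {n:.0f}")
# Power 0.55 needs roughly 220 subjects per group; the conventional
# 0.80 benchmark needs roughly 394.
```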

The killer is, I think the authors are pointing to a real and important problem here. I absolutely buy that the incentives are there to publish more — and equally important, cheaply — and that this undermines the quality of academic work. And I think that reviewing the reviews of statistical power, as this paper does, is worth doing, even if the fields being reviewed aren’t consistent over time. It’s also hard to untangle whether the authors actually said things that oversold the research or if the Guardian just reported it that way.

But at least in the way it’s covered here, this looks like a model of bad scientific practice, all right. Just not the kind of model that was intended.

[Edited: Smaldino points on Twitter to another paper that offers additional support for the claim that power hasn’t increased in psychology and cognitive neuroscience, at least.]

Written by epopp

September 22, 2016 at 12:28 pm

how ‘who you worked with’ doesn’t work

“Who did you work with?” It’s a question applicants get all the time on the job market, and for good reason: if you know someone’s dissertation advisor, you probably know a bit more about how that person works, the kinds of questions they study, the sorts of methods and theory they use. Of course, the American job market doesn’t want to hire disciples, so a student can’t be too close to the teacher: there has to be some difference, theoretical or methodological, creative or substantive. Yet there needs to be some commonality too, if nothing more than a commonality of competence. Knowing someone’s mentor has chops is supposed to translate into knowing that person has chops as well. Those of us who have grad students have them because we believe that being a good sociologist can be taught, and that skills—even excellence—can be reproduced. And so status moves forward, down the genealogical line: begat, begat, begat.

There are a lot of problems with this focus on individual-level mentoring, with an advisor's status functioning as a high-level credential (e.g., "Oh, X is solid. She worked with Y"). First off, the sociology of it is not at all obvious to me. It's simply an empirical question how much a mentor matters in the formation of good scholars: we are, after all, a big wide community, and why can't it take a village to raise a sociologist? That "village" might extend beyond a graduate school to the discipline as a whole: think of a student at a low-prestige grad program whose mentor is neither well-known nor especially invested in the discipline (or in grad students). Yet this grad student reads widely, attends conferences, and networks assertively, finding other people to read her work. She doesn't have the currency of a high-status mentor, but she might well have some good publications. And yet when I think about that student's chances on the market, it's striking how much the status of her mentor (and of her university) still matters.

It's nothing new to point out the deep irony of a discipline so committed to fighting inequality in the world at large maintaining such deep inequalities in its own house. And a focus on mentors as tokens of worth, while important and understandable, can play a significant role in maintaining those inequalities. What if, instead of thinking of ourselves as a guild of masters and apprentices, we thought of ourselves as a community of practitioners, eager to help everyone get better at what they do? I have no idea how that would work out practically, but it's worth thinking about, and I'll write more about it. If you have any thoughts, please do let me know in the comments or over e-mail.


Written by jeffguhin

September 7, 2016 at 6:36 pm

Posted in academia


agreements and disagreements with rob warren

Rob Warren, of the University of Minnesota, wrote some very engaging and insightful comments about his time as the editor of Sociology of Education. Jeff Guhin covered this last week. Here, I’ll add my own comments. First, a strong nod of agreement:

First, a large percentage of papers had fundamental research design flaws. Basic methodological problems—of the sort that ought to earn a graduate student a B- in their first-year research methods course—were fairly common. (More surprising to me, by the way, was how frequently reviewers seemed not to notice such problems.) I’m not talking here about trivial errors or minor weaknesses in research designs; no research is perfect. I’m talking about problems that undermined the author’s basic conclusions. Some of these problems were fixable, but many were not.

Yes. Professor Warren is correct. Once you are an editor, or simply an older scholar who has read a lot of journal submissions, you quickly realize that there are a lot of papers that really, really flub research methods 101. For example, a lot of papers rely on convenience samples, which lead to biased results. Warren has more on this issue.

Now, let me get to where I think Warren is incorrect:

Second, and more surprising to me: Most papers simply lacked a soul—a compelling and well-articulated reason to exist. The world (including the world of education) faces an extraordinary number of problems, challenges, dilemmas, and even mysteries.  Yet most papers failed to make a good case for why they were necessary. Many analyses were not well motivated or informed by existing theory, evidence, or debates. Many authors took for granted that readers would see the importance of their chosen topic, and failed to connect their work to related issues, ideas, or discussions. Over and over again, I kept asking myself (and reviewers also often asked): So what?

About five years ago, I used to think this way. Now, I've mellowed and come to a more open-minded view. Why? In the past, I have rejected a fair number of papers on "framing" grounds. Later, I would see them published in other journals, often with high impact. Also, in my own career, leading journals have rejected my work on "framing" grounds, and when the work was published in another leading journal, it got cited. The framing wasn't that bad. Lesson? A lot of complaints about framing are actually arbitrary. Instead, let the work get published and let the wider community decide, not the editor and a few peer reviewers.

The evidence on the reliability of peer review suggests that there is a lot of randomness in the process. If some of these "soul-less" papers had been resubmitted a few months later, some of them would have been accepted with enthusiastic reviews. Here's a 2006 review of the literature on journal reliability and here's the classic 1982 article showing that a lot of journal acceptance is indeed random. Ironically, Peters and Ceci (1982) note that "serious methodological flaws" were a common reason given for rejecting resubmitted papers that the same journals had already accepted!
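If you want intuition for how a noisy process produces Peters-and-Ceci-style reversals, a toy simulation does the trick. This is not their study design; the quality distribution, noise level, and acceptance cutoff below are all invented.

```python
# Toy model of noisy peer review: a decision is true quality plus
# reviewer noise, checked against a selective cutoff.
import random

random.seed(0)
N_PAPERS = 10_000

def review(quality):
    """One review decision; the cutoff is set so that roughly 10% of
    submissions clear the bar."""
    return quality + random.gauss(0, 1.0) > 1.8

papers = [random.gauss(0, 1.0) for _ in range(N_PAPERS)]
first = [review(q) for q in papers]    # initial submission
second = [review(q) for q in papers]   # hypothetical resubmission

rejected = [i for i, accepted in enumerate(first) if not accepted]
flipped = sum(second[i] for i in rejected)
print(f"rejected papers accepted on resubmission: "
      f"{flipped / len(rejected):.0%}")
# Well above zero: when reviewer noise rivals the true quality signal,
# a second draw rescues a nontrivial share of rejected papers.
```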

This brings me to Warren’s third point – a complaint about people who submit poorly developed papers. He suggests that there are job pressures and a lack of training. On the training point, there is nothing to back up his assertion. Most social science programs have a fairly standard sequence of quantitative methods courses. The basic issues regarding causation v. description, identification, and assessment of instrument quality are all pretty easy to learn. Every year, the ICPSR offers all kinds of training. Training we have, in spades.

On the jobs point, I would like to place some blame on people like Professor Warren and his colleagues on hiring and promotion committees (which include me!!). The job market for the better positions in sociology (R1 jobs and competitive liberal arts schools) has essentially been reduced to two criteria: whether you got into the top journals while in graduate school, plus the reputation of your graduate program.

I'd suggest we simply think about the incentives here. Junior scholars live in a world where a lot of weight is placed on a very small number of journals. They also live in a world where journal acceptance is random. They also live in a world where journals routinely lose papers, reject after multiple R&R rounds, and take years (!) to respond (see my journal horror stories post). How would any rational person respond to this environment? Answer: just send out a lot of stuff until something hits. There is no incentive to develop a paper well if it will be randomly rejected after sitting at the journal for 16 months.
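The incentive arithmetic here is easy to make concrete. With made-up but plausible numbers, a high-volume strategy beats a high-polish one even when each rushed paper has far lower odds of acceptance:

```python
# Expected top-journal hits under two submission strategies. All
# probabilities and time costs are invented for illustration.
YEARS = 6   # hypothetical tenure clock

strategies = {
    "polish": {"papers_per_year": 1, "p_accept": 0.25},
    "volume": {"papers_per_year": 4, "p_accept": 0.10},
}

for name, s in strategies.items():
    hits = YEARS * s["papers_per_year"] * s["p_accept"]
    print(f"{name}: expected hits over {YEARS} years = {hits:.1f}")
# polish: 1.5, volume: 2.4. Quantity wins even at far lower
# per-paper odds, which is exactly the incentive problem.
```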

This is why I openly praise and encourage reforms of the journal system. I praise “platform” publishing like PLoS One. I praise “up or down” curated publishing, like Sociological Science. I praise Socius, the open access ASA journal. I praise socArxiv for creating an open pre-print portal. I praise editors who speed the review process and I praise multiple submissions practices. The basic issues that Professor Warren discusses are real. But the problem isn’t training or stressed out junior scholars. The problem is the archaic journal system. Let’s make it better.

50+ chapters of grad skool advice goodness: Grad Skool Rulz ($2!!!!)/From Black Power/Party in the Street

Written by fabiorojas

August 9, 2016 at 12:01 am

rob warren’s harsh critique of the submissions he got at soe

If you don't get the Sociology of Education newsletter, or even if you do and just don't read it, you probably didn't see Rob Warren's pretty devastating criticism of the submissions he usually got when he was the editor of Sociology of Education. As a junior scholar who has sent out my own share of not-quite-formed papers, I think his points are well taken, and my hunch (and what I've heard from editors) is that these complaints extend to other major journals as well. Read the whole thing at his website, but here's a sample:

Most of the papers that I read had one or both of two basic problems:

First, a large percentage of papers had fundamental research design flaws. Basic methodological problems—of the sort that ought to earn a graduate student a B- in their first-year research methods course—were fairly common. (More surprising to me, by the way, was how frequently reviewers seemed not to notice such problems.) I’m not talking here about trivial errors or minor weaknesses in research designs; no research is perfect. I’m talking about problems that undermined the author’s basic conclusions. Some of these problems were fixable, but many were not.

Second, and more surprising to me: Most papers simply lacked a soul—a compelling and well-articulated reason to exist. The world (including the world of education) faces an extraordinary number of problems, challenges, dilemmas, and even mysteries.  Yet most papers failed to make a good case for why they were necessary. Many analyses were not well motivated or informed by existing theory, evidence, or debates. Many authors took for granted that readers would see the importance of their chosen topic, and failed to connect their work to related issues, ideas, or discussions. Over and over again, I kept asking myself (and reviewers also often asked): So what?

Written by jeffguhin

August 5, 2016 at 3:41 am

Posted in academia


why i don’t study latino sociology

I am often asked: why don’t I study Latinos? Answer: I don’t have any good ideas. That’s it. If I had a great research idea, I would do it. Let me tell you, I would definitely do it and it would be YUGE.

Still, the question is worth thinking about in more detail. One might say that the question is racist, but I don't think so. Normally, people like doing academic research about themselves. White people usually study white people, and Black people usually study Black people. It's not a hard and fast rule, but we shouldn't be surprised that American high schools teach American history instead of Albanian history. Thus, it's OK to ask why I am focused on out-groups.

Another way to think about the question is why I haven't spent the time and effort working on Latino communities in search of research questions. For example, I have been asked a few times why I wrote on Black Studies instead of Latino Studies. There, the answer is simple. For the book, I preferred a "large N" data set. There are hundreds of Black Studies programs, but only a few dozen Latino or Hispanic American Studies programs. There's no deeper systematic reason; it's just that I haven't found the right case to make the right argument.

The lesson here is that what you study can be an idiosyncratic mix of personal identity and opportunity. If I weren't interested in disciplines and higher education, I might well have arrived at a dissertation project that focused on Latinos. If my friend hadn't asked me to help out with an antiwar studies project, I might have chosen a different post-dissertation topic. Who knows? If someone has a great idea on Latinos and approaches me, I'd probably try to help them.

50+ chapters of grad skool advice goodness: Grad Skool Rulz ($2!!!!)/From Black Power/Party in the Street

Written by fabiorojas

August 4, 2016 at 12:36 am

the best footnote in matt desmond’s evicted

One of the nice things about summer is getting to read stuff you don't have to read. Matt Desmond's Evicted: Poverty and Profit in the American City was excellent, and deserves all the attention it has received. The sociology is largely implicit, but it is absolutely there, and Desmond paints a compelling portrait of flawed but comprehensible individuals caught in a web of exploitative institutions from which it is very, very hard to escape.

But you know the good stuff is always in the footnotes, right? And my favorite footnote is not about Lamar, the neighborhood father figure whose legs froze off when, high on crack, he passed out in an abandoned house; or Lorraine, who tries to find a little joy in her otherwise grinding poverty by spending her food stamps on lobster.

No, my favorite footnote, found on page 404, is about the Moving to Opportunity experiment, which I wrote about last year:

According to Google Scholar, there are more than 4,800 scholarly articles and books in which the phrase “Moving to Opportunity” appears in the text. This neighborhood relocation initiative designed to move families out of disadvantaged neighborhoods was a bold and important program—which served roughly 4,600 households. In other words, by now every family who benefited from Moving to Opportunity could have their own study in which their program was mentioned.

Ouch. Point very much taken.

Written by epopp

July 28, 2016 at 6:27 pm

tenure and promotion experiences among women of color

After completing a Ph.D., how to get a tenure-track position, secure tenure, and advance to full professor and beyond is not clear, particularly since multiple layers of bureaucracy (committees, department, division, school, and university board) have a say over a candidate's case. Despite written policies specifying the criteria and process for tenure and promotion, universities can interpret these policies in ways that advance or push out qualified candidates. Over at feministwire, Vilna Bashi Treitler shares her experiences with the tenure process at one university, where unofficial teaching evaluations were apparently used to justify a tenure vote:

In my case, I was unable to defend myself when someone at my tenure hearing read verbatim from RateMyProfessor.com, a popular website where anyone can write anything about any professor in the country. The review reported me for “abandoning” my class. My colleagues discussed my case without reference to the medical emergency that pulled me from class: I lay, pregnant and bleeding, on doctor’s ordered bed rest, trying to save my baby. My colleagues failed to consider the testimonies of graduate students who taught the four class sessions that remained in the semester – at my own expense – or the fact that my website showed evidence that classes continued (with the aid of graduate students) and I distributed handouts online, despite my forced absence.

Perhaps most frustrating, it did not appear to matter to my colleagues that I had several peer-reviewed articles published in top journals, a book already published with a top-tier university press, a grant from the National Science Foundation for a new project, and mostly good reviews from students up until that time. This happened 10 years ago, and despite the opposition, I survived and succeeded in the academy. However, I never stopped facing challenges from white students who – despite signing up for my course, which at no time was ever a requirement – resist what I have to teach them, and in some cases, treat me with open disrespect.

Having served with Vilna on a committee overseeing dissertation proposals at the Graduate Center, CUNY, and having spent time discussing pedagogy with her, I can attest that she is very invested in students' learning, no matter how difficult the topic. In sociology and related disciplines, we teach and discuss complex topics – inequality, discrimination, and the various -isms – that can challenge or even threaten people's worldviews. The individualistic emphasis in the US makes it especially difficult to convey alternative ways of thinking.

Vilna’s post includes several recommendations for the academy.  In particular, she urges colleagues who have power to act on behalf of those who do not:

We must stand behind the promises we made to young faculty when we hired them: if you produce high quality scholarship, we will award you the tenure you need to continue conducting cutting edge research. Any scholar who makes the grade with notable and widely accepted peer-reviewed scholarship should not have their fates sealed in closed-door processes with little transparency or overt accountability where the complaints of a relatively tiny number of students – of course, students have never published research or taught courses themselves – are given undue weight. (Of course, bad teaching should not be rewarded, but we have other ways to assess teaching, including examining syllabi, having faculty regularly observed by peer scholars, and creating and encouraging the use of teaching centers where new scholar-teachers can seek aid in improving their classroom skills.)

Faculty who serve on committees that make these decisions know when injustice is being committed, and the time is now to take a stand. Standing up to proceedings that negate principles of both academic freedom and honor among colleagues and that allow racism and sexism to decide who is a quality scholar is risky and requires courage, but is nevertheless necessary. It is difficult to ask questions aloud about what’s not happening when a colleague looks like they’re being railroaded. If you stand up, you effectively become a whistleblower, for which there might be retaliation – but if you’re tenured, that’s exactly what tenure is for: protection from punishment for following through on ideas that may be unpopular. So when the tide turns against a junior colleague in your department or university, the difficult but morally right thing to do would be to take a bold step to stand up and at minimum question why.

And standing up takes many forms. When the conversation turns towards student complaints about a professor, inform your colleagues that student evaluations have gender and race biases (see here, and here, too). Find out if the professor has good evaluations that are being ignored or downplayed. Ask whether colleagues are overlooking other evidence of good teaching, like positive peer observations, or syllabi chock full of information about assignments, how grades are determined, and classroom policies. Professors who stand up must ask about the rest of the scholarly record: are we talking about the teaching of a highly productive scholar who has a publishing record and is a good departmental and college/university citizen? Maybe you should ask whether those things should matter more than evaluations – especially if you know this is what junior faculty are told when informed of the requirements for tenure.

Standing up also looks like administrators who overturn or challenge insufficiently explained tenure denials for stellar candidate records, being mindful of institutional commitments to inclusion and diversity. In addition, professors who become aware that injustice is occurring should reach out to administrators and encourage them to do the right thing.

Vilna’s insightful post includes links to several other scholars’ tenure denial experiences in the academy, as well as additional recommendations on working with students.


Written by katherinechen

July 27, 2016 at 5:25 pm