orgtheory.net

Archive for the ‘academia’ Category

scary stuff, continued: researcher misconduct

On All Hallow’s Eve, it’s time to flip off the lights, take out the flashlight to illuminate our faces from below, and swap scary stories.  For some of us, the scary stuff lurks constantly, just around the corner where we work: the unfinished thesis, courtesy of Academia Obscura:

[Image: an “unfinished thesis” pumpkin, via Academia Obscura]

But far worse horrors exist – namely, researcher misconduct. While the most obvious cases of falsified data and plagiarized writing are caught and retracted, subtler cases are unfortunately harder to pursue, prove, and punish.

Are these cases of sociopathic scholars, or of scholars made sociopathic by the system? At least two civil engineers worry that the current academic incentive system could tip scholars toward misconduct. In their abstract, authors Marc A. Edwards and Siddhartha Roy warn that the increasingly competitive race for funding in STEM could ruin scholarship.

Over the last 50 years, we argue that incentives for academic scientists have become increasingly perverse in terms of competition for research funding, development of quantitative metrics to measure performance, and a changing business model for higher education itself. Furthermore, decreased discretionary funding at the federal and state level is creating a hypercompetitive environment between government agencies (e.g., EPA, NIH, CDC), for scientists in these agencies, and for academics seeking funding from all sources—the combination of perverse incentives and decreased funding increases pressures that can lead to unethical behavior. If a critical mass of scientists become untrustworthy, a tipping point is possible in which the scientific enterprise itself becomes inherently corrupt and public trust is lost, risking a new dark age with devastating consequences to humanity. Academia and federal agencies should better support science as a public good, and incentivize altruistic and ethical outcomes, while de-emphasizing output.

In particular, Edwards and Roy single out three conditions as problematic at the site where much research is conducted – the university:

Recently, however, an emphasis on quantitative performance metrics (Van Noorden, 2010), increased competition for static or reduced federal research funding (e.g., NIH, NSF, and EPA), and a steady shift toward operating public universities on a private business model (Plerou, et al., 1999; Brownlee, 2014; Kasperkevic, 2014) are creating an increasingly perverse academic culture.

They note that academics fear misconduct is becoming widespread, as opposed to a matter of isolated instances:

Ultimately, the well-intentioned use of quantitative metrics may create inequities and outcomes worse than the systems they replaced. Specifically, if rewards are disproportionally given to individuals manipulating their metrics, problems of the old subjective paradigms (e.g., old-boys’ networks) may be tame by comparison. In a 2010 survey, 71% of respondents stated that they feared colleagues can “game” or “cheat” their way into better evaluations at their institutions (Abbott, 2010), demonstrating that scientists are acutely attuned to the possibility of abuses in the current system.

They also worry that people attracted to academia for altruistic reasons will be turned off by the perverse incentives and exit the system for careers and workplaces that are more consistent with their values and goals:

While there is virtually no research exploring the impact of perverse incentives on scientific productivity, most in academia would acknowledge a collective shift in our behavior over the years (Table 1), emphasizing quantity at the expense of quality. This issue may be especially troubling for attracting and retaining altruistically minded students, particularly women and underrepresented minorities (WURM), in STEM research careers. Because modern scientific careers are perceived as focusing on “the individual scientist and individual achievement” rather than altruistic goals (Thoman et al., 2014), and WURM students tend to be attracted toward STEM fields for altruistic motives, including serving society and one’s community (Diekman et al., 2010, Thoman et al., 2014), many leave STEM to seek careers and work that is more in keeping with their values (e.g., Diekman et al., 2010; Gibbs and Griffin, 2013; Campbell, et al., 2014).

Under the subheading “If nothing is done, we will create a corrupt academic culture,” the authors warn about the collapse of the academic commons.  At the end of their paper, they offer several possibilities for starting to address these issues, including research into the dimensions of research misconduct, a more explicit discussion of values, and a reconfiguration of incentives:

  1. The scope of the problem must be better understood, by systematically mining the experiences and perceptions held by academics in STEM fields, through a comprehensive survey of high-achieving graduate students and researchers.

  2. The National Science Foundation should commission a panel of economists and social scientists with expertise in perverse incentives, to collect and review input from all levels of academia, including retired National Academy members and distinguished STEM scholars. The panel could also develop a list of “best practices” to guide evaluation of candidates for hiring and promotion, from a long-term perspective of promoting science in the public interest and for the public good, and maintain academia as a desirable career path for altruistic ethical actors.

  3. Rather than pretending that the problem of research misconduct does not exist, science and engineering students should receive instruction on these subjects at both the undergraduate and graduate levels. Instruction should include a review of real world pressures, incentives, and stresses that can increase the likelihood of research misconduct.

  4. Beyond conventional goals of achieving quantitative metrics, a PhD program should also be viewed as an exercise in building character, with some emphasis on the ideal of practicing science as service to humanity (Huber, 2014).

  5. Universities need to reduce perverse incentives and uphold research misconduct policies that discourage unethical behavior.

Written by katherinechen

October 31, 2016 at 4:48 pm

should you publish in PLoS One?

My response to this question on Facebook:

  1. Do not publish in PLoS if you need a status boost for the job market or promotion.
  2. Do publish if journal prestige is not a factor. My case: good result, but I was in a race against other computer scientists. Simply could not wait for a four-year journal process.
  3. Reviewer quality: It is run mainly by physical and life scientists. Reviews for my paper were similar in quality to what CS people gave me on a similar paper submitted to CS conferences/journals.
  4. Personally, I was satisfied. Review process fair, fast publication, high citation count. Would not try to get promoted on the paper by itself, though.
  5. A lot of people at strong programs have PLoS One pubs but usually as part of a larger portfolio of work.
  6. A typical good paper in PLoS One comes from a strong line of work, but the paper bounced around elsewhere or was too idiosyncratic.
  7. PLoS One publishes some garbage.
  8. Summary: right tool for the right job. Use wisely.

Another person noted that many elite scientists use the “Science, Nature, or PLoS One model.” In other words, you either want high impact or you just get it out there. No sense wasting years on lesser journals.

50+ chapters of grad skool advice goodness: Grad Skool Rulz ($2!!!!)/From Black Power/Party in the Street 

Written by fabiorojas

October 11, 2016 at 12:13 am

does making one’s scholarly mark mean transplanting the shoulders of giants elsewhere?

The Society for the Advancement of Socio-Economics (SASE) website has made Neil Fligstein’s PowerPoint slides on the history of economic sociology available for general viewing as a PDF. (Here are the slides in PowerPoint form: 1469704310_imagining_economic_sociology_-socio-economics-fligstein) It’s a fascinating read on the development of a sub-field across continents, and it also includes discussion of a challenge that some believe plagues the sociology discipline:

Both Max Weber and Thomas Kuhn recognized that Sociology as a discipline might be doomed to never cumulate knowledge.

  • Sociology would proceed as a set of research projects which reflected the current concerns and interests of a small set of scholars
  • When the group hit a dead end in producing novel results, the research program would die out only to be replaced by another one
  • Progress in economic sociology is likely to be made by putting our research programs into dialogue with one another to make sense of how the various mechanisms that structure markets interact
  • Failure to do so risks fragmenting the field into ever smaller pieces and remaining subject to fashion and fad

Fligstein attributes these field-fragmenting tendencies to the current structure of the academic field. He depicts sociology as rewarding scholars for applying ideas from one area to another where current theorizing is insufficient, rather than for expanding existing research:

  • … the idea is not to work on the edge of some mature existing research program with the goal of expanding it
  • But instead, one should be on the lookout for new ideas from different research programs to borrow to make sense for what should be done next

In short, scholars tend to form intellectual islands where they can commune with other like-minded scholars.  Bridging paths to other islands can generate rewards, but the efforts needed to disseminate knowledge more widely – even within a discipline – can exceed any one person’s capacity.


Written by katherinechen

October 10, 2016 at 6:30 pm

putting limits on the academic workday

Today, amid the various administrative tasks of scheduling meetings with students and handling other responsibilities, I decided to RSVP yes to an upcoming evening talk. I didn’t make this decision lightly, as it involved coordinating schedules with another party (i.e., a fellow dual-career parent).

With the use of technology such as email, increasing job precarity, and the belief that facetime signals productivity and commitment, the workday in the US has elongated, blurring boundaries to the point that work can crowd out other responsibilities to family, community, hobbies, and self-care. However, one Ivy League institution is exhorting its members to rethink making evening events and meetings the norm.

In this nuanced statement issued to department chairs, Brown University’s provost outlines the stakes and consequences of an elongated workday:

This burden [of juggling work and family commitments] may disproportionately affect female faculty members. Although data on Brown’s faculty is not available, national statistics indicate that male faculty members (of every rank) are more likely than female faculty members (of every rank) to have a spouse or partner whose comparably flexible work schedule allows that spouse or partner to handle the bulk of evening-time household responsibilities. Put differently, male faculty members are more likely than female faculty members to have the household support to attend campus events after 5:30. We must be attuned to issues of gender equity when we think about program scheduling. We must also take into consideration the particular challenges faced by single parents on the faculty when required to attend events outside the regular hours of childcare.


Written by katherinechen

September 28, 2016 at 7:16 pm

bad reporting on bad science

This Guardian piece about bad incentives in science was getting a lot of Twitter mileage yesterday. “Cut-throat academia leads to natural selection of bad science,” the headline screams.

The article is reporting on a new paper by Paul Smaldino and Richard McElreath, and features quotes from the authors like, “As long as the incentives are in place that reward publishing novel, surprising results, often and in high-visibility journals above other, more nuanced aspects of science, shoddy practices that maximise one’s ability to do so will run rampant.”

Well. Can’t disagree with that.

But when I clicked through to read the journal article, the case didn’t seem nearly so strong. The article has two parts. The first is a review of review pieces published between 1962 and 2013 that examined the levels of statistical power reported in studies in a variety of academic fields. The second is a formal model of an evolutionary process through which incentives for publication quantity will drive the spread of low-quality methods (such as underpowered studies) that increase both productivity and the likelihood of false positives.

The formal model is kind of interesting, but just shows that the dynamics are plausible — something I (and everyone else in academia) was already pretty much convinced of. The headlines are really based on the first part of the paper, which purports to show that statistical power in the social and behavioral sciences hasn’t increased over the last fifty-plus years, despite repeated calls for it to do so.
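
To make the flavor of that result concrete, here is a minimal caricature in Python – emphatically not Smaldino and McElreath’s actual model; the payoff function and all parameters are invented for illustration. If low-powered studies are cheaper to run, false positives publish as readily as true positives, and productive labs get imitated, selection drives power down:

```python
import random

# Toy selection dynamic (illustrative only, not the paper's model):
# labs running low-powered studies run MORE studies per unit effort,
# and positives (true or false) are what get published.

N_LABS, GENERATIONS = 100, 200
BASE_RATE = 0.1   # share of tested hypotheses that are actually true
ALPHA = 0.05      # false-positive rate of a single test

def publications(power):
    """Expected publications per unit effort at a given statistical power."""
    studies = 1.0 / power  # lower power -> cheaper studies -> more of them
    hit_rate = BASE_RATE * power + (1 - BASE_RATE) * ALPHA
    return studies * hit_rate

labs = [random.uniform(0.1, 0.9) for _ in range(N_LABS)]  # initial power levels

for _ in range(GENERATIONS):
    new_labs = []
    for _ in range(N_LABS):
        # Imitation: copy the more productive of two random labs,
        # with a little "mutation" in methods.
        a, b = random.sample(labs, 2)
        winner = a if publications(a) > publications(b) else b
        new_labs.append(min(0.95, max(0.05, winner + random.gauss(0, 0.02))))
    labs = new_labs

print(f"mean power after selection: {sum(labs) / len(labs):.2f}")
```

Run it and mean power collapses toward the 0.05 floor, because publications() is strictly decreasing in power – the dynamic the paper formalizes, with no individual bad faith required.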

Well, that part of the paper basically looks at all the papers that reviewed levels of statistical power in studies in a particular field, focusing especially on papers that reported small effect sizes. (The logic is that such small effects are not only most common in these fields, but also more likely to be false positives resulting from inadequate power.) There were 44 such reviews. The key point is that average reported statistical power has stayed stubbornly flat. The conclusion the authors draw is that bad methods are crowding out good ones, even though we know better, through some combination of poor incentives and selection that rewards researcher ignorance.
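
For readers rusty on the terminology: a study’s power is the probability it detects an effect that is actually there, and 0.8 is the conventional target. A quick back-of-the-envelope in Python (a normal approximation to the two-sample t-test; the effect and sample sizes are illustrative, not taken from the paper) shows how badly small effects fare at typical sample sizes:

```python
from scipy.stats import norm

def power_two_sample(d, n_per_group, alpha=0.05):
    """Approximate power of a two-sided two-sample test for standardized
    effect size d, using the normal approximation to the t-test."""
    z_crit = norm.ppf(1 - alpha / 2)
    ncp = d * (n_per_group / 2) ** 0.5  # noncentrality under the alternative
    return norm.cdf(ncp - z_crit) + norm.cdf(-ncp - z_crit)

# A "small" effect (Cohen's d = 0.2) with 30 subjects per group:
print(f"{power_two_sample(0.2, 30):.2f}")  # ~0.12, far below the 0.8 convention
# Reaching 0.8 at d = 0.2 takes roughly 393 subjects per group.
```

At power 0.12, most real small effects go undetected, and a larger share of the significant results that do get published will be false positives – which is why flat power curves over five decades are worth worrying about.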

[Figure: average statistical power reported in each of the 44 reviews, plotted over time – essentially flat, with a single positive outlier at 0.55.]

The problem is that the evidence presented in the paper is hardly strong support for this claim. This is not a random sample of papers in these fields, or anything like it. Nor is there other evidence to show that the reviewed papers are representative of papers in their fields more generally.

More damningly, though, the fields that are reviewed change rather dramatically over time. Nine of the first eleven studies (those before 1975) review papers from education or communications. The last eleven (those after 1995) include four from aviation, two from neuroscience, and one each from health psychology, software engineering, behavioral ecology, international business, and social and personality psychology. Why would we think that underpowering in the latter fields at all reflects what’s going on in the former fields in the last two decades? Maybe they’ve remained underpowered, maybe they haven’t. But statistical cultures across disciplines are wildly different. You just can’t generalize like that.

The news article goes on to paraphrase one of the authors as saying that “[s]ociology, economics, climate science and ecology” (in addition to psychology and biomedical science) are “other areas likely to be vulnerable to the propagation of bad practice.” But while these fields are singled out as particularly bad news, not one of the reviews covers the latter three fields (perhaps that’s why the phrasing is “other areas likely”?). And sociology, which had a single review in 1974, looks, ironically, surprisingly good — it’s that positive outlier in the graph above at 0.55. Guess that’s one benefit of using lots of secondary data and few experiments.

The killer is, I think the authors are pointing to a real and important problem here. I absolutely buy that the incentives are there to publish more — and equally important, cheaply — and that this undermines the quality of academic work. And I think that reviewing the reviews of statistical power, as this paper does, is worth doing, even if the fields being reviewed aren’t consistent over time. It’s also hard to untangle whether the authors actually said things that oversold the research or if the Guardian just reported it that way.

But at least in the way it’s covered here, this looks like a model of bad scientific practice, all right. Just not the kind of model that was intended.

[Edited: Smaldino points on Twitter to another paper that offers additional support for the claim that power hasn’t increased in psychology and cognitive neuroscience, at least.]

Written by epopp

September 22, 2016 at 12:28 pm

how ‘who you worked with’ doesn’t work

“Who did you work with?” It’s a question applicants get all the time on the job market, and for good reason: if you know someone’s dissertation advisor, you probably know a bit more about how that person works, the kinds of questions they study, the sorts of methods and theory they use. Of course, the American job market doesn’t want to hire disciples, so a student can’t be too close to the teacher: there has to be some difference, theoretical or methodological, creative or substantive. Yet there needs to be some commonality too, if nothing more than a commonality of competence. Knowing someone’s mentor has chops is supposed to translate into knowing that person has chops as well. Those of us who have grad students have them because we believe that being a good sociologist can be taught, and that skills—even excellence—can be reproduced. And so status moves forward, down the genealogical line: begat, begat, begat.

There are a lot of problems with this focus on individual-level mentoring, with an advisor’s status functioning as a high-level credential (e.g., “Oh, X is solid. She worked with Y”). First off, the sociology of it is not at all obvious to me. It’s simply an empirical question how much a mentor matters in the formation of good scholars: we are, after all, a big, wide community, and why can’t it take a village to raise a sociologist? That “village” might extend beyond a graduate school to the discipline as a whole: think of a student at a low-prestige grad program whose mentor is neither well-known nor super invested in the discipline (or in grad students). Yet this grad student reads widely, attends conferences, and networks assertively, finding other people to read her work. She doesn’t have the currency of a high-status mentor, but she might well have some good publications. It’s telling how much the status of the mentor (and of her university) still matters when I think about that student’s chances on the market.

It’s nothing new to say that it’s deeply ironic how a discipline so committed to fighting inequality in the world at large maintains such deep inequalities in its own house. And a focus on mentors as tokens of worth, while important and understandable, can play a significant role in maintaining those inequalities. What if, instead of thinking of ourselves as a guild of masters and apprentices, we thought of ourselves as a community of practitioners, eager to help everyone get better at what they do? I have no idea how that would work out practically, but it’s worth thinking about, and I’ll write more about this. If you have any thoughts, please do let me know in the comments or over e-mail.


Written by jeffguhin

September 7, 2016 at 6:36 pm


agreements and disagreements with rob warren

Rob Warren, of the University of Minnesota, wrote some very engaging and insightful comments about his time as the editor of Sociology of Education. Jeff Guhin covered this last week. Here, I’ll add my own comments. First, a strong nod of agreement:

First, a large percentage of papers had fundamental research design flaws. Basic methodological problems—of the sort that ought to earn a graduate student a B- in their first-year research methods course—were fairly common. (More surprising to me, by the way, was how frequently reviewers seemed not to notice such problems.) I’m not talking here about trivial errors or minor weaknesses in research designs; no research is perfect. I’m talking about problems that undermined the author’s basic conclusions. Some of these problems were fixable, but many were not.

Yes. Professor Warren is correct. Once you are an editor, or simply an older scholar who has read a lot of journal submissions, you quickly realize that there are a lot of papers that really, really flub research methods 101. For example, a lot of papers rely on convenience samples, which lead to biased results. Warren has more on this issue.

Now, let me get to where I think Warren is incorrect:

Second, and more surprising to me: Most papers simply lacked a soul—a compelling and well-articulated reason to exist. The world (including the world of education) faces an extraordinary number of problems, challenges, dilemmas, and even mysteries.  Yet most papers failed to make a good case for why they were necessary. Many analyses were not well motivated or informed by existing theory, evidence, or debates. Many authors took for granted that readers would see the importance of their chosen topic, and failed to connect their work to related issues, ideas, or discussions. Over and over again, I kept asking myself (and reviewers also often asked): So what?

About five years ago, I used to think this way. Now, I’ve mellowed and come to a more open-minded view. Why? In the past, I rejected a fair number of papers on “framing” grounds. Later, I would see them published in other journals, often with high impact. Also, in my own career, leading journals have rejected my work on “framing” grounds, and when it got published in another leading journal, the work got cited. The framing wasn’t that bad. Lesson? A lot of complaints about framing are actually arbitrary. Instead, let the work get published and let the wider community decide, not the editor and a few peer reviewers.

The evidence on the reliability of peer review suggests that there is a lot of randomness in the process. If some of these “soul-less” papers had been resubmitted a few months later, some would have been accepted with enthusiastic reviews. Here’s a 2006 review of the literature on journal reliability, and here’s the classic 1982 article showing that a lot of journal acceptance is indeed random. Ironically, Peters and Ceci (1982) note that “serious methodological flaws” were a common reason for rejecting papers – papers that had already been accepted!

This brings me to Warren’s third point – a complaint about people who submit poorly developed papers. He suggests the causes are job pressures and a lack of training. On the training point, there is nothing to back up his assertion. Most social science programs have a fairly standard sequence of quantitative methods courses. The basic issues regarding causation vs. description, identification, and assessment of instrument quality are all pretty easy to learn. Every year, the ICPSR offers all kinds of training. Training we have, in spades.

On the jobs point, I would lay the blame on people like Professor Warren and his colleagues on hiring and promotion committees (which include me!!). The job market for the better positions in sociology (R1 jobs and competitive liberal arts schools) has essentially come down to who gets into the top journals in graduate school, plus graduate program reputation.

I’d suggest we simply think about the incentives here. Junior scholars live in a world where a lot of weight is placed on a very small number of journals. They also live in a world where journal acceptance is random. They also live in a world where journals routinely lose papers, reject after multiple R&R rounds, and take years (!) to respond (see my journal horror stories post). How would any rational person respond to this environment? Answer: just send out a lot of stuff until something hits. There is no incentive to develop a paper well if it will be randomly rejected after sitting at the journal for 16 months.
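
A minimal sketch of that incentive logic (all numbers invented for illustration): if review is noisy enough that careful development only modestly lifts a paper’s acceptance odds, volume dominates polish.

```python
import random

# Illustrative only: acceptance probabilities are made up. The point is
# mechanical -- under noisy review, more lottery tickets beat fewer,
# slightly better ones.

random.seed(1)
TRIALS = 100_000
ROUNDS = 3  # serial submissions each paper gets before the author gives up

def acceptances(n_papers, p_accept):
    """Total acceptances when each of n_papers gets ROUNDS tries."""
    return sum(
        any(random.random() < p_accept for _ in range(ROUNDS))
        for _ in range(n_papers)
    )

# One polished paper: noisy review means polish only lifts odds from 10% to 15%.
careful = sum(acceptances(1, 0.15) for _ in range(TRIALS)) / TRIALS
# Four rough papers written in the same amount of time.
volume = sum(acceptances(4, 0.10) for _ in range(TRIALS)) / TRIALS

print(f"careful strategy: {careful:.2f} expected acceptances")  # ~0.39
print(f"volume strategy:  {volume:.2f} expected acceptances")   # ~1.08
```

The gap only widens as review gets noisier – which is exactly the behavior the current system selects for.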

This is why I openly praise and encourage reforms of the journal system. I praise “platform” publishing like PLoS One. I praise “up or down” curated publishing, like Sociological Science. I praise Socius, the open-access ASA journal. I praise SocArXiv for creating an open pre-print portal. I praise editors who speed up the review process, and I praise multiple-submission practices. The basic issues that Professor Warren discusses are real. But the problem isn’t training or stressed-out junior scholars. The problem is the archaic journal system. Let’s make it better.

50+ chapters of grad skool advice goodness: Grad Skool Rulz ($2!!!!)/From Black Power/Party in the Street

Written by fabiorojas

August 9, 2016 at 12:01 am