orgtheory.net

asr review guidelines

In a totally commendable attempt to broaden the range of methods represented in ASR, the new editorial team is working to develop guidelines for reviewers of papers using ethnographic and interview methods, theory papers, and comparative-historical papers. The idea is that if reviewers, especially those who don’t write such papers themselves, are given a more explicit sense of what a “good” article in one of these areas looks like, they will be less likely to dismiss them on grounds borrowed inappropriately from another type of research.

Jenn Lena posted links on Twitter to guidelines for the first two, and the comparative-historical section is forming a committee to provide input on the last one.

Personally, I think this is a great idea. I don’t know if it will work, and I might have some quibbles around the margins (I think really great work can come from ethnographic sites basically chosen for the sake of convenience, and that systematicity of method in choosing who to talk to isn’t as important as working to check and cross-check emerging findings), but by and large, it’s an admirable effort. I particularly liked the openness to the descriptive contribution of ethnography. Causality is terrific, but not everything has to be causal.

The tough thing, I think, is that we all think of ASR as a certain kind of journal, and review submissions to it accordingly. I know I’ve probably reviewed pieces negatively for ASR that I would really have liked for another journal, just because they didn’t seem like ASR pieces. Moving the needle is hard when even people who should be friendly to a certain type of work see it as just “not fitting.” (Much like other kinds of social processes?) But it’s worth trying, and this seems like a useful step.

Your reactions?

Written by epopp

September 3, 2015 at 8:09 am

18 Responses

  1. I decided a while ago that I will no longer discuss whether or not an article should be published in my reviews. I just write about the merits. The decision about publishing goes to all the things about journals I don’t support, like journal ranking and print page limits. Which means “fit” with the journal is not my business as a reviewer – that’s on the editors.

    Philip N. Cohen

    September 3, 2015 at 11:15 am

  2. I think it’s hard to hold a single epistemological standard across journals. Not in a better-worse way, but in a “what constitutes a meaningful contribution to knowledge” way. Maybe this is different for a demographer; I’m not sure. But my work is mostly small-n comparative. When I review similar work for ASR I expect it to have certain trappings that I might not care about for another journal. Even one that is fairly quantitative but that I perceive as more “open,” like, say, Socio-Economic Review. I don’t necessarily think the trappings actually make it better. But they feel necessary for ASR.

    Another way to put it is that for ASR my gut is to emphasize systematicity, while for another outlet I might prioritize insight and interestingness. And really I might prefer the more interesting, insightful article.

    epopp

    September 3, 2015 at 12:20 pm

  3. I could see why you’d take that view as ASR editor, but it doesn’t seem like the job of a reviewer. The peer review is about the merits and quality of the work – as I see it anyway. When we get reviews for Contexts – which has very idiosyncratic editorial taste – I don’t want to hear reviewers comment on the suitability of a paper for the journal, I want to hear whether it’s good (true, new, important, etc.).

    Philip N. Cohen

    September 3, 2015 at 1:39 pm

  4. I think this is seriously wrong-headed and the editors will find that these rules (and the tenor of the times) make reviewers MORE skeptical of ethnographic, historical, and comparative papers. Why are there no rules for quantitative analyses (e.g., about data archiving, transparency in measurement, etc.)? Reviewers get dazzled with cutting-edge quantitative methods and let all sorts of corner-cutting and mystification slide.

    TimBartley

    September 3, 2015 at 1:40 pm

  5. They’re soliciting similar guidelines for experimental research. If anyone is interested in contributing, I believe Alison Bianchi (Iowa) is organizing the effort (alongside a number of talented and experienced experimentalists).

    jessica

    September 3, 2015 at 1:41 pm

  6. Here’s my suggested version (in its entirety): At ASR we are committed to showcasing the full range of work produced by sociologists today. This includes papers that rely on observational, ethnographic, and other “qualitative” methods as their main (or only) source of empirical evidence. We realize that some reviewers may be tempted to equate methodological rigor with attempts to identify (potentially) causal net effects in quantitative models, but we believe this is insufficient. Sociology is distinguished by its methodological pluralism, and reviewers should in all cases strive to evaluate the fit between argument and evidence and note potential problems with this fit in their reviews, regardless of the data or method employed.

    TimBartley

    September 3, 2015 at 1:55 pm

  7. I had not seen the qual- and theory-specific guidelines, but I recently read the general guidelines (https://docs.google.com/document/d/1GspfX5W5cv0jSTretyXThFpxpbN2cf1fhNJwoz1T5xg/pub) as part of a review and thought they were really strong. I have not been shy about the opinion that the main problem with peer review is the reviewers (https://codeandculture.wordpress.com/2013/11/18/youbrokepeerreview/) and this is a positive step for editors to try to reshape reviewer behavior. I especially like the bit in the theory guidelines about wrong versus disagree and the bit in the general guidelines about not straining to find issues if there really aren’t any.

    gabrielrossman

    September 3, 2015 at 3:48 pm

  8. I agree with Beth and Gabriel that having more specific reviewer guidelines is a good thing, although I have no opinion about the content of any given area’s guidelines. Many reviewers, even some who have been around for a long time, simply don’t know what is expected of them. They don’t know the difference between evaluating a paper and offering an opinion or preference. For example, I recently had a reviewer tell my coauthor and me, “I’m not a huge fan of [your modeling approach].” The reviewer didn’t offer a substantive critique of the approach or explain why another modeling approach was better, which, to my knowledge, it was not. The reviewer simply stated a preference. That’s an example of bad reviewing that I’d like to see stamped out of journals, and I think having stronger guidelines could help.

    brayden king

    September 3, 2015 at 5:37 pm

  9. The guidelines for theory papers seem okay and maybe even an improvement. But the guidelines for qualitative papers seem clearly to hold such papers to a substantially higher standard than other papers. There is a very long list of criteria, an inappropriate use of quantitative logic (sampling?), and an insistence that papers must address “theoretically relevant research questions of *discipline-wide interest, appeal, and substantive significance*.” Do quant papers really face this test? I don’t think so.

    Steven Vallas

    September 3, 2015 at 11:52 pm

  10. I’m glad to see some actual debate over this. I think there are two underlying questions: 1) to what extent do such guidelines help by spelling out appropriate criteria for evaluation vs. making regressions the default and requiring everything else to provide special justification (setting aside the question of whether certain criteria, like a focus on sampling, are themselves inappropriate), and 2) to what extent do reviewers use the guidelines to gauge what quality looks like in a method that is not their own, vs. turning them into a list of veto points that the aforementioned paper full of regressions might not have to get past.

    Or who knows, maybe reviewers will just ignore all the guidelines and do what they will.

    epopp

    September 4, 2015 at 1:19 am

  11. Here is the ENTIRE list of Sociological Science’s “publication criteria” for authors:

    1. The manuscript is an original piece of sociological scholarship that has not been published in whole or in part in another peer-reviewed outlet.
    2. The manuscript’s argument is well-structured and logically sound.
    3. The manuscript is written in clear, standard English.
    4. Any empirical evidence that is used to support the manuscript’s argument is based on data collection and analysis procedures that meet rigorous standards and that are reported in sufficient detail that a trained specialist could replicate the study.
    5. The manuscript, and the research on which it is based, meets all applicable ethical standards for research integrity.

    These leave a lot to reviewer discretion, which suggests that the editors’ most important job is to pick “good” reviewers. And then Brayden’s point comes into play: how well-trained/socialized/schooled are the reviewers?

    I agree with my old colleague and Philip: when I was an editor at ASQ, I deleted from reviewers’ comments to the authors any mention of “this should/should not be published.” That’s not a reviewer’s job.

    Howard Aldrich

    September 4, 2015 at 3:10 pm

  12. I do like having guidelines, though most reviewers ignore them. But they help clarify, for the journal itself, how it is thinking about work. The qualitative guidelines here remind me of what the NSF panel on qualitative research came up with years ago, which remains useful for qualitative researchers wondering how to get funding from NSF. As someone who uses quant, qual, and comp-hist methods, I believe that we all face methods policing, but the tenor that policing takes differs by journal.

    One of the central issues is that reviewers do not see the breadth of papers that come to the journal, or know how few can be accepted. The editor has so much more information than the reviewers, and generally, more information should lead to better decisions. At G&S, I always asked reviewers not to say what their recommendation was, and I also often had to delete it since they did anyway.

    joyamisra

    September 4, 2015 at 7:53 pm

  13. For the record, I was not suggesting that reviewers should be making publication recommendations in the review itself, and yes, editors should be the ones deciding whether a piece fits. My point was that there is not one universal standard of “good” and “good” is inevitably colored by what you’re reviewing for. To take Philip’s example, “good” for Contexts =/= “good” for ASR — they’re doing entirely different things. To pick examples that are less distant (but still pretty distant), “good” for JHSB =/= “good” for Social Studies of Science. I can imagine research in medical sociology that was publishable in both, but I can’t imagine that you’d review it identically for the two journals. (Okay, I’ve never reviewed for JHSB.)

    Do others really have one universal standard for evaluating work across all outlets? What if you review for an interdisciplinary journal?

    epopp

    September 4, 2015 at 9:27 pm

  14. This sentence gave me pause: “The idea is that if reviewers, especially those who don’t write such papers themselves, are given a more explicit sense of what a ‘good’ article in one of these areas looks like, they will be less likely to dismiss them on grounds borrowed inappropriately from another type of research.” Maybe I’m misunderstanding this, or maybe I’m hopelessly naive, but isn’t peer review the review of a paper you’ve written by others who write similar papers? When I get sent a paper to review that uses methods I’m not familiar with, I decline the invitation. If I do identify myself as a peer, then I don’t need special guidelines, I just need to apply my own standards. Is this a paper I would like to see published? Does this paper contribute to the field?

    Thomas B

    September 6, 2015 at 5:25 am

  15. Howard – just for clarification, Sociological Science doesn’t actually use reviewers in the traditional sense. Their decisions are made without a review process. The papers go to the editors, who decide whether they meet the criteria for publication. This is why they’re able to make decisions so quickly and without requiring much revision of the original argument/analysis. It also means that papers that are not as polished are quickly rejected at Soc Science.

    brayden king

    September 6, 2015 at 5:42 pm

  16. Brayden,

    That’s close, but not entirely right, and it partly depends on what you mean by “editors.” While Soc Science does rely more on the judgment of the editor and deputy editors (i.e., the board), we also solicit a lot of reviews from our consulting editors (i.e., a highly screened pool of reviewers who have committed to the journal’s mission).

    gabrielrossman

    September 6, 2015 at 6:29 pm

  17. Thomas, in sociology there seems to be a lot of cross-reviewing by people who use different methods. I would imagine that editors are careful to make sure someone using similar methods is among the reviewers, but my sense is that top journals often want people who use different methods to review a paper to ensure it has broad appeal to, and seems legitimate to, a range of sociologists. Of course, I note if there are methods used in a paper that I can’t fully evaluate and specify the limits of my review.

    epopp

    September 6, 2015 at 9:22 pm

  18. […] Review invited members of the Comparative-Historical Sociology Section to help develop a new set of review and evaluation guidelines. The ASR editors — including orgtheory’s own Omar Lizardo — hope that developing such […]


Comments are closed.