orgtheory.net

Archive for the ‘journals’ Category

sociology journal reviewing is dumb (except soc sci and contexts) and computer conference reviewing is the way to go. seriously.

This post is an argument for moving away from the current model of sociology journal reviewing and adopting the computer science model. Before I get into it, I offer some disclaimers:

  1. I do not claim that the CS conference system is more egalitarian or produces better reviews. Rather, my claim is that it is more efficient and better for science.
  2. Philip Cohen will often chime in and argue that journals should be abolished and we should just dispense with peer review. I agree, but I am a believer in intermediary steps.
  3. I do not claim that computer science lacks journals. Rather, that field treats journals as a secondary form of publication and most of the action happens in the conference proceeding format.
  4. Some journals are very well run – Sociological Science does live up to its promise, for example, as a no-nonsense place for publication. I am not claiming that every single journal is lame. Just most of them.

Let’s start. How do most sociology journals operate? It goes something like this:

  1. A scholarly organization or press appoints an editor, or a team, to run a journal.
  2. There is a limit on how many articles can be published. Top journals may accept only about 1 in 20 submitted articles. Many journals desk reject a proportion of the submissions.
  3. When you submit an article, the editors ask people to review the paper. There are deadlines, but they are routinely broken, and people vary wildly in the attention they give to papers.
  4. When the reviews are written, which can take as little as a few days but as long as a year or more, the editors make a judgment.
  5. Most papers with positive reviews that the editors like go through massive revisions.
  6. The paper is then reviewed again, completely from scratch and often with new reviewers.
  7. If the paper is accepted, the whole process takes as little as a semester but more often a year or two.

This system made sense in a world of limited resources. But it has many, many flaws. Let’s list them:

  1. Way too much power in the hands of editors. For example, I was told a day or two ago that a previous editor of a major journal simply desk rejected all papers using Twitter data. A while ago, another editor of a major journal decided she had had enough of health papers and started desk rejecting them as well. Maybe these choices are justified, maybe they aren’t.
  2. Awful, awful reviewer incentives. Basically, we beg cranky, overworked people to spend hours reading papers. Some people do a good job, but many are simply bad at it. Even when they try, they may not be the best people to read it.
  3. Massive time wasting. Basically, we have a system where it is normal for papers to bounce around the journal system *for years.*
  4. Bloated papers. Many of the major advances of science, in previous ages, were made in 5- to 10-page papers. Now, to head off reviewers, people write massive papers with tons of appendices.

Ok, if the system is lame, then what is the alternative? It is simple and very easy to do: move to the peer-reviewed conference system of computer science. How does that work?

  1. Set up a yearly conference.
  2. Like an editorial board, you recruit a pool of peer reviewers who commit to reviewing *before seeing the papers.* Every year, the conference has new “chairs,” who organize the pool.
  3. Set hard page/word limits. The submission system will not accept papers that are not in the right range.
  4. Once papers and abstracts are submitted, the reviewers *choose* which papers to review. People indicate how badly they want a paper, and you then allocate.
  5. Each paper has a “guide” who hounds reviewers and guides the conversation.
  6. Set hard deadlines. These will be followed (mostly) because there are serious consequences if they aren’t.
  7. Papers can then be ranked in terms of reviews, and the conference chairs have final say. Papers do not have to be perfect or make everyone happy. They just have to be in the top X% of papers.
  8. CS proceedings sometimes allow discussion between reviewers, which can clarify issues.
  9. Some conferences allow an “R&R” stage. If the paper’s authors think they can respond to reviews, they can submit a “rebuttal.”
  10. In any case, accepted or revised papers also have to stay under the limit and must be submitted by a hard deadline.
  11. From submission to acceptance might be 3 months, tops. And this applies to all papers.
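
For what it’s worth, the bidding, allocation, and top-X% steps above are simple enough to sketch in code. This is a toy illustration, not how EasyChair or any real conference actually allocates; the function names, bid scale, reviewer loads, and acceptance cutoff are all made up:

```python
# Toy sketch of conference-style bidding, allocation, and ranking.
# Real systems use richer matching algorithms; this greedy version
# just conveys the workflow.

def allocate(bids, per_paper=3, max_load=5):
    """bids: {reviewer: {paper: bid_strength}}; higher = wants it more.
    Returns {paper: [reviewers]}, capping each reviewer's total load."""
    papers = sorted({p for prefs in bids.values() for p in prefs})
    load = {r: 0 for r in bids}
    assignment = {p: [] for p in papers}
    for paper in papers:
        # Reviewers who bid on this paper, strongest bid first.
        ranked = sorted((r for r in bids if paper in bids[r]),
                        key=lambda r: -bids[r][paper])
        for r in ranked:
            if len(assignment[paper]) == per_paper:
                break
            if load[r] < max_load:
                assignment[paper].append(r)
                load[r] += 1
    return assignment

def accept(scores, top_fraction=0.25):
    """scores: {paper: [review scores]}; accept the top X% by mean score."""
    ranked = sorted(scores, key=lambda p: -sum(scores[p]) / len(scores[p]))
    k = max(1, round(len(ranked) * top_fraction))
    return ranked[:k]
```

The point of the sketch is the incentive structure, not the algorithm: reviewers only ever see papers they bid on, and acceptance is relative (top X%), not a quest for a perfect paper.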

Let’s review how this system is superior to the traditional journal system:

  1. Speed: a paper that may take 2-3 years to find a home in the sociology system takes about one or two semesters in this system. The reason is that the process concludes quickly for every single paper, and there are usually multiple conferences you can try.
  2. Lack of editorial monopoly: The reviewers and chairs rotate every conference, so if you think you just got a bad draw, just try again next year.
  3. Conversation: In the CS conference software (easychair.org), reviewers can actually talk to each other to clarify what they think.
  4. (Slightly) Better Reviews: People can choose which papers to review, which means you are way more likely to get someone who cares. Unlike the current system, papers don’t get orphaned and you are more likely to get someone invested in the process.
  5. Hard page limits: No bloated papers or response memos. It is tightly controlled.

The system is obviously faster. You get the same variety of good and bad reviews, but it is way, way faster. Papers don’t get orphaned or forgotten at journals, and all reviews conclude within about 2 months. Specific editors no longer matter, and single gatekeepers don’t bottleneck the system. It is better for science because more papers get out faster.

Rise up – what do you have to lose except your bloated R&Rs?

50+ chapters of grad skool advice goodness: Grad Skool Rulz ($4.44 – cheap!!!!)/Theory for the Working Sociologist (discount code: ROJAS – 30% off!!)/From Black Power/Party in the Street / Read Contexts Magazine– It’s Awesome!

 

Written by fabiorojas

December 6, 2017 at 5:01 am

response to gelman on what retraction does and does not do

In our recent discussion about retraction, Andrew Gelman wrote the following:

I’m on record as saying that retraction is not much of a solution to anything given that it’s performed so rarely.

So I agree with you, I guess, and I’d probably go further and say that we can’t realistically expect papers that are fraudulent or fatally erroneous to be retracted. Again, the problem is that there are so many papers that are fraudulent or fatally erroneous that most of them aren’t gonna get retracted anyway.

We have to get away from the whole idea that, just cos a paper is published in a serious journal (even a top journal), that it’s correct or even reasonable. Top journals regularly publish crap. They publish good stuff too, but they also publish a lot of crap. And, to the extent that retraction is a way to “protect the brand,” I’m against it.

This comment made me think about the problem with litigation – while it may help the plaintiff achieve an outcome, it rarely solves any broader problem. This is because taking people to court is a lengthy, expensive and inefficient process. Retraction is really similar. It is simply not a tool meant for more systematic monitoring of academic work. It is a blunt tool meant only for really extreme cases.

What would I suggest? 1. Encourage openness and replication. 2. Institute rules so people can share data. 3. Create systems where discussions of papers can be appended to papers. These are all less expensive and more decentralized ways to monitor work.


Written by fabiorojas

October 4, 2017 at 4:01 am

why is it bad to retract non-fraudulent and non-erroneous papers?

It is bad to demand the retraction of non-fraudulent papers. But why? I think the argument rests on three intuitions. First, there is a legal reason. When an editor and publisher accept a paper, they enter into a legal contract. The author produces the paper and the publisher agrees to publish it. To rescind publication of a paper is to break a contract, except in cases of fraud. The other exception is an error in analysis that invalidates the paper’s claim (e.g., a math paper that has a non-correctable flaw in a proof, or mis-coded data whose correction leads to an entirely new conclusion – even then, maybe the paper should just be rewritten).

Second, there is a pragmatic reason. When you cater to retraction demands, outside of fraud and extreme error, you undermine the role of the editor. Basically, an editor is given the position of choosing papers for an audience. They are not obligated to accept or reject any papers except those they deem interesting or of high quality. And contrary to popular belief, they do not have to accept papers that receive good reviews, nor must they reject papers that receive bad reviews. Peer review is merely advisory, not a binding voting mechanism, unless the editor decides to simply let the majority rule. Thus, if editors ceded authority over publishing to the “masses,” they would simply stop being editors and become more like advertisers, who cater to the whims of the public.

Third, I think it is unscholarly. Retraction is literally suppression of speech and professors should demand debate. We are supposed to be the guardians of reason, not the people leading the charge for censorship.

So what should you do if you find that a journal publishes bad, insulting or inflammatory material? Don’t ask for a retraction. There are many proper responses. Readers can simply boycott the journal, by not reading it or citing it. Or they can ask a library to stop paying for it. Peers can agree to stop reviewing for it or to dissociate themselves from the journal. A publisher can review the material and then decide to not renew an editor’s contract. Or if the material is consistently bad, they can fire the editor.


Written by fabiorojas

September 21, 2017 at 4:01 am

third world quarterly should not retract “the case for colonialism”

Third World Quarterly recently published an article by Bruce Gilley called “The Case for Colonialism” (TCfC). The article makes a few related claims, but it boils down to:

  1. Many anti-colonial movements were horrible.  (see pp. 5-6)
  2. Colonialism rests on “cosmopolitanism” and this is a good thing. (see p. 8)
  3. Thus, when you properly consider the costs and benefits, you should rethink the value of colonialism.

I’ll address these claims below, but I first wanted to address the movement to retract TCfC – see here. Basically, the retraction advocates think the article is offensive and appalling. It may be, but that doesn’t mean it should be retracted.

The job of a journal editor is to select articles that they think advance their field and raise interesting issues. If they think “The Case for Colonialism” does so, they should not retract the article. It represents an argument they think should be debated. Retraction should not be done simply because the article is bad or offensive. Retraction should only happen if the article turns out to be fraudulent. Otherwise, if an editor thinks the article has value, let it be. Critics can write their own response. Or, if they think the level of scholarship is horrid, they can stop reading the journal and ask the library to drop it.

Now, what about the argument? Does colonialism get a bad rap? Let’s start with what I think is correct. I believe it to be true that many anti-colonial movements and post-colonial governments were horrible. For every leader like Gandhi, we get other leaders who, simply put, were savage killers, from the corrupt Mobutu Sese Seko of Congo to the Marxist movements of Ethiopia and North Korea, which brought mass death. So yes, the reflexive praise of anti-colonial movements often overlooks the grotesque outcomes.

Here is what I disagree with. Analyses like Gilley’s often overlook the massive death brought by Western colonizers. Let’s take just one example – the colonial government in the Belgian Congo is thought to have killed 10 million people. This is murder of Hitlerian and Stalinist proportions. The Belgian Congo is not an isolated case. Mass murder accompanied Western colonization in many places. Even if Gilley is correct that colonial governments may have brought some values, it is hard to see how these would balance out this massive loss of humanity.

Let me end on a constructive note. As written, Gilley’s article is an intellectual failure. But we can extract a valuable insight. Colonialism wasn’t about bringing the best of the West to the world. It brought the worst of the West to the world. Western culture has produced amazing things – the belief in human rights and equality and modern science. But that is not what was brought to the people of the world. If Western governments had truly prioritized the best of Western culture, then Gilley might have a point. Similarly, the critics might be right if anti-colonialists had rejected the worst of the West and brought the best of the West. Instead, many anti-colonial movements retreated into Marxism, Maoism and other ideologies that killed and impoverished millions. They should have espoused tolerance, liberal culture and markets.

Bottom line: Debate, don’t retract. And in terms of colonialism, it deserves its reputation. It was horrible. In opposing repression, we can do better than what happened in the post-colonial era.


 

Written by fabiorojas

September 20, 2017 at 4:01 am

indiana is the center of the sociological universe

Weird fact: 4 of the ASA journals are now hosted at universities in Indiana. The American Sociological Review (Notre Dame), the Journal of Health and Social Behavior (Purdue), the Sociology of Education (Purdue) and Contexts (IU). I am not sure what to say about this. It may be dumb luck, but it’s probably a reflection of the fact that this small place has an unusually high number of good sociologists. I’m lucky to be here and I hope sociology continues to grow in this great state.


Written by fabiorojas

September 18, 2017 at 4:01 am

three cheers for speedy open access

Over at Scatter Plot, Dan Hirschman discusses the advantages of publishing in Sociological Science, which employs a simple “up or down” decision process and a fast time to print:

When we finished our first revisions, we could have sent the paper to a traditional journal and waited. If we were lucky, the paper might have been reviewed “quickly” in just a couple months, received an R&R, been re-reviewed in a couple more months, eventually accepted, and published, a process that would have taken at least a year, and typically more like 2. Instead, on June 21st we submitted the paper for review at Sociological Science and simultaneously uploaded the draft to SocArXiv. Posting the paper to SocArXiv meant that whether or not the paper was accepted in a timely fashion at a journal it would be available to anyone who was interested.

Sociological Science conditionally accepted the paper on July 17, just under a month later. We revised the paper and resubmitted it on July 27. The revised version was accepted on July 29th, page proofs came on August 9th, and the published version came out August 28th. Total time from submission to print: just over two months.

Dan also notes that his paper was read by a gazillion people when the Trump administration signalled that it would (re)-litigate affirmative action. By having a public draft in SocArxiv, millions could access the paper. A win for Dan and Ellen and a win for science. Three cheers for open access.


Written by fabiorojas

August 31, 2017 at 12:01 am

that gender studies hoax is dumb, but look at this business model

Today’s five-minute-hate is on gender studies, or people who dump on gender studies, depending on your POV. The short version for those of you not paying attention: A philosopher and a math PhD decided gender studies is dumb and ideological. They wrote up a jargon- and buzzword-filled article titled “The Conceptual Penis as a Social Construction” and paid to get it published in a peer-reviewed journal no one’s ever heard of. Ha ha ha! Take that, gender studies!

This is a stupid prank that has already been taken down in about five different places. I’m not going to bother with that.

But in looking at the original journal, I noticed this crazy business model they have. The journal, Cogent Social Sciences, is an open-access outlet published by Cogent OA. It charges $1350 to publish an article, unless you don’t have $1350, in which case they’ll take some unspecified minimum.

Okay, so far it sounds like every other scammy “peer-reviewed” open access journal. But wait. Cogent OA, it turns out, is owned by Taylor & Francis, one of the largest academic publishers. Taylor & Francis owns Routledge, for instance, and publishes Economy and Society, Environmental Sociology, and Justice Quarterly, to pick a few I’ve heard of.

Cogent OA has a FAQ that conveniently asks, “What is the relationship between Cogent OA and Taylor & Francis?” Here’s the answer (bold is mine):

Cogent OA is part of the Taylor & Francis Group, benefitting from the resources and experiences of a major publisher, but operates independently from the Taylor & Francis and Routledge imprints.

Taylor & Francis and Routledge publish a number of fully open access journals, under the Taylor & Francis Open and Routledge Open imprints. Cogent OA publishes the Cogent Series of multidisciplinary, digital open access journals.

Together, we also provide authors with the option of transferring any sound manuscript to a journal in the Cogent Series if it is unsuitable for the original Taylor & Francis/Routledge journals, providing benefits to authors, reviewers, editors and readers.

So get this: If your article gets rejected from one of our regular journals, we’ll automatically forward it to one of our crappy interdisciplinary pay-to-play journals, where we’ll gladly take your (or your funder’s or institution’s) money to publish it after a cursory “peer review”. That is a new one to me.

There’s a hoax going on here all right. But I don’t think it’s gender studies that’s being fooled.

Written by epopp

May 20, 2017 at 4:16 pm

at contexts, we’ll stand by your article

Over at Contexts, Phillip and Syed have their own response to the Hypatia/Tuvel controversy. They think that Hypatia should stand by their article and then they describe their recent experience publishing a controversial interview of Rachel Dolezal, conducted by NYU’s Ann Morning:

Contexts as a magazine, we as editors, and Ann Morning took a lot of flak for our publishing this interview. Why did we publish it? Dolezal, like her or not, has a fascinating story, and Morning did a great job interviewing her. Are there reasons to be critical? Of course there are—there always are. So go ahead and criticize, we can take it. We are firmly in the camp that it’s better to publish stuff that sparks a conversation than to not. Haters will hate (cool), but the constructive criticizers help to make our science better. Isn’t that what we signed up for? Criticizers could pitch us an article idea—arguably a better use of time than a Twitter rant. No one actually did this yet, though.

Everything that goes into Contexts (and, we should think, pretty much every publication, certainly academic publications) has been approved by the editors (us), whether it is peer-reviewed or not. Unless there’s fraud, you stand behind the authors and the work you publish. Like we did with Ann Morning’s interview with Rachel Dolezal. If you can’t do that, you should resign.

Normally, editors can’t promise much. We can try to get your article reviewed in a timely fashion and we can give tips on responding to reviewers. But I’ll make this additional promise when Rashawn and I take over Contexts in a few months. We may agree with your article, or we may disagree with it, but if we publish it, we’ll stand by it.


Written by fabiorojas

May 10, 2017 at 12:02 pm

it’s all about contexts

It is my pleasure to announce that Rashawn Ray and I will join Contexts as the new editors in Winter 2018. Contexts: Understanding People in Their Social Worlds is the ASA’s magazine that brings the cutting edge of sociology to the public. Rashawn and I are humbled by the appointment. A lot of top-notch people have edited this journal, and we hope to live up to their legacy.

Let me tell you a little bit about Rashawn. I first met Rashawn when he was a graduate student at Indiana University. Immediately, he struck me as a highly intelligent and outgoing person. He begins a conversation with a smile. He is interested in what you have to say and really wants to learn from you. But more than that, he had a real interest in linking sociology to the concerns of everyday life. As time passed, this became clear to me. His research focuses on how social inequality affects health and well being and he is extremely active in getting the sociological vision out there through Facebook, Twitter and public speaking. The right guy for the right job – and associate professors can’t say “no!”

So what do we have in mind? First, we want to build on a decade and a half of excellence. Contexts is a magazine that pleases the mind and the eye. It is also an intellectual magazine that offers the public well-grounded but accessible accounts of academic research. Second, we want to really start engaging with the audiences that might enjoy sociological work, whether it be people in the policy world, business, or the arts. Rashawn and I are excited about the possibilities.


Written by fabiorojas

April 21, 2017 at 12:01 am

socarxiv is launched


Over the summer, SocArXiv announced its development. What is SocArXiv, you ask? It’s a free, open-source, open-access repository for prepublication versions of papers — a way to get your work out there faster, and to more people. Think SSRN or Academia or ResearchGate, but not-for-profit (SSRN is now owned by Elsevier) and fundamentally committed to accessibility.

Today, a beta version of SocArXiv has launched.

SocArXiv has had the great fortune to partner with the Center for Open Science, the folks who brought you the Reproducibility Project. Because COS was already working on building infrastructure, SocArXiv was quickly able to put up a temporary drop site for papers. (Full disclosure: I’m on the SocArXiv steering committee.)

Just on the basis of that, more than 500 papers have been deposited and downloaded over 10,000 times. Now a permanent site is up, and we will be working to get the word out and encourage sociologists and other social scientists to make the jump. With financial support from the Open Society Foundation and the Alfred P. Sloan Foundation, this thing is looking pretty real.

More developments will be coming in the months ahead. We’ve partnered with the LSE’s International Inequalities Institute to establish our first working paper series, and will be spearheading an outreach effort to academics, as well as continuing to develop additional features. I will doubtless be highlighting some of those here.

In the meanwhile, take a look, and add a paper of your own. It’s quick and painless, and will help you make your work quickly accessible while contributing to the development of open science infrastructure.

For more info on SocArXiv, visit the blog, or follow on Twitter or Facebook.

Written by epopp

December 7, 2016 at 3:46 pm

should you publish in PLoS One?

My response to this question on Facebook:

  1. Do not publish in PLoS if you need a status boost for the job market or promotion.
  2. Do publish if journal prestige is not a factor. My case: good result but I was in a race against other computer scientists. Simply could not wait for a four year journal process.
  3. Reviewer quality: It is run mainly by physical and life scientists. Reviews for my paper were similar in quality to what CS people gave me on a similar paper submitted to CS conferences/journals.
  4. Personally, I was satisfied. Review process fair, fast publication, high citation count. Would not try to get promoted on the paper by itself, though.
  5. A lot of people at strong programs have PLoS One pubs but usually as part of a larger portfolio of work.
  6. A typical good paper in PLoS is from a strong line of work, but the paper just bounced around or was too idiosyncratic.
  7. PLoS One publishes some garbage.
  8. Summary: right tool for the right job. Use wisely.

Another person noted that many elite scientists use the “Science, Nature, or PLoS One model.” In other words, you want high impact or just get it out there. No sense wasting years of time with lesser journals.


Written by fabiorojas

October 11, 2016 at 12:13 am

the pager paper, sociological science, and the journal process

Last week, we discussed Devah Pager’s new paper on the correlation between discrimination in hiring and firm closure. As one would expect from Pager, it’s a simple and elegant paper using an audit study to measure the prevalence and consequences of discrimination in the labor market. In this post, I want to use the paper to talk about the journal publication process. Specifically, I want to discuss why this paper appeared in Sociological Science.

First, it may be the case that Professor Pager directly went to Sociological Science without trying another peer reviewed journal. If so, then I congratulate both Pager and Sociological Science. By putting a high quality paper into public access, both Professor Pager and the editors of Sociological Science have shown that we don’t need the lengthy and cumbersome developmental review system to get work out there.

Second, it may be the case that Professor Pager tried another journal, probably the ASR or AJS or an elite specialty journal, and it was rejected. If so, that raises an important question – what specifically was “wrong” with this paper? Whatever one thinks of the Becker theory of racial discrimination, one can’t critique the paper for lacking a “framing” or for having a simple and clean research design. One can’t critique statistical technique because it’s a simple comparison of means. One can’t critique the importance of the finding – the correlation between discrimination in hiring and firm closure is important to know and notable in size. And, of course, the paper is short and clearly written.

Perhaps the only criticism I can come up with is a sort of “identification fundamentalism.” Perhaps reviewers brought up the fact that discrimination was not randomly assigned to firms, so you can’t infer anything from the correlation. That is bizarre because it would render Becker’s thesis un-testable. What experimental design would allow you to get a random selection of firms to suddenly become racist in their hiring practices? Here, the only sensible approach is Bayesian – you collect high quality observational data and revise your beliefs accordingly. This criticism, if it was made, isn’t sound upon reflection. I wonder what the grounds for rejection could possibly be, aside from knee-jerk anti-rational choice comments or discomfort with a finding that markets do have some corrective to racial discrimination.

Bottom line: Pager and the Sociological Science crew are to be commended. Maybe Pager just wanted this paper “out there” or just got tired of the review process. Either way, three cheers for Pager and the Soc Sci Crew.


Written by fabiorojas

September 28, 2016 at 12:10 am

agreements and disagreements with rob warren

Rob Warren, of the University of Minnesota, wrote some very engaging and insightful comments about his time as the editor of Sociology of Education. Jeff Guhin covered this last week. Here, I’ll add my own comments. First, a strong nod of agreement:

First, a large percentage of papers had fundamental research design flaws. Basic methodological problems—of the sort that ought to earn a graduate student a B- in their first-year research methods course—were fairly common. (More surprising to me, by the way, was how frequently reviewers seemed not to notice such problems.) I’m not talking here about trivial errors or minor weaknesses in research designs; no research is perfect. I’m talking about problems that undermined the author’s basic conclusions. Some of these problems were fixable, but many were not.

Yes. Professor Warren is correct. Once you are an editor, or simply an older scholar who has read a lot of journal submissions, you quickly realize that there are a lot of papers that really, really flub research methods 101. For example, a lot of papers rely on convenience samples, which lead to biased results. Warren has more on this issue.

Now, let me get to where I think Warren is incorrect:

Second, and more surprising to me: Most papers simply lacked a soul—a compelling and well-articulated reason to exist. The world (including the world of education) faces an extraordinary number of problems, challenges, dilemmas, and even mysteries.  Yet most papers failed to make a good case for why they were necessary. Many analyses were not well motivated or informed by existing theory, evidence, or debates. Many authors took for granted that readers would see the importance of their chosen topic, and failed to connect their work to related issues, ideas, or discussions. Over and over again, I kept asking myself (and reviewers also often asked): So what?

About five years ago, I used to think this way. Now, I’ve mellowed and come to a more open-minded view. Why? In the past, I rejected a fair number of papers on “framing” grounds. Later, I would see them published in other journals, often with high impact. Also, in my own career, leading journals have rejected my work on “framing” grounds, and when it got published in another leading journal, the work got cited. The framing wasn’t that bad. Lesson? A lot of complaints about framing are actually arbitrary. Instead, let the work get published and let the wider community decide, not the editor and a few peer reviewers.

The evidence on the reliability of the peer review process suggests that there is a lot of randomness in the process. If some of these “soul-less” papers had been resubmitted a few months later, some of them would have been accepted with enthusiastic reviews. Here’s a 2006 review of the literature on journal reliability and here’s the classic 1982 article showing that a lot of journal acceptance is indeed random. Ironically, Peters and Ceci (1982) note that “serious methodological flaws” are a common reason for rejecting papers – that had already been accepted!

This brings me to Warren’s third point – a complaint about people who submit poorly developed papers. He suggests that there are job pressures and a lack of training. On the training point, there is nothing to back up his assertion. Most social science programs have a fairly standard sequence of quantitative methods courses. The basic issues regarding causation v. description, identification, and assessment of instrument quality are all pretty easy to learn. Every year, the ICPSR offers all kinds of training. Training we have, in spades.

On the jobs point, I would like to place the blame on people like Professor Warren and his colleagues on hiring and promotion committees (which includes me!!). The job market for the better positions in sociology (R1 jobs and competitive liberal arts schools) has essentially evolved into a contest over who gets into the top journals in graduate school, plus graduate program reputation.

I’d suggest we simply think about the incentives here. Junior scholars live in a world where a lot of weight is placed on a very small number of journals. They also live in a world where journal acceptance is random. And they live in a world where journals routinely lose papers, reject after multiple R&R rounds, and take years (!) to respond (see my journal horror stories post). How would any rational person respond to this environment? Answer: just send out a lot of stuff until something hits. There is no incentive to develop a paper well if it will be randomly rejected after sitting at a journal for 16 months.

This is why I openly praise and encourage reforms of the journal system. I praise “platform” publishing like PLoS One. I praise “up or down” curated publishing, like Sociological Science. I praise Socius, the open access ASA journal. I praise socArxiv for creating an open pre-print portal. I praise editors who speed the review process and I praise multiple submissions practices. The basic issues that Professor Warren discusses are real. But the problem isn’t training or stressed out junior scholars. The problem is the archaic journal system. Let’s make it better.

50+ chapters of grad skool advice goodness: Grad Skool Rulz ($2!!!!)/From Black Power/Party in the Street

Written by fabiorojas

August 9, 2016 at 12:01 am

next article discussion: racism vs. favoritism by nancy ditomaso

Last week, we discussed “The Suffocation Model” by Finkel et al., suggested by Chris Martin. Before Finkel et al., we had two posts on Tanya Golash-Boza’s article on race theory in sociology. Next month, we will discuss “Racism and discrimination versus advantage and favoritism: Bias for versus bias against” by Nancy DiTomaso, which appeared in Research in Organizational Behavior in 2015. This article was suggested by Dan Hirschman.

The purpose of the “article discussion” series is to highlight articles that don’t appear in the leading journals. If you want the blog to shine some light on an article, or working paper, just put it in the comments or send me a message. The only rule is that it can’t be from an “A” journal like ASR/AJS/SF/SP or even a highly visible specialty journal. Thanks for reading.


Written by fabiorojas

July 26, 2016 at 12:26 am

inside the american journal of sociology

Don’t worry, I won’t give away state secrets.

In the 2000-01 and 2002-03 academic years, I worked at the American Journal of Sociology as a member of the manuscript intake group and later as an associate editor. I also worked for a while, roughly at the same time, as the managing editor of Sociological Methodology, which was then edited by my dissertation advisor, Rafe Stolzenberg. In this post, I want to tell you a little bit about how top academic journals work. This is important because academics reward people based on getting into highly selective journals. There should be a lot of discussion about how the institution works and what does and does not get accepted.

Background: The AJS is the oldest general interest journal in American sociology and has, during its entire existence, been based at the Department of Sociology at the University of Chicago. To my knowledge, it has never rotated to another program. In fact, the relationship between the Department and the journal is so strong that one of Chicago’s faculty, Andy Abbott, has written a very nice monograph just about the AJS called Department and Discipline. It’s a good book and you should read it if you want to understand either the evolution of journals or how Chicago fits into the broader discipline.

Some time ago, the AJS developed a system in which students are strongly involved in the operation of the journal. For example, the AJS is usually run by a full-time manager, the incredible Susan Allan, and a few students who run the office. These folks do budgets, office organization, crazy amounts of paperwork, and a whole lot more. But it goes beyond administration. Students are deeply involved in shaping the journal’s content.

At the time I was a doctoral student, the AJS was organized into three major committees: the editorial board, which is always headed by a senior professor; a manuscript intake group, which assigned reviewers to papers; and a book review board, also headed by faculty. The manuscript intake group and the book review board are mainly staffed by students. The editorial board usually has one or two students on it, who have a major voice.

In contrast, Sociological Methodology was run like many specialty journals. You had a manager (me) and the editor (Prof. Stolzenberg), who chose reviewers, read reviews, and made decisions. These two people did about 90% of the work of running the journal.

Lessons from working at AJS: In many ways, the AJS resembled other major journals that must process hundreds of papers per year. There is a basic intake/review/decision cycle. That process has upsides and downsides. One upside is that the journal review process is actually pretty decent at weeding out garbage. After a while, you can easily spot bad papers: unending rants, poor spelling, poor formatting, lack of data. Another upside is that many papers do actually improve once people respond to reviewers.

I also saw some of the downsides of the review process. For example, I discovered that only about a third of people consistently agree to review papers, making the workload highly unequal. Some of the patterns are obvious. A lot of people stop answering the mail post-tenure. People in some sub-fields are simply bad citizens who refuse to write reviews or write bad ones. Finally, like a lot of journals, we could let papers fall through the cracks and go without a decision.

Perhaps the biggest insight I had was the power of editors and the randomness that goes into a “great paper.” Example: while I was on the editorial board, we had a paper with OK but not great reviews. I read it and disagreed strongly. Right as the chief editor was about to assign it to the reject pile, I interjected. It was published and was covered by the national media. This may sound like a great story, and it is, but it also shows the weakness of the journal system. If I had been absent that week, or if we had had another student editor, the paper would have been rejected. Conversely, I am sure that I overlooked some excellent work.

A related lesson is that the chief editor matters a great deal. An editor can doom a paper from a scholar they don’t like, or on a topic they hate, simply by assigning it to known mean reviewers. Editorial influence appears in other ways. While most papers are clear rejects, many are on the border, and an interventionist editor can strongly affect which of these borderline cases are accepted. One editor I worked with would actually ask the authors for the data and rerun the analysis to see if reviewer 2’s criticism was right. Another was comfortable adding a few suggestions and then just tossing the paper back to the authors. The power of editors, and of the Chicago department, also manifests itself in the fact that AJS is far more tolerant of longer, theory-driven papers than peer social science journals.

A second lesson is that big structural factors influence what gets published. The first factor is the type of research. Simply put, ethnographers produce papers at a slower rate than demographers. So if you have a small number of papers, it doesn’t make sense to risk it all on the AJS; instead, you turn to the book format or to more specialized journals. That’s one reason why ethnographic work is rare in top journals. A second factor is culture. In some sub-fields, the reviewers seem to be really difficult. For example, during the late 1990s, there seemed to be a sort of feud in social psychology: each side would tank the other’s papers in the review process. Ethnography is similar. When people did submit fieldwork papers, it was nearly impossible to get two or three reviewers to say “this is good enough.” Just endless and endless demands.

The final lesson I took away is that we are human and we are biased. While 95% of decisions were really based on reviews, there were definitely times when our biases showed. There were one or two papers I promoted because I was excited about social movement research. At other times, decisions took into account touchy political situations and author prestige. As I said, this is not typical, but it does happen, and I include myself in this evaluation.

Lessons from Working at Sociological Methodology: This was a totally different experience. Instead of being embedded in a larger group, it was just literally me and a filing cabinet and my advisor. We had a weekly meeting to discuss submissions, I took notes, and he told me what to do.

Probably the biggest take-home point from working with Professor Stolzenberg was that editors make or break the journal. The dude was really on top of things, and few papers went past two or three months. Once, when a paper couldn’t get a single review after six months, the editor wrote a letter to the author explaining the situation, and they mutually agreed to release the paper from review.

Stolzenberg was also not afraid of people, a strong trait for an editor. He didn’t mind rejecting people or speeding the process up. Although he didn’t desk reject often, he was good about getting reviews and writing detailed rejection letters. That way, the journal didn’t get clogged with orphaned papers. The lesson is that there really is no excuse for slow reviews. Get reviews, reject the paper, or get the hell out of the editing business.

Final note – authors and reviewers are lame: I conclude with a brief discussion of reviewers and authors. First, authors are quite lame. They are slow to respond to editors. They fail to read reviewer comments or take them seriously. And, even more puzzling, they fail to resubmit after the R&R. I was shocked to discover that a fairly large fraction of AJS and SM papers at the R&R stage were never resubmitted. Perhaps a third or so. Second, reviewers are lame. As Pamela Oliver put it so well in the recent American Sociologist, the review process is simply broken. Reviewers ask for endless revisions, focus on vague issues like framing, or simply write hostile and unhelpful reviews. So I thank the 1/3 of academics who write prompt and professional reviews, and I curse the 2/3 of shirkers and complainers to an eternity of reading late reviews that criticize the framing of their papers.


Written by fabiorojas

June 6, 2016 at 12:01 am

Posted in academia, fabio, journals

new blog feature: journal article commentary

To celebrate 10 years on the blog, we are introducing a new feature – “article commentary.” The concept is simple: we choose an article, read it, and comment. However, we’ll add a twist. The article can’t be from one of the core journals (ASR, AJS, SF, SP) and we’ll try to avoid the top field journals. In other words, we want to explore ideas that might be overlooked. They can be old or new, short or long. So please use the comments, the Facebook page, or plain old email to suggest an article. Suggest your own article or someone else’s. It’s all good. We’ll do this once or twice a semester.


Written by fabiorojas

May 20, 2016 at 1:34 am

some really bad journal experiences i have had

Rejection is never fun, but I don’t think it’s unfair. Vague and conflicting reviews aren’t fun either, but that’s life. What I do think is unfair is when editors are negligent and incompetent, which leads to an enormous waste of time and, in some cases, can end careers. In this post, I’d like to share a few of my personal horror stories with journals so you know what often happens in the review process. Here are a few of my “favorites”:

  • The Journal of Mathematical Sociology (previous editor, not current) once waited 24 months to send me a rejection based on a 1 sentence review.
  • Social Networks (also a previous editor) lost a manuscript twice, resulting in a paper being tied up in review for almost two years before it got a single review.*
  • The American Journal of Public Health never reviewed a paper that was submitted. After a few months, I was asked to review my own paper! Then, after I complained, they still never obtained reviews. About a year later, after my co-author and I complained again, the paper had still not been reviewed or even desk rejected. Technically, I suppose, it might still be under review!

On top of this, some folks are plain dishonest. For example, a previous editor of Sociology of Education rejected a paper of mine after the R&R (which is fair) and said the journal doesn’t do second R&Rs – but then asked me to review another paper that was on its second R&R. A book editor, after rejecting my manuscript, told me face to face that the press simply didn’t accept books by first-time authors. That press, like most others, actually publishes first-time book authors, including friends of mine.

I am under no pretense that we can eliminate incompetence and dishonesty. But there are simple reforms that can lessen the cost of poor editing. For example, I am an advocate of multiple submission for journals – you submit to as many journals as you like at once. That way, if you get an incompetent editor, you can simply take your business elsewhere instead of waiting for a response or dealing with chaotic and contradictory reviews. I also think Sam Lucas is onto something when he suggests that we should not allow reviewers to write open-ended and vague reviews. Bottom line: the journal system allows people to do all kinds of bad things, but simple reforms can reduce the risk to authors.


* The paper was rejected on “frustrating” grounds – a computer simulation was sent to experimentalists who wanted to see an experiment. Not unfair, but frustrating and a waste of time. If the journal doesn’t accept computational papers, it should have been desk rejected.

Written by fabiorojas

May 4, 2016 at 12:02 am

the business side of PLoS One

Disclaimer: I am a proud author of an article in PLoS One and I am extremely biased.

Michael Eisen, one of the biologists who promoted PLoS One, has an interesting blog post discussing the economics of PLoS One. He wrote it in response to the discovery that PLoS One’s management gets paid *very well.* His response is a nice discussion of how PLoS One works as a business. And just for the record, I am not against anyone making a profit so long as they produce something that benefits the rest of us. PLoS One has really opened the door for a lot of scholarship to come out quickly and free to the public.

Back to Eisen:

If I weren’t involved with PLOS, and I’d stumbled upon PLOS’s Form 990 now, I’d have probably raised a storm about it. I have absolutely no complaints about Andy’s efforts to understand what he was seeing – non-profits are required to release this kind of financial information precisely so that people can scrutinize what they are doing. And I understand why Andy and others find some of the info discomforting, and share some of his concerns. But having spent the last 15 years trying to build PLOS and turn it into a stable enterprise, I have a different perspective, and I’d like to explain it.
And:
The reality is, however, that it costs PLOS a lot more than $0 to handle a paper. We handle a lot of papers – close to 200 a day – each one different.  There’s a lot of manual labor involved in making sure the submission is complete, that it passes ethical and technical checks, in finding an editor and reviewers and getting them to handle the paper in a timely and effective manner. It then costs money to turn the collection of text and figures and tables into a paper, and to publish it and maintain a series of high-volume websites. All together we have a staff of well over 100 people running our journal operations, and they need to have office space, people to manage them, an HR system, an accounting system and so on – all the things a business has to have. And for better or worse our office is in San Francisco (remember that two of the three founders were in the Bay Area, and we couldn’t have started it anywhere else), which is a very expensive place to operate. We have always aimed to keep our article processing charges (APCs) as low as possible – it pains me every time we’ve had to raise our charges, since I think we should be working to eliminate APCs, not increase them. But we have to be realistic about what publishing costs us.
And:
I’ve argued for a long time that we should do away with selective journals, but so long as people want to publish in them, they’re going to have this weird economics. And note this is not just true of open access journals – higher impact subscription journals bring in a lot more money per published paper than low impact subscription journals, for essentially the same reason.
Read the whole article.


Written by fabiorojas

March 23, 2016 at 12:01 am

visualizing the quant-qual divide in sociology

[Image: bibliometric map of the quantitative-qualitative divide in sociology]

The blog CWTS has a post that uses bibliometric data to map how quantitative and qualitative research segregates in sociology:

Perhaps we can conclude from this small foray into the quantitative-qualitative divide that particular research topics are often confined to one method. A qualitative approach yields a richer, thicker description, and embeds an analysis in a wider context. At the same time, it may trigger questions that require a more quantitative answer, which in turn may require again a more qualitative analysis. We may thus continuously switch between qualitative and quantitative methods. Rather than trying to integrate the two, which is for example promoted under the heading of mixed methods, we should perhaps mostly keep challenging both views from the other perspective. We should not be blind to the challenges posed by the other perspective, but accept that the other perspective can supplement and nuance our conclusions, rather than invalidate them. Fortunately, when looking at the distribution of publications in journals, some of the more general journals, such as American Sociological Review and American Journal of Sociology do include publications from both perspectives (although the quantitative perspective seems more present). More specialised journals, such as Cultural Sociology and Social Forces, mainly focus on respectively qualitative and quantitative research. At least, there are some common fora for discussion, but there is room for improvement.

Recommended.


Written by fabiorojas

February 25, 2016 at 12:01 am

credit where credit is due: gender and authorship conventions in economics and sociology

[Ha — I wrote this last night and set it to post for this morning — when I woke up saw that Fabio had beat me to it. Posting anyway for the limited additional thoughts it contains.]

Last week Fabio launched a heated discussion about whether economics is less “racially balanced” than other social sciences. Then on Friday Justin Wolfers (who has been a vocal advocate for women in economics) published an Upshot piece arguing that female economists get less credit when they collaborate with men.

The Wolfers piece covers research by Harvard economics PhD candidate Heather Sarsons, who used data on tenure decisions at top-30 economics programs in the last forty years to estimate the effects of collaboration (with men or women) on whether women get tenure, controlling for publication quantity and quality and so on. (Full paper here.) Only 52% of the women in this population received tenure, compared to 77% of the men.

The takeaway is that women got no marginal benefit (in terms of the tenure decision) from coauthoring with men, while they received some benefit (but less than men did) if they coauthored with at least one other woman. Their tenure chances did, however, benefit as much as men’s from solo-authored papers. Sarsons’ interpretation (after ruling out several alternative possibilities) is that while women are given full credit when there is no question about their role in a study, their contributions are discounted when they coauthor with men.

What is interesting from a sociologist’s perspective is that Sarsons uses a more limited data set from sociology as a comparison. Looking at a sample of 250 sociology faculty at top-20 programs, she finds no difference in tenure rates by gender, and no similar disadvantage from coauthorship.

While it would be nice to interpret this as evidence of the great enlightenment of sociology around gender issues, that is probably premature. Nevertheless, Sarsons points to one key difference between sociology and economics (other than differing assumptions about women’s contributions) that could potentially explain the divergence.

Sociology, as most of you probably know, has a convention of putting the author who made the largest contribution first in the authorship list. Economics uses alphabetical order. Other disciplines have their own conventions — lab sciences, for example, put the senior author last. This means that sociologists can infer a little bit more than economists about who played the biggest role in a paper from authorship order — information Sarsons suggests might contribute to women receiving more credit for their collaborative work.

This sounds plausible to me, although I also wouldn’t be surprised if the two disciplines made different assumptions, ceteris paribus, about women’s contributions. It might be worth looking at sociology articles with the relatively common footnote “Authors contributed equally; names are listed in alphabetical order” (or reverse alphabetical order, or by coin toss, or whatever). Of course such a note still provides information about relative contribution — 50-50, at least in theory — so it’s not an ideal comparison. But I would bet that readers mentally give one author more credit than the other for these papers.

That may just be the first author, due to the disciplinary convention. But one could imagine that a male contributor (or a senior contributor) would reap greater rewards for these kinds of collaborations. It wouldn’t say much about the hypothesis if that were not the case, but if men received more advantage from papers with explicitly equal coauthors, that would certainly be consistent with the hypothesis that first-author naming conventions help women get credit.

Okay, maybe that’s a stretch. Sarsons closes by noting that she plans to expand the sociology sample and add disciplines with different authorship conventions. It will be challenging to tease out whether authorship conventions really help women get due credit for their work, and I’m skeptical that that’s 100% of the story. But even if it could fix part of the problem, what a simple solution to ensure credit where credit is due.

Written by epopp

January 11, 2016 at 1:47 pm

asr reviewer guidelines: comparative-historical edition

[The following is an invited guest post by Damon Mayrl, Assistant Professor of Comparative Sociology at Universidad Carlos III de Madrid, and Nick Wilson, Assistant Professor of Sociology at Stony Brook University.]

Last week, the editors of the American Sociological Review invited members of the Comparative-Historical Sociology Section to help develop a new set of review and evaluation guidelines. The ASR editors — including orgtheory’s own Omar Lizardo — hope that developing such guidelines will improve historical sociology’s presence in the journal. We applaud ASR’s efforts on this count, along with their general openness to different evaluative review standards. At the same time, though, we think caution is warranted when considering a single standard of evidence for evaluating historical sociology. Briefly stated, our worry is that a single evidentiary standard might obscure the variety of great work being done in the field, and could end up excluding important theoretical and empirical advances of interest to the wider ASR audience.

These concerns derive from our ongoing research on the actual practice of historical sociology. This research was motivated by surprise. As graduate students, we thumbed eagerly through the “methodological” literature in historical sociology, only to find — with notable exceptions, of course — that much of this literature consists of debates about the relationship between theory and evidence, or conceptual interventions (for instance, on the importance of temporality in historical research). What was missing, it seemed, were concrete discussions of how to actually gather, evaluate, and deploy primary and secondary evidence over the course of a research project. This lacuna seemed all the more surprising because other methods in sociology — like ethnography or interviewing — had such guides.

With this motivation, we set out to ask just what kinds of evidence the best historical sociology uses, and how the craft is practiced today. So far, we have learned that historical sociology resembles a microcosm of sociology as a whole, characterized by a mosaic of different methods and standards deployed to ask questions of a wide variety of substantive interests and cases.

One source for this view is a working paper in which we examine citation patterns in 32 books and articles that won awards from the ASA Comparative-Historical Sociology section. We find that, even among these award-winning works of historical sociology, at least four distinct models of historical sociology, each engaging data and theory in particular ways, have been recognized by the discipline as outstanding. Importantly, the sources they use and their modes of engaging with existing theory vary dramatically. Some works use existing secondary histories as theoretical building blocks, engaging in an explicit critical dialogue with existing theories; others undertake deep excavations of archival and other primary sources to nail down an empirically rich and theoretically revealing case study; and still others synthesize mostly secondary sources to provide new insights into old theoretical problems. Each of these strategies allows historical sociologists to answer sociologically important questions, but each also implies a different standard of judgment. By extension, ASR’s guidelines will need to be supple enough to capture this variety.

One key aspect of these standards concerns sources, which for historical sociologists can be either primary (produced contemporaneously with the events under study) or secondary (later works of scholarship about the events studied). Although classic works of comparative-historical sociology drew almost exclusively from secondary sources, younger historical sociologists increasingly prize primary sources. In interviews with historical sociologists, we have noted stark divisions and sometimes strongly-held opinions as to whether primary sources are essential for “good” historical sociology. Should ASR take a side in this debate, or remain open to both kinds of research?

Practically speaking, neither primary nor secondary sources are self-evidently “best.” Secondary sources are interpretive digests of primary sources by scholars; accordingly, they contain their own narratives, accounts, and intellectual agendas, which can sometimes strongly shape the very nature of events presented. Since the quality of historical sociologists’ employment of secondary works can be difficult for non-specialists to judge, this has often led to skepticism of secondary sources and a more favorable stance toward primary evidence. But primary sources face their own challenges. Far from being systematic troves of “data” readily capable of being processed by scholars, for instance, archives are often incomplete records of events collected by directly “interested” actors (often states) whose documents themselves remain interpretive slices of history, rather than objective records. Since the use of primary evidence more closely resembles mainstream sociological data collection, we would not be surprised if a single standard for historical sociology explicitly or implicitly favored primary sources while relatively devaluing secondary syntheses. We view this to be a particular danger, considering the important insights that have emerged from secondary syntheses. Instead, we hope that standards of transparency, for both types of sources, will be at the core of the new ASR guidelines.

Another set of concerns relates to the intersection of historical research and the review process itself. For instance, our analysis of award-winners suggests that, despite the overall increased interest in original primary research among section members, primary source usage has actually declined in award-winning articles (as opposed to books) over time, perhaps in response to the format constraints of journal articles. If the new guidelines heavily favor original primary work without providing leeway in format constraints (for instance, through longer word counts), this could be doubly problematic for historical sociological work attempting to appear in the pages of ASR.  Beyond the constraints of word-limits, moreover, as historical sociology has extended its substantive reach through its third-wave “global turn,” the cases historical sociologists use to construct a theoretical dialogue with one another can sometimes rely on radically different and particularly unfamiliar sources. This complicates attempts to judge and review works of historical sociology, since the reviewer may find their knowledge of the case — and especially of relevant archives — strained to its limit.

In sum, we welcome efforts by ASR to provide review guidelines for historical sociology.  At the same time, we encourage plurality—guidelines, rather than a guideline; standards rather than a standard. After all, we know that standards tend to homogenize and that guidelines can be treated more rigidly than originally intended. In our view, this is a matter of striking an appropriate balance. Pushing too far towards a single standard risks flattening the diversity of inquiry and distorting ongoing attempts among historical sociologists to sort through what the new methodological and substantive diversity of the “third wave” of historical sociology means for the field, while pushing too far towards describing diversity might in turn yield a confusing sense for reviewers that “anything goes.” The nature of that balance, however, remains to be seen.

Written by epopp

September 8, 2015 at 5:51 pm

asr review guidelines

In a totally commendable attempt to broaden the range of methods represented in ASR, the new editorial team is working to develop guidelines for reviewers of papers using ethnographic and interview methods, theory papers, and comparative-historical papers. The idea is that if reviewers, especially those who don’t write such papers themselves, are given a more explicit sense of what a “good” article in one of these areas looks like, they will be less likely to dismiss them on grounds borrowed inappropriately from another type of research.

Jenn Lena posted links on Twitter to guidelines for the first two, and the comparative-historical section is forming a committee to provide input on the last one.

Personally, I think this is a great idea. I don't know if it will work, and I might have some quibbles around the margins (I think really great work can come from ethnographic sites chosen basically for convenience, and that systematicity in choosing whom to talk to matters less than working to check and cross-check emerging findings), but by and large, it's an admirable effort. I particularly liked the openness to the descriptive contribution of ethnography. Causality is terrific, but not everything has to be causal.

The tough thing, I think, is that we all think of ASR as a certain kind of journal, and review submissions to it accordingly. I know I’ve probably reviewed pieces negatively for ASR that I would really have liked for another journal, just because they didn’t seem like ASR pieces. Moving the needle is hard when even people who should be friendly to a certain type of work see it as just “not fitting.” (Much like other kinds of social processes?) But it’s worth trying, and this seems like a useful step.

Your reactions?

Written by epopp

September 3, 2015 at 8:09 am

journal wiki needs some love

A reader from the home office in Singapore asks me to publicize the Sociology Journal Turnaround wiki. The goal is simple: to collect information on how fast sociology journals are processing submissions. Click here and add your own story.

50+ chapters of grad skool advice goodness: Grad Skool Rulz ($2!!!!)/From Black Power/Party in the Street

Written by fabiorojas

June 1, 2015 at 2:51 am

Posted in fabio, journals