Archive for the ‘journals’ Category
It is my pleasure to announce that Rashawn Ray and I will join Contexts as the new editors in Winter 2018. Contexts: Understanding People in Their Social Worlds is the ASA's magazine, which brings the cutting edge of sociology to the public. Rashawn and I are humbled by the appointment. A lot of top-notch people have edited this journal, and we hope to live up to their legacy.
Let me tell you a little bit about Rashawn. I first met Rashawn when he was a graduate student at Indiana University. Immediately, he struck me as a highly intelligent and outgoing person. He begins a conversation with a smile. He is interested in what you have to say and really wants to learn from you. But more than that, he has a real interest in linking sociology to the concerns of everyday life. As time passed, this became clear to me. His research focuses on how social inequality affects health and well-being, and he is extremely active in getting the sociological vision out there through Facebook, Twitter, and public speaking. The right guy for the right job – and associate professors can't say "no!"
So what do we have in mind? First, we want to build on a decade and a half of excellence. Contexts is a magazine that pleases the mind and the eye. It is also an intellectual magazine that offers the public well-grounded but accessible accounts of academic research. Second, we want to really start engaging with the audiences that might enjoy sociological work, whether it be people in the policy world, business, or the arts. Rashawn and I are excited about the possibilities.
Over the summer, SocArXiv announced its development. What is SocArXiv, you ask? It's a free, open source, open access repository for prepublication versions of papers — a way to get your work out there faster, and to more people. Think SSRN or Academia or ResearchGate, but not-for-profit (SSRN is now owned by Elsevier) and fundamentally committed to accessibility.
SocArXiv has had the great fortune to partner with the Center for Open Science, the folks who brought you the Reproducibility Project. Because COS was already working on building infrastructure, SocArXiv was quickly able to put up a temporary drop site for papers. (Full disclosure: I’m on the SocArXiv steering committee.)
Just on the basis of that, more than 500 papers have been deposited and downloaded over 10,000 times. Now a permanent site is up, and we will be working to get the word out and encourage sociologists and other social scientists to make the jump. With financial support from the Open Society Foundations and the Alfred P. Sloan Foundation, this thing is looking pretty real.
More developments will be coming in the months ahead. We’ve partnered with the LSE’s International Inequalities Institute to establish our first working paper series, and will be spearheading an outreach effort to academics, as well as continuing to develop additional features. I will doubtless be highlighting some of those here.
In the meanwhile, take a look, and add a paper of your own. It’s quick and painless, and will help you make your work quickly accessible while contributing to the development of open science infrastructure.
My response to this question on Facebook:
- Do not publish in PLoS if you need a status boost for the job market or promotion.
- Do publish if journal prestige is not a factor. My case: a good result, but I was in a race against other computer scientists. I simply could not wait for a four-year journal process.
- Reviewer quality: PLoS is run mainly by physical and life scientists. Reviews for my paper were similar in quality to what CS people gave me on a similar paper submitted to CS conferences/journals.
- Personally, I was satisfied: a fair review process, fast publication, and a high citation count. I would not try to get promoted on the paper by itself, though.
- A lot of people at strong programs have PLoS One pubs but usually as part of a larger portfolio of work.
- A typical good paper in PLoS comes from a strong line of work, but the paper just bounced around at other journals or was too idiosyncratic.
- PLoS One publishes some garbage.
- Summary: right tool for the right job. Use wisely.
Another person noted that many elite scientists use the "Science, Nature, or PLoS One model." In other words, you either aim for high impact or just get it out there. No sense wasting years of time with lesser journals.
Last week, we discussed Devah Pager’s new paper on the correlation between discrimination in hiring and firm closure. As one would expect from Pager, it’s a simple and elegant paper using an audit study to measure the prevalence and consequences of discrimination in the labor market. In this post, I want to use the paper to talk about the journal publication process. Specifically, I want to discuss why this paper appeared in Sociological Science.
First, it may be the case that Professor Pager went directly to Sociological Science without trying another peer-reviewed journal. If so, then I congratulate both Pager and Sociological Science. By putting a high quality paper into public access, both Professor Pager and the editors of Sociological Science have shown that we don't need the lengthy and cumbersome developmental review system to get work out there.
Second, it may be the case that Professor Pager tried another journal, probably the ASR or AJS or an elite specialty journal, and it was rejected. If so, that raises an important question – what specifically was "wrong" with this paper? Whatever one thinks of the Becker theory of racial discrimination, one can't critique the paper for lacking a "framing" or for having a simple and clean research design. One can't critique statistical technique because it's a simple comparison of means. One can't critique the importance of the finding – the correlation between discrimination in hiring and firm closure is important to know and notable in size. And, of course, the paper is short and clearly written.
Perhaps the only criticism I can come up with is a sort of "identification fundamentalism." Perhaps reviewers brought up the fact that discrimination was not randomly assigned to firms, so you can't infer anything from the correlation. That is bizarre because it would render Becker's thesis un-testable. What experimental design would allow you to get a random selection of firms to suddenly become racist in their hiring practices? Here, the only sensible approach is Bayesian – you collect high quality observational data and revise your beliefs accordingly. This criticism, if it was made, isn't sound upon reflection. I wonder what the grounds for rejection could possibly have been, aside from knee-jerk anti-rational choice comments or discomfort with a finding that markets do have some corrective to racial discrimination.
Bottom line: Pager and the Sociological Science crew are to be commended. Maybe Pager just wanted this paper “out there” or just got tired of the review process. Either way, three cheers for Pager and the Soc Sci Crew.
Rob Warren, of the University of Minnesota, wrote some very engaging and insightful comments about his time as the editor of Sociology of Education. Jeff Guhin covered this last week. Here, I’ll add my own comments. First, a strong nod of agreement:
First, a large percentage of papers had fundamental research design flaws. Basic methodological problems—of the sort that ought to earn a graduate student a B- in their first-year research methods course—were fairly common. (More surprising to me, by the way, was how frequently reviewers seemed not to notice such problems.) I'm not talking here about trivial errors or minor weaknesses in research designs; no research is perfect. I'm talking about problems that undermined the author's basic conclusions. Some of these problems were fixable, but many were not.
Yes. Professor Warren is correct. Once you are an editor, or simply an older scholar who has read a lot of journal submissions, you quickly realize that there are a lot of papers that really, really flub research methods 101. For example, a lot of papers rely on convenience samples, which lead to biased results. Warren has more on this issue.
Now, let me get to where I think Warren is incorrect:
Second, and more surprising to me: Most papers simply lacked a soul—a compelling and well-articulated reason to exist. The world (including the world of education) faces an extraordinary number of problems, challenges, dilemmas, and even mysteries. Yet most papers failed to make a good case for why they were necessary. Many analyses were not well motivated or informed by existing theory, evidence, or debates. Many authors took for granted that readers would see the importance of their chosen topic, and failed to connect their work to related issues, ideas, or discussions. Over and over again, I kept asking myself (and reviewers also often asked): So what?
About five years ago, I used to think this way. Now, I've mellowed and come to a more open-minded view. Why? In the past, I have rejected a fair number of papers on "framing" grounds. Later, I would see them published in other journals, often with high impact. Also, in my own career, leading journals have rejected my work on "framing" grounds, and when it got published in another leading journal, the work still got cited. The framing wasn't that bad. Lesson? A lot of complaints about framing are actually arbitrary. Instead, let the work get published and let the wider community decide, not the editor and a few peer reviewers.
The evidence on the reliability of the peer review process suggests that there is a lot of randomness in the process. If some of these “soul-less” papers had been resubmitted a few months later, some of them would have been accepted with enthusiastic reviews. Here’s a 2006 review of the literature on journal reliability and here’s the classic 1982 article showing that a lot of journal acceptance is indeed random. Ironically, Peters and Ceci (1982) note that “serious methodological flaws” are a common reason for rejecting papers – that had already been accepted!
This brings me to Warren’s third point – a complaint about people who submit poorly developed papers. He suggests that there are job pressures and a lack of training. On the training point, there is nothing to back up his assertion. Most social science programs have a fairly standard sequence of quantitative methods courses. The basic issues regarding causation v. description, identification, and assessment of instrument quality are all pretty easy to learn. Every year, the ICPSR offers all kinds of training. Training we have, in spades.
On the jobs point, I would like to blame people like Professor Warren and his colleagues on hiring and promotion committees (which includes me!!). The job market for the better positions in sociology (R1 jobs and competitive liberal arts schools) has essentially evolved into whoever gets into the top journals in graduate school plus graduate program reputation.
I'd suggest we simply think about the incentives here. Junior scholars live in a world where a lot of weight is placed on a very small number of journals. They also live in a world where journal acceptance is random. They also live in a world where journals routinely lose papers, reject after multiple R&R rounds, and take years (!) to respond (see my journal horror stories post). How would any rational person respond to this environment? Answer: just send out a lot of stuff until something hits. There is no incentive to develop a paper well if it will be randomly rejected after sitting at the journal for 16 months.
This is why I openly praise and encourage reforms of the journal system. I praise "platform" publishing like PLoS One. I praise "up or down" curated publishing, like Sociological Science. I praise Socius, the open access ASA journal. I praise SocArXiv for creating an open pre-print portal. I praise editors who speed the review process and I praise multiple submission practices. The basic issues that Professor Warren discusses are real. But the problem isn't training or stressed out junior scholars. The problem is the archaic journal system. Let's make it better.
Last week, we discussed "The Suffocation Model" by Finkel et al., suggested by Chris Martin. Before Finkel et al., we had two posts on Tanya Golash-Boza's article on race theory in sociology. Next month, we will discuss "Racism and discrimination versus advantage and favoritism: Bias for versus bias against" by Nancy DiTomaso, which appeared in Research in Organizational Behavior in 2015. This article was suggested by Dan Hirschman.
The purpose of the “article discussion” series is to highlight articles that don’t appear in the leading journals. If you want the blog to shine some light on an article, or working paper, just put it in the comments or send me a message. The only rule is that it can’t be from an “A” journal like ASR/AJS/SF/SP or even a highly visible specialty journal. Thanks for reading.
Don’t worry, I won’t give away state secrets.
In the 2000-01 and 2002-03 academic years, I worked at the American Journal of Sociology as a member of the manuscript intake group and later as an associate editor. I also worked for a while, roughly at the same time, as the managing editor of Sociological Methodology, which was then edited by my dissertation advisor, Rafe Stolzenberg. In this post, I want to tell you a little bit about how top academic journals work. This is important because academics reward people based on getting into highly selective journals. There should be a lot of discussion about how the institution works and what does and does not get accepted.
Background: The AJS is the oldest general interest journal in American sociology and has, during its entire existence, been based at the Department of Sociology at the University of Chicago. To my knowledge, it has never rotated to another program. In fact, the relationship between the Department and the journal is so strong that one of Chicago’s faculty, Andy Abbott, has written a very nice monograph just about the AJS called Department and Discipline. It’s a good book and you should read it if you want to either understand the evolution of journals or how Chicago fits in to the broader discipline.
Some time ago, the AJS developed a system in which students were strongly involved in the operation of the journal. For example, the AJS usually is run by a full time manager, the incredible Susan Allan, and a few students who run the office. These folks do budgets, office organization, crazy amounts of paperwork, and a whole lot more. But it goes beyond administration. Students are deeply involved in shaping the journal's content.
At the time I was a doctoral student, the AJS was organized into three major committees: the editorial board, which is always headed by a senior professor; a manuscript intake group, which assigned reviewers to papers; and a book review board, also headed by faculty. The manuscript intake group and the book review board are mainly staffed by students. The editorial board usually has one or two students on it, who have a major voice.
In contrast, Sociological Methodology was run like many specialty journals. You had a manager (me) and the editor (Prof. Stolzenberg), who chose reviewers, read reviews, and made decisions. These two people did about 90% of the work running the journal.
Lessons from working at AJS: In many ways, the AJS resembled other major journals that must process hundreds of papers per year. There is a basic intake/review/decision cycle. That process has upsides and downsides. One upside is that the journal review process is actually pretty decent at weeding out garbage. After a while, you can easily spot bad papers: unending rants, poor spelling, poor formatting, lack of data. Another upside is that many papers do actually improve once people respond to reviewers.
I also saw some of the downsides of the review process. For example, I discovered that only about a third of people consistently agree to review papers, making the workload highly unequal. Some of the patterns are obvious. A lot of people stop answering the mail post-tenure. People in some sub-fields are simply bad citizens and refuse to write reviews or write bad ones. Finally, like a lot of journals, we could let papers fall through the cracks and go without a decision.
Perhaps the biggest insight I had was the power of editors and the randomness that goes into a "great paper." Example: while I was on the editorial board, we had a paper with OK but not great reviews. I read it and disagreed strongly. Right as the chief editor was about to assign it to the reject pile, I interjected. It was published and was covered by the national media. This may sound like a great story, and it is, but it also shows the weakness of the journal system. If I had been absent that week, or if we had had another student editor, the paper would have been rejected. Conversely, I am sure that I overlooked some excellent work.
A related lesson is that the chief editor matters a great deal. An editor can doom a paper from a scholar they don't like, or on a topic they hate, by simply assigning it to known mean reviewers. Editorial influence appears in other ways. While most papers are clear rejects, many are on the border. An interventionist editor can strongly affect what is accepted from these borderline cases. One editor I worked with would actually ask the authors for the data and rerun the analysis to see if reviewer 2's criticism was right. Another was very comfortable with adding a few suggestions and then just tossing the paper back to the authors. The power of editors, and the Chicago department, also manifests itself in the fact that AJS is way more tolerant of longer, theory-driven papers than other peer social science journals.
A second lesson is that there are big structural factors that influence what gets published. The first factor is the type of research. Simply put, ethnographers produce papers at a slower rate than demographers. So if you have a small number of papers, it doesn't make sense to risk it all on the AJS. Instead, you move to the book or more specialized journals. That's one reason why ethnographic work is rare in top journals. A second factor is culture. There are some sub-fields where the reviewers seem to be really difficult. For example, during the late 1990s, there seemed to be a sort of feud in social psychology. Each side would tank the other's papers in the review process. Ethnography is similar. When people did submit field work papers, it was nearly impossible to get 2 or 3 reviewers to say "this is good enough." Just endless and endless demands.
The final lesson I took is that we are humans and we are biased. While 95% of decisions were really based on reviews, there were definitely times that our biases showed. There were one or two papers I promoted because I was excited about social movement research. At other times, decisions took into account touchy political situations and author prestige. As I said, this is not typical, but it does happen and I include myself in this evaluation.
Lessons from Working at Sociological Methodology: This was a totally different experience. Instead of being embedded in a larger group, it was just literally me and a filing cabinet and my advisor. We had a weekly meeting to discuss submissions, I took notes, and he told me what to do.
Probably the biggest take-home point from working with Professor Stolzenberg was that editors make or break the journal. The dude was really on top of things, and few papers went past 2 or 3 months. Once, a paper couldn't get a single review after six months, so the editor wrote a letter to the author explaining the situation, and they mutually agreed to release the paper from review.
Stolzenberg was also not afraid of people, a strong trait for an editor. He didn't mind rejecting people and speeding up the process. Although he didn't desk reject often, he was good about getting reviews and writing detailed rejection letters. That way, the journal didn't get clogged with orphaned papers. The lesson is that there really is no excuse for slow reviews. Get reviews, reject the paper, or get the hell out of the editing business.
Final note – authors and reviewers are lame: I conclude with a brief discussion of reviewers and authors. First, authors are quite lame. They are slow at responding to the editor. They fail to read reviewer comments or take them seriously. And even more puzzling, they fail to resubmit after the R&R. I was shocked to discover that a fairly large fraction of AJS and SM papers at the R&R stage were not resubmitted – perhaps a third or so. Second, reviewers are lame. As Pamela Oliver put it so well in the recent American Sociologist, the review process is simply broken. Reviewers ask for endless revisions, they focus on vague issues like framing, or they simply write hostile and unhelpful reviews. So I thank the 1/3 of academics who write prompt and professional reviews, and I curse the 2/3 of shirkers and complainers to an eternity of reading late reviews that criticize the framing of the paper.