Archive for the ‘fabio’ Category
Rejection is never fun, but I don’t think it’s unfair. Vague and conflicting reviews aren’t fun either, but that’s life. What I do think is unfair is when editors are negligent and incompetent, which leads to an enormous waste of time and, in some cases, can end careers. In this post, I’d like to share a few of my personal horror stories with journals so you know what often happens in the review process. Here are a few of my “favorites”:
- The Journal of Mathematical Sociology (previous editor, not current) once waited 24 months to send me a rejection based on a one-sentence review.
- Social Networks (also a previous editor) lost a manuscript twice, resulting in the paper being tied up in review for almost two years before it received a single review.*
- The American Journal of Public Health never reviewed a paper that was submitted. After a few months, I was asked to review my own paper! Then, after I complained, they still obtained no reviews. About a year later, after my co-author and I complained again, the paper had been neither reviewed nor desk rejected. Technically, I suppose, it might still be under review!
On top of this, some folks are plain dishonest. For example, a previous editor of Sociology of Education rejected a paper of mine after the R&R (which is fair) and said that they don’t do second R&Rs, but then asked me to review another paper that was on its second R&R. A book editor, after rejecting my manuscript, told me face to face that they simply didn’t accept books by first-time authors. That press, and most others, actually publishes first-time book authors, including friends of mine.
I am under no pretense that we can eliminate incompetence and dishonesty. But there are simple reforms that can lessen the cost of poor editing. For example, I am an advocate of multiple submission for journals – authors could submit to as many journals as they like at once. That way, if you get an incompetent editor, you can simply take your business elsewhere and not bother waiting for a response or dealing with chaotic and contradictory reviews. I also think Sam Lucas is onto something when he suggests that we should not allow reviewers to write open-ended and vague reviews. Bottom line: the journal system allows people to do all kinds of bad things, but simple reforms can reduce the risk to authors.
* The paper was rejected on “frustrating” grounds – a computer simulation was sent to experimentalists, who wanted to see an experiment. Not unfair, but frustrating and a waste of time. If the journal doesn’t accept computational papers, the manuscript should have been desk rejected.
A lot of people have slammed The Party Decides for not getting this year’s primary completely correct. But I take a different view. The Democratic primary is going as planned, so that supports the theory pretty well. Even on the GOP side, there is some evidence that party dynamics are working as expected.
So let’s get the stuff that doesn’t fit the theory out of the way. Yes, Trump’s impending victory doesn’t fit but that’s not hard to understand in my view. And yes, the two major establishment candidates, Jeb Bush and Marco Rubio, both massively failed.
But there is one important feature of the GOP primary that does fit the Party Decides model – the elites successfully blocked Ted Cruz from becoming president. There is a great deal of evidence that the GOP elites actively hate Cruz:
- Even though he’s the sole remaining viable contender to Trump, Cruz has very few endorsements – only 7 of 31 GOP governors have endorsed him, only 6 of 54 GOP senators, and about 30 of 234 GOP representatives.
- Mitch McConnell hates him.
- John Boehner hates him, too.
Let’s be clear here: party leaders hate Cruz. By the counts above, only about 13% of the national leadership will endorse him over an egomaniac billionaire and a second-tier regional politician. It’s clear – the party decided they hate Ted Cruz.
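As a quick back-of-the-envelope check, here is the arithmetic behind that endorsement share, using only the counts cited above (which were approximate at the time of writing):

```python
# Cruz's endorsement share among GOP officeholders, per the counts in the post.
endorsed = {"governors": 7, "senators": 6, "representatives": 30}
totals = {"governors": 31, "senators": 54, "representatives": 234}

total_endorsed = sum(endorsed.values())    # 43
total_officials = sum(totals.values())     # 319
share = total_endorsed / total_officials
print(f"{total_endorsed}/{total_officials} = {share:.1%}")  # 43/319 = 13.5%
```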
To celebrate our 10th year, we are introducing a new blogger – Jeff Guhin. Jeff earned his Ph.D. in sociology from Yale and is the author of the upcoming The Problem of America: Practices of Moral Authority in Muslim and Christian Schools (Oxford University Press). He is a scholar of culture, religion, and education. This Fall, he will begin teaching in the sociology department at UCLA. Welcome aboard!!
This is part 2 of our book forum on Emirbayer and Desmond’s The Racial Order. Here, I’ll discuss the first 80 pages of the book, which starts with an amazingly ill-advised sentence: “there has never been a comprehensive and systematic theory of race.” This is a really bad starting point because even a non-specialist such as myself can easily come up with three (!) major systematic and comprehensive theories of race:
- Race is a socially constructed group division based on ancestry and physical appearance: This theory was articulated in classical theory, such as Weber’s discussion of caste and Du Bois’ work on American race relations. It has many, many proponents.
- Race is a biological variation in human beings: The modern version of this theory comes from studies of genetic variation. In sociology, the journal Sociological Theory (ahem) had a massive symposium on genomic theories of race, which we discussed here.
- Race is a social category meant to signal a group’s place in the means of production or political system: This theory is less discussed in sociology, but is a popular theory in anthropology. For example, John Comaroff is a well known anthropologist who explores this argument as do many others.
So, from my view, the problem isn’t that we lack a theory of race. Rather, we have *tons* of theories of race and *tons* of empirical evidence. The problem is sorting it all out.
Adding to this issue is the avoidance of work that would seem to bolster various parts of the book. For example, one crucial element of Emirbayer and Desmond’s theory is its insistence on an unconscious and interactional dimension of race, as would be suggested by Bourdieusian theory. The modern “racism without racists” school draws very clearly on Bourdieusian sociology, as does the work on race, cultural capital, and status attainment. Yet the work of Eduardo Bonilla-Silva and Prudence Carter is barely mentioned in the text. Another example: in the recent A Theory of Fields (2012), Neil Fligstein and Doug McAdam have an entire chapter applying field theory to civil rights mobilization. These are not obscure points. This is a major issue: why does a supposedly systematic treatment of race avoid the many major scholars whose work defines race scholarship in modern sociology? I am puzzled.
Before I wrap up, a stylistic point and a nitpicky point. Stylistic: I think one drawback of the book is that it employs a classical “theory bloat” style of writing. For example, it doesn’t actually tell you its theory of race for 80 pages!! It also takes detours into reflexivity theory and a bunch of other issues. I really suggest that readers skip directly to Part II for the good stuff. This reminds me of the time I read Jeffrey Alexander’s Neofunctionalism and After – which doesn’t tell you what neofunctionalism is until page 110!
Nitpicky: the book occasionally has some points of intellectual laziness. For example, at one point, there is a detour about the evils of regression analysis. Bizarre. Given that sociology is moving into a comfortable mixed-method approach to data, we don’t need grad school seminar cheap shots. Regression analysis is fine, and it’s perfectly good for studying trends in data, assuming you’ve put in the effort to collect high-quality data. That sort of cheap shot is below these authors.
Next week: We’ll discuss Part II of The Racial Order. Spoiler: I like it!
Samuel R. Lucas is professor of sociology at the University of California, Berkeley. He works on education, social mobility, and research methods. This guest post proposes a reform of the journal review process.
Ongoing discussion about the journal publication process is laudable. I support many of the changes that have been suggested, such as the proposal to move to triple-blind review, and implemented, such as the rise of new journals that reject “dictatorial revi”–oops, I mean “developmental review.” I suggest, however, that part of the problem is that reviewers are encouraged to weigh in on anything–literally anything! I’ve reviewed papers and later received others’ reviews only to find a reviewer ignored almost all of the paper, weighing in on such issues as punctuation and choice of abbreviations for some technical terms. Although such off-point reviews are rare, they indicate that reviewers perceive it as legitimate to weigh in on anything and everything. But a system allowing unlimited bases of review is part of the problem with peer review, for it shifts too much power to reviewers while at the same time providing insufficient guidance on what will be helpful in peer review. I contend that we need to dispense with our kitchen-sink reviewing system by removing from reviewer consideration two aspects of papers: framing and findings.
Framing is a matter of taste and, as there is no accounting for taste, framing offers fertile ground for years of delay. Framing is an easy way to hold a paper hostage, because most solid papers could be framed in any one of several ways, and often multiple frames are equally valid. Authors should be allowed to frame their work as they see fit, not be forced to alter the frame because a reviewer reads the paper differently than the author. A reviewer who feels a paper should be framed differently should wait for its publication and then submit a paper that notes that the paper addressed Z but missed its connection to Q. Such an approach would make any worthwhile debate on framing public while freeing authors to place their ideas into the dialogue as well.
As for findings, peer review should be built on the following premise: if you accept the methods, then you accept the findings enough for the paper to enter the peer-reviewed literature. Thus, reviewers should assess whether the paper’s (statistical, experimental, qualitative) research design can answer the paper’s research question, but not the findings produced by the solid research design. Allowing reviewers to evaluate findings allows reviewers to (perhaps inadvertently) scrutinize papers differently depending on the findings. To prevent such possibilities, journals should allow authors to request a findings-embargoed review, for which the journal would remove the findings section of the paper as well as the findings from (1) the abstract and (2) the discussion/conclusion section of the paper before delivering the paper for review. As some reviewers may regard reading soon-to-be-published work early as a benefit of reviewing, reviewers could be sent full manuscripts if the paper is accepted for publication.
A review system in which reviewers do not review framing and findings is a design-focused review system. Once a paper passes a design-focused review, editors can conduct an in-house assessment to assure findings are accurately conveyed and the framing is coherent. The editors, unlike reviewers, see the population of submissions, and thus, unlike reviewers, are well placed to fairly and consistently assess any other issues. Editors will be even better able to make such calls if they make them only for the papers reviewers have determined satisfy the basic criterion of having a design solid enough to answer the question the paper poses.
The current kitchen-sink review system has become increasingly time-consuming and perhaps capricious, hardly positive features for effective peer review. If findings were embargoed and reviewers were discouraged from treating their preferred frame as essential to a quality paper, review times could be chopped dramatically and revise and resubmit processes would be focused on solidifying design. As a result, design-focused review could lower our collective workload by reducing the number of taste-driven rounds of review we experience as authors and reviewers, while simultaneously reducing authors’ potentially paralyzing concern that mere matters of taste will block their research from timely publication. Design-focused review may thus make peer review work better for everyone.
The 2016 Democratic primary is a mirror image of the 2008 primary. In 2008, Hillary Clinton fell behind in delegates on Super Tuesday and required blowout victories to regain the lead. Even though it was extraordinarily unlikely that she could do that, Clinton continued to run until the very, very end. Now Clinton has put Sanders in the same position in 2016. He got a big win in New Hampshire and a tie in Iowa, but did very poorly in South Carolina and never recovered. He can only climb back into the lead if he gets big wins in big states to offset Clinton’s lead, which didn’t happen this week and is unlikely to happen over the next month. Yet Sanders is still running strong. Why?
A few reasons:
- By basing his campaign on small donors, Sanders can continually raise money. He can bypass the party establishment, who would normally yank support for a campaign at this stage.
- He’s an ideological candidate. Sure, he’d love to win and is trying his best, but he wants to change policy and the terms of debate. That doesn’t require him to win the most pledged delegates.
- It’s fun. If the support is there and you’re winning a bunch of states, even small ones, why quit? It’s gotta be more interesting than Vermont.
- A Clinton indictment: Let’s say there is a 1% chance that Federal prosecutors will indict Clinton on a misdemeanor or felony charge. If Sanders places a strong second in the nomination contest, he’d have a strong case that he should be the backup. And if he gets the nomination, there’s a good chance he’ll win the presidency, since the economy is relatively strong. So even a roughly 1% chance of becoming president is easily worth the time and effort.
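The indictment argument above is an expected-value calculation. A minimal sketch: the 1% indictment figure comes from the post, while the two conditional probabilities below are purely hypothetical placeholders chosen for illustration.

```python
# Illustrative expected-value sketch of the "stay in the race" argument.
# Only p_indictment comes from the post; the other two numbers are assumed.
p_indictment = 0.01       # chance Clinton is indicted (from the post)
p_nominee_if_out = 0.90   # chance Sanders becomes nominee in that case (assumed)
p_win_general = 0.55      # chance the nominee wins with a decent economy (assumed)

p_president = p_indictment * p_nominee_if_out * p_win_general
print(f"P(Sanders presidency via this path) = {p_president:.3%}")
```

Even under these rough assumptions, a small but nonzero shot at the presidency can dominate the modest cost of staying in the race, which is the post’s point.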
Clinton will likely get the pledged delegate majority in May, but the primary will continue to Bern.