Last Friday, I attended a fascinating talk at the Berkeley Institute for Data Science by Lenny Teytelman, a computational biologist and cofounder of protocols.io, a platform for sharing experimental protocols (recipes, if you will) in the biological sciences. He produced a remarkably nuanced sociological analysis of how the internet is changing academic publishing. I want to outline his talk and consider its implications for sociology specifically and the social sciences in general.
Before we get to the future, let’s make sure we understand the past… The first scholarly journals were published in 1665. The Journal des Sçavans [Journal of Learned Men] was founded in Paris on January 5th by Denis de Sallo, counselor and scholar to the Parlement. It published reviews of scholarly books; announcements of scientific inventions and experiments; essays on chemistry, astronomy, anatomy, religion, history, and the arts; obituaries of famous men of letters and science; and news about the Sorbonne. Inspired by this innovation, the Philosophical Transactions was launched in London on March 6th by Henry Oldenburg, the Secretary of the Royal Society of London. It recorded investigations into the natural sciences and reported news about the activities of English scientists. Later, it became the official organ of the Royal Society. These pioneering journals left a deep imprint on science: 351 years later, their descendants dominate academic publishing. Contemporary academic journals operate much as these pioneers did: authors submit articles, editors decide what merits inclusion (relying now on peer review), and publishers distribute journals to subscribers (now often in electronic rather than print form). Over this vast sweep of history, academic journals have created strong – if not always peaceful – communities of scientists that span the globe.
In March 1989, Tim Berners-Lee, a software engineer at CERN [Conseil Européen pour la Recherche Nucléaire] near Geneva, proposed a new communications medium for sharing information among academic scientists, the world wide web, that would bring together some existing technologies (hypertext, TCP/IP, FTP, and internet domain names) and create others (HTTP, HTML, URLs). Berners-Lee’s vision went live in December 1990, when the first web page appeared on the internet, and the many, many developments in distributed information sharing that have occurred since are threatening to erode the dominance of academic journals. In his talk, Teytelman noted seven ways that the internet has changed academic publishing specifically and academic information-sharing in general.
1) What we publish: Beyond printing summaries of our investigations, we can now “publish” our data and analytical procedures in repositories for data, data visualizations, and research protocols/methods at such sites as figshare, Dryad, protocols.io, and ResearchGate. This idea has filtered into the social sciences. Journals published by the American Economic Association require authors to deposit their data and the computer programs required to reproduce their analyses. And starting in 2014, the American Psychological Association instituted an optional “open practices” policy, offering open-data, open-materials, and preregistered-analysis-plan “badges” to authors of accepted manuscripts. This was part of the discipline’s effort to increase transparency and reduce “p-hacking,” the practice of collecting or selecting data or analyses until the results are statistically significant. This idea is not new to sociology: in the 1990s, people who published papers in ASA journals were urged to make their data publicly available, in an effort to facilitate reproduction, but that effort to create more open access to the behind-the-scenes work that goes into published papers soon faltered. If we are to get serious about the honesty and reproducibility of sociological research – and in light of several recent controversies, we should – we need to develop standards, protocols, and repositories for our data and methods of analysis – not just for quantitative analysis, but also for qualitative analysis of observational, interview, and historical data.
2) How we publish: online. Several online-only, open-access “mega-journals” have appeared since the turn of the twenty-first century, most famously the Public Library of Science (PLoS), which was launched in December 2001, as well as mega-journals affiliated with established scientific imprints such as Science (Science Advances), Nature (Scientific Reports), Cell (Cell Reports), and SAGE (SAGE Open). Peter Binfield, cofounder of PLoS and PeerJ, has argued that these journals, which cover broad subject areas, review submissions on technical merit only, and require authors to pay for the costs of article preparation, “are not limited in their potential output and as such are able to grow commensurate with any growth in submissions.” He stated that, as of mid-2013, mega-journals were publishing almost 4,000 articles per month. In sociology, we now have two online-only, open-access journals that may, someday, grow into mega-journals: the independent, editor-reviewed Sociological Science, and the ASA-supported, peer-reviewed Socius. These offer much more rapid turnarounds than standard journals – less than 30 days.
3) How we publish part 2: preprinting. Repositories for the optimistically named “preprints” (some will never be “printed” in either an online or paper journal) – more commonly known as working papers – are making it possible for scientists to share their work without waiting for the often-long and tortuous review process to reach its conclusion. The first such repository was arXiv.org, which began by covering mathematics and physics, and later expanded to computer science, computational biology, finance, and statistics. Its success recently spawned several other archives, including bioRxiv, engrXiv, and PsyArXiv. This summer, Philip Cohen and several colleagues, in partnership with the Center for Open Science, launched SocArXiv, a repository for sociology working papers. (You may have seen the buttons being passed out at this year’s ASA meeting.) And of course there are other, multi-disciplinary, repositories, notably the Social Science Research Network (SSRN) and ResearchGate.
4) How we publish part 3: open review. Some journals practice “open review,” a term that covers a variety of experiments by publishers – including some long-established journal publishers. In one form, every stage of the submission, review, and publication process is “open” to scrutiny because all materials are posted online immediately. After authors submit articles, they are posted online, and editors solicit peer reviews. After those reviews are received, they are posted online. After editors make decisions, their letters to authors are posted online. After authors revise their papers to respond to reviews, their papers and response letters are posted online… And so it goes up to the final – accepted for publication – version of the paper. This procedure is used at F1000Research, a platform for life scientists. Obviously, this process is not double-blind, but instead double-sighted: the identities of authors are known to reviewers, and vice versa. In another form of open review, the process is double-blind until after papers are accepted for publication. At that point, all versions of papers and all authorial and reviewer correspondence are posted online. A variant of this is an option at the online medical journal BMJ Open. I’m not sure that the discipline of sociology is ready to consider this, but it’s an intriguing idea.
5) After we publish: version control. It used to be that publishing was an absorbing state: the end of the road, after which nothing changed. Such a temporal structure implies that what is published is the truth. But methods and theories are always evolving, and when they do, they may invalidate prior publications. Researchers who are not aware of new methods and theories may waste time and effort using invalidated theories and outdated methods. The constant improvement of scientific theories and methods makes it useful to make public new and improved “versions” of methods and results. It is for that reason that Teytelman’s protocols.io was founded: to publicly track the evolution of life-science experimental protocols.
6) After we publish part 2: discovery. With the rapid growth of science – ever larger numbers of scientists seeking to publish their work in ever larger numbers of (sometimes very large-scale) outlets – it is becoming increasingly difficult to thoroughly review the extant literature, to find among the mass of studies the specific work that is relevant to your own project. There are several online tools that automatically notify you of relevant research, including ResearchGate, Google Scholar, RePEc for economics, and SciReader for biomedicine.
7) After we publish part 3: post-publication discussion. There is no central hub for internet-mediated discussions of published research. Only Sociological Science offers readers a place to comment, and authors a place to reply to such comments. Perusal of the first 25 articles published in that journal revealed only 23 comments, and the modal number of comments was zero. So this is clearly not a common activity for sociologists. But that might change in the future.
Social-network proximity disclosure: I am on the editorial boards of both journals.
Well, it’s the scariest time of year. For some, the scariest stuff reaches its apotheosis on Election Day, Nov. 8, while for others, Halloween is the celebration of choice. For a sociological take on the Oct. 31st festivities, check out Sociological Images’s compendium of Halloween blog posts.
I’ve been counting down these weeks to recommend reading Margee Kerr’s book Scream: Chilling Adventures in the Science of Fear (hat-tip to a neuroscientist friend for the rec), about the mechanisms underlying fear among humans. In her book, Kerr takes readers on a worldwide journey to investigate fear in different contexts, from a derelict prison where inmates served their time in solitary confinement to Japan’s notorious Suicide Forest.
Kerr is also a practicing sociologist who designs and refines an experimental haunted house, ScareHouse, located in Pittsburgh. In chapter 8 of her book, she describes how people want to bond with others after being scared and how she and colleagues have channeled that intense emotional energy with an anonymous “confessional” room where people can unload secrets. Overall, Kerr’s experiences show how sociology and related research can directly inform and shape experiences.
Now for some of our social scientists’ fear… Trigger warning!!! After the jump, courtesy of Josh de Leeuw.
[The following is a guest post from Joe Gibbons, assistant professor at San Diego State.]
One of the best kept secrets in the world of journal publishing is appealing a journal rejection. In the four or so years that I have been actively trying to get my work published, I have successfully appealed two desk rejections and three reviewer rejections. What makes this surprising to me are the shocked looks I get from colleagues, some junior faculty and some more senior, many of whom did not think it was possible to appeal rejections. Certainly it is not the norm, but if my brief experience is any indication, it is a viable option for some.
The key is the nature of the rejection. Plenty of us have gotten the rejection where you felt you could deal with the reviewer comments, or where you got a boilerplate reason from the editor as to why your paper was not sent out to reviewers at all. It’s no big secret that many reputable journals are inundated with submissions. I have enough managing-editor friends and mentors with editing experience to know that they have to learn to make snap decisions to deal with the backlog. While I am sure most of these calls are the right ones (the bar for quality publications should be high), I have encountered more than one decision on my work that I found to be hasty.
In the following, I lay down some ground rules for the art of the appeal, based on my own experiences:
- Thoroughly… no… exhaustively read the reviews. This should include the following substeps:
  - Ensure there are no ‘head shots’: fundamental flaws with your methods or core argument that are not fixable or would lead to a completely different paper if revised.
  - Gauge the waters with the reviewers. How positive were the reviews? I find that one very positive review is at least some grounds for appeal… provided, of course, no other reviewer identified a head shot. For the more lukewarm reviews, keep your eyes peeled for lines such as “this would make a good contribution to this journal… BUT” or “this article makes interesting arguments… BUT.” These mean the reviewer is open to your paper, but it needs work. Sometimes you also just get the cranky ones who have nothing nice to say, but nothing too mean either.
  - Did the reviewers offer sound advice on how to fix the paper? I find that if I can point to specific areas in the appeal where the reviewers tell me how to fix the paper, it goes a long way with the editor. There are all kinds of reviewers out there, but there are some truly marvelous people who can distance themselves from personal biases and offer objective suggestions on what to change about a paper.
  - Did the reviewers just not get what you are doing in the paper? As someone who frequently publishes in multidisciplinary Urban and Demographic journals, I often get people from the lands of Public Health or Urban Policy who have their own ways of doing things and are not fans of people who do otherwise. Sometimes there is not much you can do about these situations. Other times, however, you can try to argue why your approach is as valid as theirs. Oftentimes just offering more explanation of your approach will do the trick. Also, entertain the possibility of incorporating their approach, provided it does not compromise what you are doing.
- In writing your appeal to the editor, strike that fine balance of deferential and assertive. You want to strongly make the case that your paper is worth reconsideration without setting the editor off: “They think they know better than me?!” Rely heavily on the evidence. Make the case that the reviewers like you, or at least think you are redeemable. Point to the places where they thought you could fix the manuscript. Argue that these changes should sufficiently fix the paper. Make it clear why you think their journal, of all the journals out there, is the best home for this manuscript. This point is especially important for desk rejections. At the same time, make it clear that you respect the editor’s authority and will accept whatever decision they ultimately make.
- Don’t be afraid to follow up. Again, editors and managing editors are busy people. It should not come as too much of a shock if your appeal falls through the cracks. If you hear nothing in two weeks, email them and ask politely if they have had time to consider your response. For reviewer rejections, time is somewhat of the essence here, as you want to get the original reviewers.
- Hope for the best, expect the worst. Failure is the close companion of an academic. Sometimes you can make the best argument in the world, only to see it fall on deaf ears. For example, earlier this year I had a paper rejected by a respected ASA topical journal. I had a reviewer who really did not like my methods and framing at all and tore the paper apart. I felt that they offered little substantive reason why my approach would not work. I appealed to the editor, pointing out what I would change, and got a ‘tough luck’ response. There are people out there who are confident in their views and will not bend. Pissing them off by pushing further will only hurt you in the long run. If you get a hard no, or they keep ducking your emails, take a deep breath and move on.
- Be appreciative either way. Even if you know for sure it was ridiculous for your paper to get rejected in the first place, remember once again that the editors and managing editors are very busy people doing the often thankless task of identifying quality research to share with the world. Thank them for taking the time to consider your request.
Good luck out there! As Wayne Gretzky once said, “You miss one hundred percent of the shots you don’t take.” So take your shot if there is a chance you will make it!
A few days ago, Mark Suchman, chair of ASA’s OOW section, circulated a Google Doc with a call for people to add movies they use in class to illustrate work and organizational concepts to students. Orgtheory has had a couple of threads on this topic over the years, and I just added a couple of my own favorites (sadly not so current) to the document. I definitely will be checking some of these out next time I teach undergrad orgs — check it out, or add some contributions of your own.
My response to this question on Facebook:
- Do not publish in PLoS if you need a status boost for the job market or promotion.
- Do publish if journal prestige is not a factor. My case: good result, but I was in a race against other computer scientists. Simply could not wait for a four-year journal process.
- Reviewer quality: It is run mainly by physical and life scientists. Reviews for my paper were similar in quality to what CS people gave me on a similar paper submitted to CS conferences/journals.
- Personally, I was satisfied. Review process fair, fast publication, high citation count. Would not try to get promoted on the paper by itself, though.
- A lot of people at strong programs have PLoS One pubs but usually as part of a larger portfolio of work.
- A typical good paper in PLoS is from a strong line of work, but the paper just bounced around or was too idiosyncratic.
- PLoS One publishes some garbage.
- Summary: right tool for the right job. Use wisely.
Another person noted that many elite scientists use the “Science, Nature, or PLoS One model.” In other words, you want high impact or just get it out there. No sense wasting years of time with lesser journals.
The Society for the Advancement of Socio-Economics (SASE) website has made Neil Fligstein’s powerpoint slides on the history of economic sociology available for general viewing. (Update: looks like the link is broken at the moment, so here are the slides: 1469704310_imagining_economic_sociology_-socio-economics-fligstein) It’s a fascinating read of the development of a sub-field across continents, and it also includes discussion of a challenge that some believe plagues the sociology discipline:
Both Max Weber and Thomas Kuhn recognized that Sociology as a discipline might be doomed to never cumulate knowledge.
- Sociology would proceed as a set of research projects which reflected the current concerns and interests of a small set of scholars
- When the group hit a dead end in producing novel results, the research program would die out only to be replaced by another one
- Progress in economic sociology is likely to be made by putting our research programs into dialogue with one another to make sense of how the various mechanisms that structure markets interact
- Failure to do so risks fragmenting the field into ever smaller pieces and remaining subject to fashion and fad
Fligstein’s claim for these field-fragmenting tendencies stems from the current structure of the academic field. He depicts sociology as rewarding scholars for applying ideas from one area to another area where current theorizing is insufficient, rather than expanding existing research:
- … the idea is not to work on the edge of some mature existing research program with the goal of expanding it
- But instead, one should be on the lookout for new ideas from different research programs to borrow, to make sense of what should be done next
In short, scholars tend to form intellectual islands where they can commune with other like-minded scholars. Bridging paths to other islands can generate rewards, but the efforts needed to disseminate knowledge more widely – even within a discipline – can exceed any one person’s capacity.