the internet and the future of academic publishing


[Just in time for Open Access Week, the following is a guest post by Heather Haveman, professor of sociology and business at Berkeley.]

Last Friday, I attended a fascinating talk at the Berkeley Institute for Data Science by Lenny Teytelman, a computational biologist and cofounder of protocols.io, a platform for sharing experimental protocols (recipes, if you will) in the biological sciences.  He produced a remarkably nuanced sociological analysis of how the internet is changing academic publishing.  I want to outline his talk and consider its implications for sociology specifically and the social sciences in general.

Before we get to the future, let’s make sure we understand the past…  The first scholarly journals were published in 1665.  The Journal des Sçavans [Journal of Learned Men] was founded in Paris on January 5th by Denis de Sallo, a counselor to the Parlement of Paris.  It published reviews of scholarly books; announcements of scientific inventions and experiments; essays on chemistry, astronomy, anatomy, religion, history, and the arts; obituaries of famous men of letters and science; and news about the Sorbonne.  Inspired by this innovation, the Philosophical Transactions was launched in London on March 6th by Henry Oldenburg, the Secretary of the Royal Society of London.  It recorded investigations into the natural sciences and reported news about the activities of English scientists.  Later, it became the official organ of the Royal Society.  These pioneering journals left a deep imprint on science:  351 years later, their descendants dominate academic publishing.  Contemporary academic journals operate very similarly to these pioneers:  authors submit articles, editors decide what merits inclusion (relying now on peer review), and publishers distribute journals to subscribers (now often in electronic rather than print form).  Over this vast sweep of history, academic journals have created strong – if not always peaceful – communities of scientists that span the globe.

In March 1989, Tim Berners-Lee, a software engineer at CERN [Conseil Européen pour la Recherche Nucléaire] near Geneva, proposed a new communications medium for sharing information among academic scientists:  the world wide web, which would bring together some existing technologies (hypertext, TCP/IP, FTP, and internet domain names) and create others (HTTP, HTML, URLs).  The many, many developments in distributed information sharing that have occurred since Berners-Lee’s vision of the world wide web went live in December 1990, when the first web page appeared on the internet, are threatening to erode the dominance of academic journals.  In his talk, Teytelman noted seven ways that the internet has changed academic publishing specifically and academic information-sharing in general.

1) What we publish:  Beyond printing summaries of our investigations, we can now “publish” our data and analytical procedures in repositories for data, data visualizations, and research protocols/methods at such sites as figshare, Dryad, and ResearchGate.  This idea has filtered into the social sciences.  Journals published by the American Economic Association require authors to deposit their data and the computer programs required to reproduce their analyses.  And starting in 2014, the Association for Psychological Science instituted an optional “open practices” policy, offering open-data, open-materials, and preregistered-analysis-plan “badges” for authors of accepted manuscripts.  This was part of the discipline’s effort to increase transparency and reduce “p-hacking,” the practice of collecting or selectively analyzing data until the results come out statistically significant.  This idea is not new to sociology:  in the 1990s, people who published papers in ASA journals were urged to make their data publicly available, in an effort to facilitate replication, but that effort to create more open access to the behind-the-scenes work that goes into published papers soon faltered.  If we are to get serious about the honesty and reproducibility of sociological research – and in light of several recent controversies, we should – we need to develop standards, protocols, and repositories for our data and methods of analysis – not just for quantitative analysis, but also for qualitative analysis of observational, interview, and historical data.
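To make the p-hacking problem concrete, here is a minimal, purely illustrative Python simulation – my own sketch, not anything from Teytelman’s talk or any journal’s actual policy – of one common form of it, optional stopping: a researcher keeps collecting cases and re-testing until the result crosses p < .05, even though there is no true effect.  Under these assumptions the false-positive rate climbs well above the nominal 5%.

    import math
    import random
    import statistics

    def two_sided_p(sample, mu0=0.0):
        """Rough two-sided one-sample test p-value via a normal approximation."""
        n = len(sample)
        se = statistics.stdev(sample) / math.sqrt(n)
        z = abs(statistics.mean(sample) - mu0) / se
        return 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))

    def optional_stopping_study(max_batches=10, batch_size=20):
        """Draw data from a true null effect, re-testing after every batch,
        and stop as soon as the result looks 'significant'."""
        data = []
        for _ in range(max_batches):
            data += [random.gauss(0, 1) for _ in range(batch_size)]
            if two_sided_p(data) < 0.05:
                return True   # a false positive, locked in by early stopping
        return False

    trials = 1000
    hits = sum(optional_stopping_study() for _ in range(trials))
    print(f"'Significant' findings under a true null: {hits / trials:.0%}")

Preregistration badges of the kind mentioned above are meant to close off exactly this sort of flexibility, because the stopping rule and the analysis are fixed before the data come in.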

2) How we publish:  online.  Several online-only, open-access “mega-journals” have appeared since the turn of the twenty-first century, most famously the Public Library of Science (PLoS), whose flagship mega-journal PLOS ONE launched in December 2006, as well as mega-journals affiliated with established scientific imprints such as Science (Science Advances), Nature (Scientific Reports), Cell (Cell Reports), and SAGE (SAGE Open).  Peter Binfield, former publisher of PLOS ONE and cofounder of PeerJ, has argued that these journals, which cover broad subject areas, review submissions on technical merit only, and require authors to pay the costs of article preparation, “are not limited in their potential output and as such are able to grow commensurate with any growth in submissions.”  He stated that as of mid-2013, mega-journals were publishing almost 4,000 articles per month.  In sociology, we now have two online-only, open-access journals that may, someday, grow into mega-journals:  the independent, editor-reviewed Sociological Science, and the ASA-supported, peer-reviewed Socius.[1]  These offer much more rapid turnarounds than standard journals – less than 30 days.

3) How we publish part 2:  preprinting.  Repositories for the optimistically named “preprints” (some will never be “printed” in either an online or paper journal) – more commonly known as working papers – are making it possible for scientists to share their work without waiting for the often-long and tortuous review process to reach its conclusion.  The first such repository was arXiv, which began by covering mathematics and physics, and later expanded to computer science, computational biology, finance, and statistics.  Its success recently spawned several other archives, including bioRxiv, engrXiv, and PsyArXiv.  This summer, Philip Cohen and several colleagues, in partnership with the Center for Open Science, launched SocArXiv, a repository for sociology working papers.  (You may have seen the buttons being passed out at this year’s ASA meeting.)  And of course there are other, multi-disciplinary repositories, notably the Social Science Research Network (SSRN) and ResearchGate.

4) How we publish part 3:  open review.  Some journals practice “open review,” a term that covers a variety of experiments by publishers – including some long-established journal publishers.  In one form, every stage of the submission, review, and publication process is “open” to scrutiny because all materials are posted online immediately.  After authors submit articles, they are posted online, and editors solicit peer reviews.  After those reviews are received, they are posted online.  After editors make decisions, their letters to authors are posted online.  After authors revise their papers to respond to reviews, their papers and response letters are posted online…  And so it goes, up to the final – accepted for publication – version of the paper.  This procedure is used at F1000Research, a platform for life scientists.  Obviously, this process is not double-blind but the reverse:  the identities of authors are known to reviewers, and vice versa.  In another form of open review, the process is double-blind until after papers are accepted for publication.  At that point, all versions of papers and all authorial and reviewer correspondence are posted online.  A variant of this is an option at the online medical journal BMJ Open.  I’m not sure that the discipline of sociology is ready to consider this, but it’s an intriguing idea.

5) After we publish:  version control.  It used to be that publishing was an absorbing state:  the end of the road, after which nothing changed.  Such a temporal structure implies that what is published is the truth.  But methods and theories are always evolving, and when they do, they may invalidate prior publications.  Researchers who are not aware of new methods and theories may waste time and effort using invalidated theories and outdated methods.  The constant improvement of scientific theories and methods makes it useful to make public new and improved “versions” of methods and results.  It is for that reason that Teytelman’s protocols.io was founded:  to publicly track the evolution of life-science experimental protocols.

6) After we publish part 2:  discovery.  With the rapid growth of science – ever larger numbers of scientists seeking to publish their work in ever larger numbers of (sometimes very large-scale) outlets – it is becoming increasingly difficult to thoroughly review the extant literature, to find among the mass of studies the specific work that is relevant to your own project.  There are several online tools that automatically notify you of relevant research, including ResearchGate, Google Scholar, RePEc for economics, and SciReader for biomedicine.

7) After we publish part 3:  post-publication discussion.  In sociology, there is no central hub for internet-mediated discussion of published research.  Only Sociological Science offers readers a place to comment and authors a place to reply to those comments.  Perusal of the first 25 articles published in that journal revealed only 23 comments, and the modal number of comments was zero.  So this is clearly not a common activity for sociologists.  But that might change in the future.

[1] Social-network proximity disclosure:  I am on the editorial boards of both journals.

Written by epopp

October 26, 2016 at 5:28 pm

Posted in uncategorized

scary stuff trigger alert


Well, it’s the scariest time of year.  For some, the scariest stuff reaches its apotheosis on Election Day, Nov. 8, while for others, Halloween is the celebration of choice.  For a sociological take on the Oct. 31st festivities, check out Sociological Images’s compendium of Halloween blog posts.

I’ve been counting down the weeks to recommend reading Margee Kerr‘s book Scream: Chilling Adventures in the Science of Fear (hat-tip to a neuroscientist friend for the rec), about the mechanisms underlying fear among humans.  In her book, Kerr takes readers on a worldwide journey to investigate fear in different contexts, from a derelict prison where inmates served their time in solitary confinement to Japan’s notorious Suicide Forest.

Kerr is also a practicing sociologist who designs and refines an experimental haunted house, ScareHouse, located in Pittsburgh.  In chapter 8 of her book, she describes how people want to bond with others after being scared and how she and her colleagues have channeled that intense emotional energy with an anonymous “confessional” room where people can unload secrets.  Overall, Kerr’s experiences show how sociology and related research can directly inform and shape experiences.

To learn more about fear from Kerr, read the Jezebel interview with her here or watch this video about how fear evolved and “Why is being scared so fun?”:

Now for some of our social scientists’ fears… Trigger warning!!! After the jump, courtesy of Josh de Leeuw.


Written by katherinechen

October 25, 2016 at 11:07 pm

appealing journal rejections


[The following is a guest post from Joe Gibbons, assistant professor at San Diego State.]

One of the best-kept secrets in the world of journal publishing is appealing a journal rejection. In the four or so years that I have been actively trying to get my work published, I have successfully appealed two desk rejections and three reviewer rejections. What surprises me are the shocked looks I get from colleagues, some junior faculty and some more senior, many of whom did not think it was possible to appeal rejections. Certainly it is not the norm, but if my brief experience is any indication, it is a viable option for some.

The key is the nature of the rejection. Plenty of us have gotten a rejection where we felt we could deal with the reviewer comments, or where we got a boilerplate reason from the editor as to why the paper was not sent out to reviewers at all. It’s no big secret that many reputable journals are inundated with submissions. I have enough managing-editor friends and mentors with editing experience to know that they have to learn to make snap decisions to deal with the backlog. While I am sure most of these calls are the right ones – the bar for quality publications should be high – I have encountered more than one decision on my work that I found hasty.

In what follows, I lay out some ground rules for the art of the appeal, based on my own experiences:

  1. Thoroughly…no…exhaustively read the reviews.  This should include the following substeps:
  • Ensure no ‘head shots’: fundamental flaws with your methods or core argument that are not fixable or would lead to a completely different paper if revised.
  • Gauge the waters with the reviewers. How positive were the reviews? I find that at least one very positive review provides some grounds for appeal… provided, of course, no other reviewer identified a head shot. For the more lukewarm reviews, keep your eyes peeled for lines such as “this would make a good contribution to this journal…BUT” or “this article makes interesting arguments…BUT.” This means they are open to your paper, but it needs work.  Sometimes you also just get the cranky ones who have nothing nice to say, but nothing too mean either.
  • Did the reviewers offer sound advice on how to fix the paper? I find that if, in the appeal, I can point to specific places where the reviewers tell me how to fix the paper, it goes a long way with the editor. There are all kinds of reviewers out there, but there are some truly marvelous people who can distance themselves from personal biases and offer objective suggestions on what to change about the paper.
  • Did the reviewers just not get what you are doing in the paper? As someone who frequently publishes in multidisciplinary urban and demographic journals, I often get people from the lands of Public Health or Urban Policy who have their own ways of doing things and are not fans of people who do otherwise. Sometimes there is not much you can do about these situations. Other times, however, you can try to argue why your approach is as valid as theirs. Oftentimes just offering more explanation of your approach will do the trick. Also, entertain the possibility of incorporating their approach, provided it does not compromise what you are doing.
  2. In writing your appeal to the editor, strike that fine balance between deferential and assertive. You want to strongly make the case that your paper is worth reconsideration without setting the editor off: “They think they know better than me?!” Rely heavily on the evidence. Make the case that the reviewers like you, or at least think you are redeemable.  Point to the places where they thought you could fix the manuscript. Argue that these changes should sufficiently fix the paper. Make it clear why you think their journal, of all the journals out there, is the best home for this manuscript. This point is especially important for desk rejections. At the same time, make it clear that you respect the editor’s authority and will accept whatever decision they ultimately make.
  3. Don’t be afraid to follow up. Again, editors and managing editors are busy people. It should not come as too much of a shock if your appeal falls through the cracks. If you hear nothing in two weeks, email them and ask politely if they have had time to consider your response. For reviewer rejections, time is of the essence here, as you want to get the original reviewers back.
  4. Hope for the best, expect the worst. Failure is the close companion of an academic. Sometimes you can make the best argument in the world, only to see it fall on deaf ears. For example, earlier this year I had a paper rejected by a respected ASA topical journal. One reviewer really did not like my methods and framing at all and tore the paper apart, yet I felt they offered little substantive reason why my approach would not work. I appealed to the editor, pointing out what I would change, and got a ‘tough luck’ response. There are people out there who are confident in their views and will not bend. Pissing them off by pushing further will only hurt you in the long run. If you get a hard no, or they keep ducking your emails, take a deep breath and move on.
  5. Be appreciative either way. Even if you know for sure it was ridiculous for your paper to get rejected in the first place, remember once again that editors and managing editors are very busy people doing the often thankless task of identifying quality research to share with the world. Thank them for taking the time to consider your request.

Good luck out there! As Wayne Gretzky once said, “You miss one hundred percent of the shots you don’t take.” So take your shot if there is a chance you will make it!

Written by epopp

October 24, 2016 at 6:39 pm

Posted in uncategorized

black mirror’s nosedive episode; also sf and social theory


If you don’t already watch Black Mirror, it’s worth checking out, especially now that you can get every episode on Netflix. It’s a wonderful science fiction/horror anthology, sort of a modern Twilight Zone, but with more of a focus on technology. The first episode of the latest season, Nosedive (see some reviews here and here, but spoilers!), is truly excellent. Bryce Dallas Howard plays a woman, Lacie, who is at once vulnerable and ambitious, smiling with a too-obvious strain at everyone she passes.  She smiles so hard because she’s literally being rated for each interaction. That’s the amazing premise of this episode: a Facebook-like app gives everyone an averaged rating between 1 and 5, and each interaction is a new chance to change your score.

There’s a lot going on there, and a tremendous amount that’s useful for us to think (and teach) with as sociologists.  First, there’s the obvious connection to the current pressure to like (and be liked!) on Facebook, Twitter, Instagram, and other social media platforms.  It’s also important that the main character here is a woman, and that so many of the interactions she has are also with women. The increased emotional labor expected of women (from men, of course, but also from women) is an important sociological insight, and it’s not surprising it’s reproduced online.

Yet what struck me even more about this episode is what it shows—albeit totally obliquely—about the micro-macro link.  The rich and powerful all have very high ratings, and while we never really find out how (surely the rich are sometimes jerks?), we get a sense of it through observing the interaction rituals Lacie goes through every day. She wants to make sure she gets a 5 as often as possible, and a 5 from someone with a higher rating is weighted more heavily.  As such, she has an incentive to give a 5 to everyone with higher status than hers, in the hopes that they’ll reciprocate.  Yet they obviously have less incentive to rate her highly, not least because her rating of them carries less weight in the metrics.
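Since the episode only hints at the mechanics, here is a minimal Python sketch of how such a status-weighted rating might update; the formula, the weights, and the numbers are entirely my own guesses, not anything the show specifies.

    def rate(current_avg, effective_n, new_score, rater_avg, max_rating=5.0):
        """One guess at a Nosedive-style update: each new 1-5 rating is
        weighted by the rater's own standing, so a 5 from a 4.8 moves you
        more than a 5 from a 2.1."""
        weight = rater_avg / max_rating              # higher-status raters count more
        total = current_avg * effective_n + new_score * weight
        effective_n += weight
        return total / effective_n, effective_n

    # Lacie, hovering in the low 4s, gets 5s from a high-status and a low-status rater
    avg, n = rate(4.2, 1000, new_score=5.0, rater_avg=4.8)
    avg, n = rate(avg, n, new_score=5.0, rater_avg=2.1)
    print(round(avg, 3))   # creeps upward, but only barely

Multiply that arithmetic across thousands of daily interactions and the micro-level deference Lacie performs aggregates into the macro-level stratification the episode depicts.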

Those differences have real stakes: Lacie is basically “middle class” in that she’s in the low 4s.  Once you start getting less than that, many perks and privileges are taken away from you.  I kept thinking of Erving Goffman and Randall Collins as I watched the show, and also of recent work by people like Julia Ticona and Sherry Turkle. Which is to say: there’s a lot there, and I’d be interested in people’s thoughts.

Along those lines, it’s worth thinking about how science fiction as a genre provides great heuristics that push things that are already happening up to 11: in this case, what if everyone were rated on a 1 to 5 scale? What’s great about that is how similar it is to a certain way of thinking about social theory.  A good social theory simplifies a lot of complex social noise into an argument: religion is like opium, say, or cultural reproduction is like the accumulation of economic capital. Much like science fiction premises, these don’t work in every context, but they can be very helpful ways to think about the world.

Written by jeffguhin

October 23, 2016 at 12:36 am

movies for teaching orgs & work


A few days ago, Mark Suchman, chair of ASA’s OOW section, circulated a Google Doc with a call for people to add movies they use in class to illustrate work and organizational concepts to students. Orgtheory has had a couple of threads on this topic over the years, and I just added a couple of my own favorites (sadly not so current) to the document. I definitely will be checking some of these out next time I teach undergrad orgs — check it out, or add some contributions of your own.

Written by epopp

October 11, 2016 at 6:57 pm

Posted in uncategorized

should you publish in PLoS One?


My response to this question on Facebook:

  1. Do not publish in PLoS if you need a status boost for the job market or promotion.
  2. Do publish if journal prestige is not a factor. My case: good result but I was in a race against other computer scientists. Simply could not wait for a four year journal process.
  3. Reviewer quality: It is run mainly by physical and life scientists. Reviews for my paper were similar in quality to what CS people gave me on a similar paper submitted to CS conferences/journals.
  4. Personally, I was satisfied. Review process fair, fast publication, high citation count. Would not try to get promoted on the paper by itself, though.
  5. A lot of people at strong programs have PLoS One pubs but usually as part of a larger portfolio of work.
  6. A typical good paper in PLoS One is from a strong line of work, but the paper just bounced around or was too idiosyncratic.
  7. PLoS One publishes some garbage.
  8. Summary: right tool for the right job. Use wisely.

Another person noted that many elite scientists use the “Science, Nature, or PLoS One model.” In other words, you want high impact or just get it out there. No sense wasting years of time with lesser journals.

50+ chapters of grad skool advice goodness: Grad Skool Rulz ($2!!!!)/From Black Power/Party in the Street 

Written by fabiorojas

October 11, 2016 at 12:13 am

does making one’s scholarly mark mean transplanting the shoulders of giants elsewhere?


The Society for the Advancement of Socio-Economics (SASE) website has made Neil Fligstein‘s PowerPoint slides on the history of economic sociology available for general viewing.  (Update: looks like the link is broken at the moment, so here are the slides: 1469704310_imagining_economic_sociology_-socio-economics-fligstein)  It’s a fascinating read on the development of a sub-field across continents, and it also includes discussion of a challenge that some believe plagues the sociology discipline:

Both Max Weber and Thomas Kuhn recognized that Sociology as a discipline might be doomed to never cumulate knowledge.

  • Sociology would proceed as a set of research projects which reflected the current concerns and interests of a small set of scholars
  • When the group hit a dead end in producing novel results, the research program would die out only to be replaced by another one
  • Progress in economic sociology is likely to be made by putting our research programs into dialogue with one another to make sense of how the various mechanisms that structure markets interact
  • Failure to do so risks fragmenting the field into ever smaller pieces and remaining subject to fashion and fad

Fligstein’s explanation for these field-fragmenting tendencies stems from the current structure of the academic field.  He depicts sociology as rewarding scholars for applying ideas from one area to another area where current theorizing is insufficient, rather than for expanding existing research:

  • … the idea is not to work on the edge of some mature existing research program with the goal of expanding it
  • But instead, one should be on the lookout for new ideas from different research programs to borrow to make sense for what should be done next

In short, scholars tend to form intellectual islands where they can commune with other like-minded scholars.  Bridging paths to other islands can generate rewards, but the efforts needed to disseminate knowledge more widely – even within a discipline – can exceed any one person’s capacity.


Written by katherinechen

October 10, 2016 at 6:30 pm