At orgtheory we’ve tried to develop a loose environment for scholarly discussion. By loose, I mean a place where people can feel comfortable talking about serious ideas in a fun way, without the formality of a colloquium, and one more open and inclusive than most professional settings. For the most part, I think we’ve been successful at facilitating that sort of feeling among contributors. Over the years we’ve had great conversations that have not been constrained by status, rank, or other forms of exclusivity. A community has formed around orgtheory that, while including a lot of sociologists, is fairly interdisciplinary and broad. Personally, that’s why I keep coming back and, even if I’ll go weeks without posting anything, I place a lot of value on this blog and the people who come here to speak their mind.
Our discussions frequently veer from their intended targets, and most of the time that is totally okay and within the norms of orgtheory. This place would be boring if people were required to stay on point all the time. It’s consistent with the loose, collegial atmosphere we’ve tried to create.

But occasionally (and I mean very infrequently) discussions turn in a sour direction. This wasn’t a problem for the first few years of the blog, perhaps because in those early years we knew almost everyone who came online to connect with us. We had a small community and it was easy to enforce norms with each other. But in the past couple of years, we’ve had a few posts where commenters have become a little snippy with each other. We’ve talked internally about how best to handle those outbursts. As I see it, two ideals compete with each other. On the one hand, we value inclusiveness and believe that the best way to encourage real discussion and debate is not to censor. We want people to feel that their input is valued, regardless of status, rank, expertise, etc. On the other hand, we value civility and believe that if people treat each other according to the “golden rule,” a greater variety of people will be more likely to participate. And it does seem to be true that when discussions get especially rancorous, many people drop out of the debate and the more impassioned voices surge to the front line. The rules of discussion that Fabio posted a few months ago were a response to the rising tide of incivility that we observed on the blog.
orgtheory bleg – seeking excerpts or articles on how to do participatory action research (par) in organizations
Orgheads, a student in my first ever Ph.D.-level orgtheory course has asked us to include a reading on how to do participatory action research (PAR) in organizations. Alternatively, the reading could be in the area of public scholarship (I will assign Diane Vaughan’s AJS article) or community-based participatory research (CBPR). Anyone willing to share recommendations for readings relevant to organizational research? Thanks!
When advising PhD students, I try to dispel a misleading idea – that all the “good” jobs go quickly and you are a complete failure if you can’t find employment by the Fall of your final grad skool year. This is simply incorrect. The sociology job market actually has three distinct phases. Once you appreciate this, it will help you out a lot:
- Round 1: The classic arts & sciences positions. In sociology, the research intensive programs usually advertise in summer, accept applications by October, interview in November, and extend offers by December (or earlier). The most competitive liberal arts colleges seem to recruit in round 1.
- Round 2: January-March – teaching intensive, professional schools, and post-docs. Winter break provides a nice cut point; many programs choose to go in the early Winter. In sociology, b-schools and ed schools will often interview in the Winter. A lot of high status, well funded post-docs, such as the recently deceased RWJ program, go at this time.
- Round 3: March-early summer. Pot luck – a diverse group of positions, including short term post-docs, very teaching intensive schools, private sector jobs, government, policy, and jobs at R1s that opened up due to last minute shifts in budgets. Some jobs may still be open if they were *really* slow in processing applications, or they had a long string of interviews that didn’t pan out. I’ve seen people get some very high quality jobs as late as April or May, because candidates 1-4 turned a department down.
I am not saying that there are a lot of jobs. It is still the case that academia is very competitive and some very good people won’t find jobs. What I am saying is that sociologists have a lot of options that are spread across the academic year. Don’t panic if things don’t immediately work out. It is in your interest to keep your eyes open and keep applying.
A guest post by Ezra Zuckerman, professor at MIT Sloan School of Management and co-founder and deputy editor of Sociological Science.
Last week, Rob Walsh posted an interesting question over on Scholastica’s “conversation” blog about whether Sociological Science—a new open-access journal of which I am privileged to be a co-founder—will challenge the status quo in academic publishing, or at least in sociology. Certainly, I think it will. To be clear though, this does not mean that SocSci will displace traditional options (which are based on a more “developmental” model, with multiple rounds of reviews and a tendency to err on the side of rejection). Rather, SocSci presents a distinct alternative (with the editorial team committed to rapid turn-around and a tendency to err on the side of publication, but with post-publication debate) to complement the traditional model. I hope that both models will be around to stay and that the presence of distinct alternatives will be healthy for the field.
And yet, while we think that journals with each of these approaches to peer review (as well as variations thereof) are viable, there is another question that is worth asking—i.e., whether all governance forms are viable and desirable, and specifically what is the role of for-profit versus non-profit ownership of journals. This question has recently been made more salient because some for-profit publishers have apparently been starting open-access journals. A notable example is Research and Politics. See here. This particular journal is following a somewhat different strategy than is SocSci (while we are open to some genres that the existing flagship sociology journals are not, our bread and butter will be articles that could also be found in existing flagship journals). The real questions, however, are: (a) is a for-profit open-access journal the same as a non-profit? (b) if not, why not?
The answer to (a) is most definitely, “no.” When a journal is owned by a for-profit entity, its main goal is to make a profit. This means that while it may seek to become a great publication outlet, this is a means and not an end. And it will be willing to compromise on being a great outlet if it finds some other way (e.g., raising prices, directly or indirectly) to make that profit. By contrast, SocSci is owned by a set of sociologists whose motivations for making SocSci great have nothing to do with profit. You might say that we are out to burnish our reputations, but in that respect we are no different from any other sociologist when s/he seeks to publish a paper. And so our goals in managing SocSci are aligned with those of our authors and of sociologists generally. Our one and only goal is to put out a great sociology journal. We succeed when sociology succeeds, and vice versa.
Ok, but then why-oh-why might a publisher such as Sage be interested in starting open-access journals? Not only will it lose the revenues it usually makes from charging (libraries) for access, but it seems to be willing to subsidize publication fees (which are higher in open-access journals, to cover costs) for the first couple of years! Why?
Putting on my competitive strategy hat (my main teaching responsibility at MIT), there seem to be two related reasons. The first is what’s known as “product proliferation.” This is a classic entry-thwarting strategy. An analogy is Coke or Pepsi filling the soda aisle with all kinds of variations of soda instead of letting new entrants do it, and thereby gain a foothold in their market. This is a defensive move. The idea is that the incumbent uses its profits (and often its access to distribution, brand, and other assets) to pre-emptively fill up the product space and shut off the oxygen for would-be entrants. In short, while Sage and others would prefer open-access to go away, they figure that second best is to try to prevent competition by doing it themselves and doing it in such a way that it does not cannibalize their existing (profitable!) products. An implication is that you would expect for-profit open-access journals to adopt approaches that do not compete head to head with their existing franchises. And giving away the product is exactly what you would expect if they are trying to undercut the competition and limit entry. It is not sustainable in the long-term; but in the long-term, the threat of entry will have abated and then they can go back to business as usual.
The second possibility is that this is more like a rear-guard action, analogous to traditional newspapers’ attempts to address the threat of the internet. The point here is that they know that the old business model can’t work long-term and so they’re just trying what they can do to stem the tide. So they put the content out there for free and hope they’ll figure out a way to make it work down the road. We know how well this has worked out for the newspapers….
A slight variation on the last possibility is that they think the newcomers can’t generate the relevant “content” on their own, and that incumbents are in the best position to do it. But this is nonsense, as SocSci is showing. With modern communication technologies (and great publishing partners like Scholastica), there is no reason social scientists cannot take matters into their own hands and do what we did.
So… social scientists of the world unite (into small teams like ours)! You have nothing to lose but those forms that sign away your copyright!
A guest post by Jerry Davis. He is the Wilbur K. Pierpont Collegiate Professor of Management at the Ross School of Business at the University of Michigan.
By this point everyone in the academy is familiar with the arguments of Nicholas Kristof and his many, many critics regarding the value of academics writing for the broader public. This weekend provided a crypto-quasi-experiment that illustrated why aiming to do research that is accessible to the public may not be a great use of our time. It also showed how the “open access” model can create bad incentives for social scientists to write articles that are the nutritional equivalent of Cheetos.
Balazs Kovacs and Amanda Sharkey have a really nice article in the March issue of ASQ called “The Paradox of Publicity: How Awards Can Negatively Affect the Evaluation of Quality.” (You can read it here: http://asq.sagepub.com/content/59/1/1.abstract) The paper starts with the intriguing observation that when books win awards, their sales go up but their evaluations go down on average. One can think of lots of reasons why this should not be true, and several reasons why it should, all implying different mechanisms at work. The authors do an extremely sophisticated and meticulous job of figuring out which mechanism was ultimately responsible. (Matched sample of winning and non-winning books on the short list; difference-in-difference regression; model predicting reviewers’ ratings based on their prior reviews; several smart robustness checks; and transparency about the sample to enhance replicability.) As is traditional at ASQ, the authors faced smart and skeptical reviewers who put them through the wringer, and a harsh and generally negative editor (me). This is a really good paper, and you should read it immediately to find out whodunit.
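For readers unfamiliar with the design, the core comparison can be sketched in a few lines of code. The numbers below are invented for illustration and are not from the study, and the sketch omits the paper’s matching, regression controls, and robustness checks; it shows only the difference-in-differences logic.

```python
# A minimal, invented illustration of difference-in-differences:
# compare the change in winners' ratings to the change for matched
# shortlisted non-winners over the same period. All ratings are made up.
ratings = {
    ("winner", "before"): [4.2, 4.1],
    ("winner", "after"): [4.0, 3.6],
    ("shortlist", "before"): [4.0, 4.1],
    ("shortlist", "after"): [4.0, 4.1],
}

def mean(xs):
    return sum(xs) / len(xs)

# Winners' post-award change, net of the change among comparable
# non-winners, is the diff-in-diff estimate of the award's effect.
winner_change = mean(ratings[("winner", "after")]) - mean(ratings[("winner", "before")])
control_change = mean(ratings[("shortlist", "after")]) - mean(ratings[("shortlist", "before")])
did = winner_change - control_change
print(round(did, 2))  # negative here, mirroring the direction of the paper's finding
```

The point of the shortlisted-but-not-winning comparison group is that it absorbs trends common to all prize-caliber books, so the remaining difference can more plausibly be attributed to winning itself.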
The paper has gotten a fair bit of press, including write-ups in the New York Times and The Guardian (http://www.theguardian.com/books/2014/feb/21/literary-prizes-make-books-less-popular-booker). And what one discovers in the comments section of these write-ups is that (1) there is no reading comprehension test to get on the Internet, and (2) everyone is a methodologist. Wrote one Guardian reader:
The methodology of this research sounds really flawed. Are people who post on Goodreads representative of the general reading public and/or book market? Did they control for other factors when ‘pairing’ books of winners with non-winners? Did they take into account conditioning factors such as cultural bias (UK readers are surely different from US, and so on). How big was their sample? Unless they can answer these questions convincingly, I would say this article is based on fluff.
Actually, answers to some of these questions are in The Guardian’s write-up: the authors had “compared 38,817 reader reviews on GoodReads.com of 32 pairs of books. One book in each pair had won an award, such as the Man Booker prize, or America’s National Book Award. The other had been shortlisted for the same prize in the same year, but had not gone on to win.” And the authors DID answer these questions convincingly, through multiple rounds of rigorous review; that’s why it was published in ASQ. The Guardian included a link to the original study, where the budding methodologist-wannabe could read through tables of difference-in-difference regressions, robustness checks, data appendices, and more. But that would require two clicks of a functioning mouse, and an attention span greater than that of a 12-year-old.
Another commenter wrote:

This is a non story based on very iffy research. Like is not compared with like. A positive review in the New York Times is compared with a less complimentary reader review on GoodReads…I’ll wait to fully read the actual research in case it’s been badly reported or incorrectly written up
Evidently this person could not even be troubled to read The Guardian’s brief story, much less the original article, and I’m a bit skeptical that she will “wait to fully read the actual research” (where her detailed knowledge of Heckman selection models might come in handy). After this kind of response, one can understand why academics might prefer to write for colleagues with training and a background in the literature.
Now, on to the “experimental” condition of our crypto-quasi-experiment. The Times reported another study this weekend, this one published in PLoS One (of course), which found that people who walked down a hallway while texting on their phone walked slower, in a more stilted fashion, with shorter steps, and less straight than those who were not texting (http://well.blogs.nytimes.com/2014/02/20/the-difficult-balancing-act-of-texting-while-walking/). Shockingly, this study did not attract wannabe methodologists, but a flood of comments about how pedestrians who text are stupid and deserve what they get. Evidently the meticulousness of the research shone through the Times write-up.
One lesson from this weekend is that when it comes to research, the public prefers Cheetos to a healthy salad. A simple bite-sized chunk of topical knowledge goes down easy with the general public. (Recent findings that are frequently downloaded on PLoS One: racist white people love guns; time spent on Facebook makes young adults unhappy; personality and sex influence the words people use; and a tiny cabal of banks controls the global economy.)
A second lesson is that there are great potential downsides to the field embracing open access journals like PLoS One, no matter how enthusiastic Fabio is. Students enjoy seeing their professors cited in the news media, and deans like to see happy students and faculty who “translate their research.” This favors the simple over the meticulous, the insta-publication over work that emerges from engagement with skeptical experts in the field (a.k.a. reviewers). It will not be a good thing if the field starts gravitating toward media-friendly Cheeto-style work.
People often complain, justifiably, that “big data” is a catchy phrase, not a real concept. And yes, it certainly is hot, but that doesn’t mean that you can’t come up with a useful definition that can guide research. Here is my definition – big data is data that has the following properties:
- Size: The data is “large” when compared to the data normally used in social science. Typical surveys only have data from a few thousand people. The World Values Survey, probably the largest conventional data set used by social scientists, has about two hundred thousand people in it. “Big data” starts in the millions of observations.
- Source: The data is generated through the use of the Internet – email, social media, web sites, etc.
- Natural: It is generated through routine daily activity (e.g., email or Facebook likes). It is not, primarily, created in the artificial environment of a survey or an experiment.
In other words, the data is bigger than normal social science data; it is “native” to the Internet; and it is not mainly concocted by the researcher. This is a definition meant for social scientists – it is useful because it marks a fairly intuitive boundary between big data and older data types like surveys. It also identifies the need for a skill set that combines social science research tools and computer science techniques.