Archive for the ‘academia’ Category
One of the more interesting questions in evaluating an individual’s academic performance is the “one hit wonder” issue. An academic, like any other producer of ideas, might have a single great achievement and produce little else in their career. And I don’t mean a big hit followed by a more modest stream of works. I mean a big hit in grad school or shortly thereafter, with very little else after that.
Outside of academia, one hit wonders present no problems to people who hand out rewards. Music fans just buy what they like, coaches cut athletes from the roster. In academia, this is trickier. First, it is often hard to tell if someone is a one hit wonder or not. Second, some types of research are just slow. We don’t want to punish an ethnographer just because their CV doesn’t look like a demographer’s. Third, the tenure committee or dean may run into trouble if they suspect that the person won’t do much in the future. How can you fire the person who wrote a classic?
Promoting, or rewarding, one hit wonders incurs risk because professors do more than research. They teach undergraduates, mentor PhD students, do administrative work, and help out the profession by participating in conferences, peer review, editing journals, and other professional functions. Thus, we want sustained engagement from everyone who achieves a degree of stature, such as a tenured position at a reputable university. Also, a long period of inactivity may, rightly or wrongly, suggest that the person is not managing their talent well, or that the hit was a fluke.
In the end, I go on a case by case basis. If the hit was truly epochal, I’m happy to give them a job for life. A little deadwood is fine if we can get the cure for cancer in exchange. But that’s exceptionally rare. From an institution’s perspective, though, you reward people with an eye for the future. It ain’t like paying the guy who just fixed your clogged sink. You have to live with this person for decades.
True story: In 2012, I reviewed a paper for a journal. I thought it was a good paper. With some modest revision, it could probably be accepted at a top journal. In summer 2013, I was asked to review the revision. At this point, I had learned that the journal had a notorious reputation for sending papers through three or four rounds of review and rejecting them after years of lengthy revisions.
So, I wrote to the managing editor and said that I was a bit worried about the multiple R&R policy. I didn’t want to be part of an extremely long R&R process unless there was a high probability that it would lead to publication. What is the point of me offering guidance when it is all thrown away as the authors try to make a third or fourth round of reviewers happy? It is unfair to everyone.
The managing editor offered a diplomatic answer. In general, they can’t discuss the state of a manuscript that is under review. Aside from that, the managing editor noted that the paper was only on its first round, as indicated by the “R1.” Fair enough.
I agreed to review the paper because I didn’t want the author to be stuck with a completely new reviewer with new demands. So I told the journal that I would help out. In an attempt to humorously convey my concerns, I wrote back: “Ok, but if we go into triple R&R territory, your bosses will receive aggressive email from me.” The response, in its entirety:
Thank you again for your thoughts concerning this manuscript. Unfortunately, we are unable to accept your offer of review in terms that would constitute prior restraints on the possible outcome of the review process.
Interesting. Expressing disagreement with a policy is viewed as a “prior restraint.” Go figure. The upside is that I now have more time for reviews at other journals. The downside is that the author(s) will probably get a new reviewer who is almost certainly slower than me and will definitely ask for a whole new set of revisions. Since I can’t break confidentiality, I can only offer a vaguely directed apology for the problems that the author will now have to deal with. And the possibility of three more R&Rs and a rejection at the end.
This happened in August, and I haven’t received any more requests for reviews, though I used to get them all the time. So if you ever wondered what it takes to get banned from a journal’s reviewer roster, all it takes is some criticism of the editors’ quadruple R&R rejection policy.
Orgtheorist and loyal orgtheory commenter Howard E. Aldrich is featured in a video about his intellectual trajectory and the history of organizational studies. Learn about Howard’s start in urban sociology and organizational studies, why he finds cross-sectional studies “abhorrent,” his years at Cornell where he overlapped with Bill Starbuck, and how he got started publishing in organizational ecology. He also explains how the variation, selection, and retention (VSR) approach was a “revelation” for him, and how various institutions (University of Michigan, Stanford, and others) have promoted his intellectual development via contact with various colleagues, collaborators, and graduate students. Towards the end of the interview, Aldrich describes his latest research on the Maker movement, including hacking and the rise of affordable 3-D printing and other hardware and software that may propel technological innovation.*
The videoed interview is courtesy of Victor Nee’s Center for Economy & Society at Cornell University. More videos, including a presentation on his work on entrepreneurship, are viewable here. Also, those looking for an organizational studies text should see his seminal Organizations Evolving with Martin Ruef here.
* The Maker movement has strong affinities with Burning Man. In fact, that’s partly how I started attending Maker Faire – check out my photos of past Maker Faires, which included performance artists from the now-defunct Deitch Art Parade.
One of the most frustrating aspects of social science reviewing is the slow review time. Gabriel Rossman says that we are the problem. Rather than focus on what can be easily fixed or provide up or down decisions, reviewers take too long, offer contradictory recommendations, and encourage bloated papers. If I were to summarize Gabriel’s post, I’d say that:
- Keep your review short. Don’t write that six-page single-spaced commentary. One page or so is probably enough in most cases.
- Don’t whine about what the authors should have written about. Evaluate what they actually wrote about.
- Be decisive. Yes or no.
- Don’t ask for endless citations, commentaries, extra analyses, etc.
- All suggestions should be constructive, not busy work.
- Let it go: after a while, more rounds become counterproductive. If you hate it, just say so. If you like it, just say so. No more revisions. It’s done.
I also like Gabriel’s suggestion that reviewers should show some spine. In the summer, I was asked to review a 3rd R&R. My entire response was “Dude, seriously? Three R&R’s? Just accept it.” Result: paper accepted.
The editor of Social Problems, Becky Pettit, recently posted a review of submission practices and trends, with a focus on gender. Comments,* in no particular order:
- 8% accept? Holy cannoli! I knew it was competitive, but that’s in the realm of ASR/AJS. ASR’s accept rate was 6%; AJS accepts 10%.
- Thankfully, SP does a lot of desk rejection.** About 30%.
- Even with desk rejection, it does seem to take a while – a mean time of 135 days, or about 4.5 months. So many papers take 5, 6, or 7 months. After dealing with the lightning-fast world of biomedical journals, this is snail-like.
- Senior profs review less often than juniors. Female assistant professors review the most.
- Men are *way* more likely to appeal. As Phil Cohen notes, it would be good to know if it’s just that women have more accepts or if men just whine more. That is, we want the appeal/reject ratio.
Bottom line: Social Problems is a de facto top general journal in soc, it behaves like a typical social science journal in terms of turnaround and some other factors, and there is definitely gender inequality in reviewer and author behavior.
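Phil Cohen’s point about ratios can be sketched in a few lines of code. All counts below are entirely hypothetical; the point is only that raw appeal counts mean little until you condition on rejections.

```python
# Hypothetical illustration: raw appeal counts vs. the appeal/reject ratio.
# None of these numbers come from the journal's actual report.

def appeal_rate(appeals: int, rejections: int) -> float:
    """Share of rejected authors who file an appeal."""
    return appeals / rejections

# Suppose men file twice as many appeals as women...
men = {"appeals": 40, "rejections": 400}
women = {"appeals": 20, "rejections": 200}

# ...but if men also receive twice as many rejections, the conditional
# appeal rates are identical, and "men appeal more" disappears.
print(appeal_rate(**men))    # 0.1
print(appeal_rate(**women))  # 0.1
```

Only if the conditional rates differ can we say that men are more likely to contest a rejection, rather than simply more numerous among the rejected.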
* Disclosure: I have a soon to be rejected paper under review at Social Problems.
** Yes, I know – “deflection!”
A lot of sociologists buy into the theory of “sponsored mobility,” which means that elites pick who gets the mobility. So I think there should be a lot of sympathy for recent research showing that mentorship (communicating with more advanced people) does not have an effect on career advancement but sponsors (people who pick you, push you, and get benefit from it) do have an effect. Robin Hanson reviews a book by economist Sylvia Ann Hewett that makes this claim:
In a new book, economist Sylvia Ann Hewlett uses data to show that mentorship, in its classic wise-elder-advises-younger-employee form, doesn’t produce statistically significant career gains. What does, however, her research found, is something she has termed “sponsorship”—a type of strategic workplace partnering between those with potential and those with power. …
And there is an important implication for the study of gender and inequality:
Women are only half as likely as men to have a sponsor—a senior champion at work who will basically take a bet on them, tap them on the shoulder, and really give them a shot at leadership. Women have always had mentors, friendly figures who give lots of advice. They’re great. They’re good for your self-esteem; they’re good for your personal development. But no one’s ever been able to show that they do anything to help you actually move up. …
We find that women in particular often choose the wrong people. … They seek out a senior person they’re very comfortable with. … For a sponsor, you should go after the person with power, because you need someone who has a voice at those decision-making tables. You need to respect that person, you need to believe that person is a fabulous leader and going places, but you don’t need to like them. You don’t need to want to emulate them.
If true, this forces me to modify my views. I have always believed that sponsored mobility is important in academia, but I also believed that mentorship matters. If Hewlett is right, my belief is misplaced. It’s really about sponsored mobility. So, if you care about women or minorities advancing in some career track (like academia), then forget the nice lunches. Administrators should double down on matching people with power players. A bit rude, but it might be one concrete way to chip away at inequality in the leadership of the academy.
Becker and Faulkner’s Thinking Together: An E-mail Exchange and All That Jazz now available in print
Today, I met with first year grad students who wanted to know how sociologists develop research questions and studies while navigating grad school, academia, and other contexts. Although sociologists do give retrospective accounts in their publications and presentations, it’s not easy to fully convey the “back stage” behind the research. Rarely do readers get to see how a study unfolds. Luckily, Howie Becker and Bob Faulkner’s latest book is now available both as an ebook and print book (update: corrected link), for those of us who like to read old school-style. According to Franck Leibovici,
the paperback version produces a different experience [from the ebook]. for example, it has an index which allows you to visualise how many people, scholars, musicians, anonymous people, have been mobilized to produce this investigation.
For those who like the ebook format, see our earlier post, which includes a summary by Becker himself.
Here’s the official summary of Thinking Together: An E-mail Exchange and All That Jazz:
When Rob Faulkner and Howie Becker, two sociologists who were also experienced professionals in the music business, decided to write something about this other part of their lives, they lived at opposite ends of the North American continent: Faulkner in Massachusetts, Becker in San Francisco. They managed the cooperation writing a book requires through e-mail. Instead of sitting around talking, they wrote e-mails to each other.
And so every step of their thinking, the false steps as well as the ideas that worked, existed in written form. So, when Franck Leibovici asked them to contribute something which showed the “form of life” that supported their work, they (helped along by a timely tip from Dianne Hagaman) sent him the correspondence.
The result is one of the most complete and revealing records of scientific collaboration ever made public. And one of the most intimate pictures of the creative process in all its details that anyone interested in that topic could ask for. Investigative writing is not only about formulating chains of rational ideas (as the usual format of scientific articles would like us to believe), but also mixes plays on words, stories, and arguments in new arrangements.
this book is a contribution to the art project (forms of life)—an ecology of artistic practices, paris, 2011-2012, by franck leibovici.
curated by grégory castéra and edited by les laboratoires d’aubervilliers and questions théoriques, with the support of fnagp, la maison rouge, le fonds de dotation agnès b. see www.desformesdevie.org.
One of the songs that helped the two authors work on their case studies of how musicians build their repertoires:
A few weeks ago, we all laughed when MIT was praised for its well known (but nonexistent) sociology department. But a serious question went unasked: why doesn’t MIT have a degree granting sociology unit? At first, you think the answer is obvious. MIT is an engineering and science school. We shouldn’t expect it to offer any sociology aside from a few courses for general education of engineering students.
But hold on! MIT offers lots of non-STEM degrees. For example, it has a highly regarded business school and an architecture school. Ok, you say, maybe it only offers nuts-and-bolts professional programs that are closely allied with STEM fields. Yet that argument doesn’t hold water. MIT also allows students to major and/or concentrate in music. It’s also got well-known PhD programs in humanities fields like philosophy, social sciences like political science and economics, and a sort of catch-all program that combines history, anthropology, and science studies. Heck, you can even get the ultimate fluffy major – creative writing.
It’s even more baffling when you realize that it is amazingly easy to create a BS or PhD degree focusing on the quantitative side of sociology (e.g., applied regression, networks, demography, stochastic process models, soc psych/experimental, survey analysis, simulation/agent based models, rational choice/game theory, etc.)
My hypothesis is that the typical MIT faculty member or alum relies on the reputation of sociology, not what the field is actually about. Like a lot of folks, they write the field off as a hopeless quagmire of post-modernism, even though, ironically, most sociologists are not post-modernists. The reality is that the field is a fairly traditional positivist scholarly area with normal, cumulative research. Even qualitative research is often presented in ways that most normal science types would recognize. It’s really too bad. Sociology could use a healthy dose of ideas from the hard sciences, and MIT could be the place where that could happen.
As you well know, I think the PhD program is a terrible choice for most students. Quite simply, the PhD program is risky (only 50% completion rate), costly (5+ years), and many disciplines have poor job prospects (e.g., most of the humanities, many biological sciences and many social sciences). Furthermore, a lot of students think it is a credential that is needed for non-academic jobs, which is not generally true.
But still, maybe you weren’t fazed by the “don’t go to grad school” speech. Maybe you really have a passion for teaching, or interpreting Foucault. Or maybe you simply don’t care about the negatives associated with academic careers. I welcome you to academia. I pity you as well.
So, then, what sort of PhD should you get? Here’s an argument for the sociology Ph.D.:
- Low barrier to entry – you just need a solid academic record, not extended training in math, foreign language, or other rare skills.
- You learn solid research skills like survey design, regression models, and interview technique that have non-academic labor market value.
- You can study a wide range of topics and do so almost immediately. No need to engage in endless post-docs.
- Policy relevance.
- Decent academic job prospects compared to most other fields. The sociology market is tight, but soc PhDs frequently get jobs in lots of other programs like education, business, policy, social work, and occasionally in adjacent areas like American studies, ethnic studies, political science, and anthropology.
- Broadly defined topic – if you have a real passion for a topic that is genuinely social in some way, you can probably find a way to write a dissertation on it.
The one big downside is that sociology programs adhere to the humanities model of long time to PhD. There is no need for this. If you focus on a dissertation topic early on, choose your dissertation chair wisely, and insist on getting published at least once, there is no need for your degree to take longer than 4 or 5 years.
A follow up from Monday’s discussion of productivity: Publishing too much is definitely a first world problem. In fact, it is so remarkably rare that in 10 years as an IU faculty member I have seen only one job applicant penalized for publishing too much. Normally, people are penalized for (a) not publishing, (b) publishing the “wrong stuff” (edited volumes vs. journal articles) or (c) not publishing in elite journals.
But once in a while, some people do publish too much. Why?
- If you are in an elite program, you *only* get credit for either top general journals or top field journals. So volume distracts you from getting the “right” hit.
- “Scatter”: Some programs want faculty to have a “coherent” publication output.
- Dilution: Some programs want a small number of high impact pieces.
- Credit: Sometimes a large volume requires many co-authors, which makes it look like you didn’t contribute much.
So think about it: How many of you are tenure track in top 5 programs? Or work in fields where you are expected to have one or two big impact pieces? Didn’t think so. In most cases, volume is not an issue, as long as it is peer reviewed and is of overall good quality.
A couple of weeks ago, Brayden commented on an essay by David Courpasson, which lamented the “culture of productivity.” The idea was that we often put too much emphasis on the production of articles, rather than the cultivation of ideas. At one level, I completely agree. The goal is to produce quality ideas. We aren’t paid by the word.
At another level, I am not terribly moved by Professor Courpasson’s essay. The complaint falls under the category of “first world problems.” The main problem, for most graduate students and faculty, isn’t that they are sucked up by an evil “culture of productivity.” The modal problem is that they aren’t producing anything at all. The underproduction of articles is highly correlated with not getting a job and not getting promoted. It is also a problem from a policy perspective. When we invest in students and faculty, we want them to be able to produce competent science, which is usually expressed in occasional publication.
But Professor Courpasson does have some important points that merit a response. One is that publication is adversarial, instead of cooperative. It’s about beating reviewers at some game. Here, I can only agree and add that the adversarial nature of reviews stems from limited resources. If ASQ, for example, will only publish the top 10% of papers, then the reviewers just need some excuse to “knock down” some good papers. If you want the recognition and rewards of the profession, then you need to master the game. Though I have never chosen a research topic to win some “game,” I openly admit that papers are written in sub-optimal (and often lamentable) ways just to avoid what I think are reviewer cheap shots. If Professor Courpasson wishes to avoid this game, I recommend that he closely follow two new(er) journals, PLOS ONE and Sociological Science. The former journal will publish all articles that follow scientific standards. The latter gives a simple “yes/no” decision, so there are no games with endless rounds of reviewers. Both formats reduce the “game” aspect of publishing.
Courpasson also complains about the lack of scholarship on power and related topics. And, I’m like, “DUDE!!! READ MY ARTICLE ABOUT POWER!!!! C’MON, BRO, PUMP UP MY CITES!!!!!” I’d also add that the reason these topics are in retreat is that it is easy for reviewers to knock them down. For example, there is a very standard format for articles on, say, diffusion of innovation. But there is no standard for articles on power. Thus, it is harder to knock down a paper in the first genre. The “big” ideas that Professor Courpasson likes often generate controversy, and thus make it really easy for a reviewer to write hand-wringing reviews about all the problems with the paper. So little ideas become easy to publish. Big ideas are left for the elder leaders of the profession.
Finally, I’ll address a related issue – overpublication and the volume of research. Personally, I don’t think this is a real problem. While there are a few great scholars who published very little (Coase or Hirschman), most successful scholars tend to write a lot. Keith Sawyer’s book on creativity reports, for example, that in studies of novelists, famous authors wrote way more novels than authors from a random sample. Most of these novels weren’t great, but Sawyer makes the sensible observation that maybe you just need a lot of practice to write a great novel. Maybe better writers just generate more ideas. Regardless, this suggests that we should be tolerant of volume. I do have sympathy for Professor Courpasson, though, since I’ve worked on journals. Big volume means a lot of work.
The Chronicle reports on a new ranking of “Faculty Media Impact” conducted by the Center for a Public Anthropology. The ranking “seeks to quantify how often professors engage with the public through the news media” and was done by trawling Google News to see which faculty were mentioned in the media most often. The numbers were averaged “and then ranked relative to the federal funds their programs had received” to get the rankings. As you can see from the screenshot above, the ranking found that the top unit at MIT was the Sociology Department. This is fantastic news in terms of impact, because MIT doesn’t actually have a Sociology Department. While we’ve known for a while that quantitative rankings can have interesting reactive effects on the entities they rank, we are clearly in new territory here.
Of course, there are many excellent and high-profile sociologists working at MIT in various units, from the Economic Sociology group at Sloan to sociologists of technology and law housed elsewhere in the university. So you can see how this might have happened. We might draw a small but significant lesson about what’s involved in cleaning, coding, and aggregating data. But I see no reason to stop there. The clear implication, it seems to me, is that this might well become the purest case of the reactivity of rankings yet observed. If MIT’s Sociology Department has the highest public profile of any unit within the university, then it stands to reason that it must exist. While it may seem locally less tangible than the departments of Brain & Cognitive Sciences, Economics, and Anthropology on the actual campus, this is obviously some sort of temporary anomaly given that it comfortably outranks these units in a widely-used report on the public impact of academic departments. The only conclusion, then, is that the Sociology Department does in fact exist and the MIT administration needs to backfill any apparent ontic absence immediately and bring conditions in the merely physically present university into line with the platonic and universal realm of being that numbers and rankings capture. I look forward to giving a talk at MIT’s Sociology Department at the first opportunity.
Disclaimer: I’ve been a long time advocate for journals like PLoS One and I have an article that’s working its way through that journal, which I will shamelessly self-promote at a later time.
Last week, John Bohannon announced a hoax. He intentionally wrote an obviously flawed article on cancer research and submitted it to a bunch of open access journals. About two thirds of the journals accepted the paper. I’m glad these folks exposed such chicanery. Once you’ve been in academia for a few years, you quickly learn that there are a lot of publishers who have no scruples. The sting even caught journals managed by “legitimate” vendors such as Elsevier. Bring the sunlight.
Interestingly, one of the journals that did not fall for the hoax was the much maligned PLOS ONE (e.g., Andrew Gelman recently called it a “crap journal“). From Bohannon’s article:
The rejections tell a story of their own. Some open-access journals that have been criticized for poor quality control provided the most rigorous peer review of all. For example, the flagship journal of the Public Library of Science, PLOS ONE, was the only journal that called attention to the paper’s potential ethical problems, such as its lack of documentation about the treatment of animals used to generate cells for the experiment. The journal meticulously checked with the fictional authors that this and other prerequisites of a proper scientific study were met before sending it out for review. PLOS ONE rejected the paper 2 weeks later on the basis of its scientific quality.
Good for them. This speaks well of the PLOS ONE model. Normally, journals employ two criteria – technical competence (“is this study correctly carried out?”) and impact (“how important do we think this study is?”). PLOS ONE sticks with the first criterion while dropping the second. It’s an experiment that asks: “What happens when a journal publishes technically correct articles, but lets the scientific community – not the editors – decide what is important?”
Now we have part of the answer. A forum that drops editorial taste can still retain scientific integrity. By meticulously sticking to scientific procedure, bad science is likely to be weeded out. And you’d be surprised how much gets weeded out. Even though PLOS ONE is not competitive in any normal sense of the word, it still rejects over 30% of all submissions. In other words, almost one in three articles does not meet even the most basic standards of scientific competence.
Well managed open access journals like PLOS ONE will never replace traditional journals because we really do want juries to pick out winners. But having a platform where scientists can “let the people decide” is a good thing.
Last week, we had a fruitful discussion of graduate school and publishing. I think we all agreed that most graduate students should learn how to publish quickly. But we also raised some red flags. For example, we shouldn’t encourage people to publish “bad” articles. Others thought that we shouldn’t publish “too much.”
So let’s begin with a consensus: yes, if you are a graduate student, you should definitely learn the publishing process. Now let’s move on to lower-consensus issues. First, what counts as “bad” research? A few definitions:
- Research that is fraudulent.
- Research that is in a technical sense correct, but misleading.
- Research that is sloppy or poorly written.
- Research that is made in good faith, but in error.
- Research that is chopped up into lots of small chunks, in terms of article length/word or page counts.
- Research that makes extremely small or incremental arguments.
- Research that is in the “wrong” journal – low prestige, niche, online, or in a lower status discipline.
Now, when is it bad to publish work in any of these categories? There is overwhelming consensus that #1 is bad and should never be tolerated. In fact, academia has such a strong norm on #1 that fraudulent articles are almost always retracted and people might lose their job. I think we’d agree that #2 is also bad, though there is disagreement about what should be done with misleading articles, as we found out when discussing He-Who-Shall-Not-Be-Named-in-Texas.
Once we get past fraudulent and misleading research, it’s very unclear that any of the remaining categories can be claimed to be uniformly bad. For example, the garbage can paper (Cohen, March, and Olsen 1972) was successfully shown to be a very sloppy work (see the Bendor, Moe, and Shotts 2001 APSR article). No way around it. But, as olderwoman points out, powerful ideas are often presented in sloppy packages.
Then we get to #4: good faith papers with mistakes. In some cases, #4 is obviously bad. We find out that the answer is different when we correct our code – retraction. But in other cases, it’s ok. For example, among mathematicians, incorrect proofs are sometimes left in the record. The overall idea remains promising, but maybe some future scholar can read the mistake and fix it.
#5, #6, and #7 are clearly not universally bad. If you look around academia, you see that some fields hate, hate, hate small articles (history) while other fields exist primarily in tiny, tiny articles spread out in big and small journals. Even within one field, like sociology, you see huge variance. Demographers routinely “chop and spread it,” while ethnographers save it all for one big AJS/ASR article.
I’ll finish with how I think about my own publication strategy. My first allegiance is to knowledge. So I have never suppressed any article that I thought had a specific contribution to make, big or small. Also, in my own experience, I have benefited greatly from articles published in some obscure places. “Small” doesn’t mean dumb or useless. Just small – which might be very important to someone out there (including me).
What I have ended up with is a sort of triage: articles that are “big” in some sense are channeled to major journals, while “small” contributions are sent to niche journals. That results in an output stream where the modal contribution is “small” but the stream is punctuated by a few “bigs.” Finally, one thing that I don’t do is rewrite the same article over and over. I make no claim that this is optimal, only that this is what you get if you believe that “small” contributions and niche journals have a place in the academic world.
One of the most important things you can teach a graduate student is how to publish. While students can teach themselves the material, or learn through osmosis, most people need concrete instruction on the professional side of academia. And they need to publish early and (in many cases) often.
And it matters – a lot. A new article published in BioScience looks at the careers of academics and shows that early publishers do the best later in their careers. The article is called “Predicting Publication Success for Biologists,” and it is authored by William F. Laurance, Diane Carolina Useche, Susan Gai Laurance, and Corey J. A. Bradshaw.
Summarizing their work in the website “The Conversation“:
We attempted to predict the publishing winners and losers, focusing on biologists and environmental scientists on four continents, using five easily measured variables. Our findings seem surprisingly unequivocal but are already provoking strong reactions of agreement and disdain.
Here’s what we concluded.
It doesn’t matter whether you got your PhD at glittering Harvard University or a humble regional institution like the University of Ballarat. The supposed prestige of the academic institution has almost no bearing on your long-term success, once other key variables are accounted for.
Secondly, if you’re a woman, or if English isn’t your first language, you’re going to face some minor disadvantages in publishing. The differences are not huge, on average, and there’s enormous variability among different individuals, but men who are native English speakers do tend to have half a leg up in the publishing game.
Finally, by far the best predictor of long-term publication success is your early publication record – in other words, the number of papers you’ve published by the time you receive your PhD. It really is first in, best dressed: those students who start publishing sooner usually have more papers by the time they finish their PhD than do those who start publishing later.
The take-home message: publish early, publish often.
This reinforces what we already know. In sociology, the lesson holds as well, though qualitative researchers need to worry less about volume.
The implication for graduate training is obvious. If you aren’t actively cultivating scholars who are trying to publish, you’re screwing over your PhD students.
Once you get a faculty job, you are confronted with many requests to be on committees. These requests should be handled very carefully. Turn too many down, and the work of the university will be left undone. And you’ll miss out on the nuts and bolts of academic policy. Accept too many, and you can wreck your productivity and possibly undermine your career.
So how do you deal with these requests? A lot of it depends on your career stage:
- Doctoral students: You should only do a single, important committee. For example, some departments allow students to sit on job search committees. At Chicago, I worked for the AJS and that was my contribution to department governance. It’s perfectly acceptable to completely avoid administrative work during this phase of your career.
- Junior faculty: You should avoid most committees, except those tied to your department. Then, if your chair gives you a choice, select, or ask for, the easiest committees possible, like the library committee. “Heavy” committees, such as admissions or job searches, should be left until you are well on your way to promotion.
- Senior faculty: Depends on what you want out of life. If you see yourself moving into administration, you’ll probably want to dabble in planning committees, tenure and promotions, and other high impact committees. If you see yourself focusing on teaching and research, you’ll probably want to limit yourself to committees that have an immediate impact, such as tenure and promotions.
A problem with many faculty is that they can’t say no, or they are too scared to say no. You have to shake this attitude for the following reasons. First, unless you are a complete shirker, no one will care if you turn down the occasional invitation. Second, your quality of life will be severely impacted by too many committees. Your schedule should only look like “committee Swiss cheese” if you are paid for the inconvenience (e.g., you are a chair of some sort). Third, is this what really drew you to academia? Seriously?
When judging committee invitations, I usually employ the following criteria:
- It is really important. For example, tenure and promotions is a core function. The study abroad committee? Probably not so important.
- I have a compelling personal interest. For example, I have a strong belief that more women and under-represented minorities need better support in the academic career. Thus, I will serve on committees that address this issue. Building committee? Important, but I’ll let someone with more expertise take that one.
- The committee is not bull—-. Honestly, a lot of committees exist to make people look good, or to do the hard work that should be done by administrators. Avoid these committees. For example, when it comes to women and under-represented minority issues, I will only do it if the committee actually has some power to do good, or punish evildoers. So, if we are handing out financial support, I help out. If you want another hand-wringing report, I’ll pass.
- Somebody will owe you a big time favor. ’nuff said here. Sometimes we do bull—- just to buddy up to others. That’s life in an organization.
- I’m paid/part of the job. Right now, I am director of undergrad studies, so I say “yes” to all undergrad issues committees. It’s my task, even though it makes me nauseous. And yes, curricular reform committee induces illness in me.
To sum up this post, you should only do committees if there is “value added.” Do it if it matters and realize that a lot of committees don’t matter. Don’t just say yes to everything. That’s crazy.
The Chronicle of Higher Education has an article on how sociologist Dean Savage and colleagues have kept track of what happens to those who graduate with a PhD in sociology from the Graduate Center. Here’s how that database kicked off:
During a particularly tough academic job market in the early 1990s, Dean B. Savage started the work of tracking down every student who had earned a Ph.D. in sociology from the Graduate Center to find out where they went on to work. With the help of graduate students, he has created an ever-growing database of 471 people that dates back to graduates from 1971.
The data, which Mr. Savage updates periodically, provide a snapshot of where former students are employed and what positions they hold. They also provide a window into other placement-related trends, such as how far outside New York City people were willing to cast their nets while job hunting, how often Ph.D.’s opted to pursue nonacademic jobs, and how long it took for sociology students to earn Ph.D.’s.
The database shows that about 50% of those who earned PhDs between 1980 and 1984 and could be located were employed in academic and nonacademic positions:
The data he has collected document the bleak reality that many people already know about the academic market: A full-time job as a professor isn’t a given for those who want one. In fact, since 1980, fewer than half of the sociology graduates hold full-time tenured or tenure-track jobs. But the data, which were most recently updated last year, also reveal some good news: The program’s record of placing students in full-time jobs inside and outside academe has shown improvement over the years.
Just over half of the 59 graduates who earned Ph.D.’s between 1980 and 1984, for example, were full-time professors or in full-time administrative, research, or nonacademic positions when Mr. Savage last tracked them down (11 of those were retired). Two held part-time academic positions, four were independent scholars or self-employed, and 21 couldn’t be located.
As for more recent graduates, their employment percentage is slightly lower, reflecting the economic downturn and changes in university hiring practices:
The placement rate for graduates between 2010 and 2012 dipped to 53 percent.
Interestingly, graduates don’t stray far from the Big Apple tree, suggesting that the two-body issue or other constraints and preferences limit job-seekers’ options to a particular geographic area:
According to Mr. Savage’s data, nearly 60 percent of all students who graduated between 1971 and 2012 work or live in New York State. They’re diehard fans of the Big Apple who often have family ties there, so they skip doing a national job search.
Check out the article for more comments and snippets, including commentary by the current graduate director John Torpey and graduates.
David Courpasson is finishing his term as the editor of Organization Studies, the official publication of the European Group for Organizational Studies (EGOS). As a parting gift, he wrote an essay about what he feels is right and wrong (okay, mostly wrong) about the current state of organizational scholarship. The essay is provocative and a bit pessimistic, although not unfairly so. One of the major problems plaguing our field, Courpasson believes, is the development of a culture of productivity in social science, which seems to have most severely infected organizational and management research. In this culture of productivity, scholarship is not evaluated based on relevance or the quality of ideas but rather on the sheer volume of research that a scholar can produce. Professors are compelled to write lots of journal articles, and they push them out quickly in order to boost the length, but not necessarily the quality, of their CVs. Although he doesn’t mention it, this culture of productivity seems to have numerous institutional sources, including the practice of many departments that determine merit raises and tenure cases by “number counting” (i.e., deciding that someone deserves tenure based on the number of “A journal publications” the person has produced).
The consequence of this culture of productivity is an increase in the sheer volume of publications at the sacrifice of social relevance. The culture also has negative effects on the review and editing processes. Reviewers are worn out, editors are overwhelmed with new submissions, and there are simply too many journal articles to read and process. Here is an excerpt from Courpasson’s article:
[O]ur current system of scientific manufacturing creates more papers to review, with less committed and less timely reviewers, with a lower density of challenging ideas, as well as of ideas that are less significant for ‘the world’; in other words, for other worlds than the closest colleagues and networks. The culture of ideas is therefore vanishing: due to publishing pressures, people feel more and more pushed to submit any paper because rejection is not necessarily harmful: a new dynamic is created where work is routinely submitted anyway, sometimes in a real hurry (that is to say, even when clearly unfinished, including incomplete lists of references or variety of colours in the text), overburdening journals and editors. Here individual arbitrations surely play a role: authors’ visibility can indeed be maximized by small improvements enabled by journals’ insightful reviews; at the same time, thanks to this principle of productivity, potential papers to submit by a single author are multiplied, often in a logic of replication and repetition that also leads to ‘deviant’ behaviours such as self-plagiarism. But that adds some items in a resume and that is important because items are counted. Again, this is a counterproductive game: because volume does not always match quality and innovation, editors are more and more inclined to focus on flaws to purposively (although not willingly) narrow down the number of papers under review and obviously, in this ‘negativist’ cycle, innovative papers can be sacrificed by the necessity of correlating the ‘quality’ of a journal and a high (desk) rejection rate.
The new journal, Sociological Science, is now up and running. The goal:
- Open access: Accepted works are freely available, and authors retain copyright
- Timely: Sociological Science will make editorial decisions within 30 days; accepted works appear online immediately upon receipt of final version
- Evaluative, not developmental: Rather than focus on identifying potential areas for improvement in a submission, editors focus on judging whether the submission as written makes a rigorous and thoughtful contribution to sociological knowledge
- Concise: Sociological Science encourages a high ratio of novel ideas and insights to written words
- A community: The journal’s online presence is intended as a forum for commentary and debate aimed at advancing sociological knowledge and bringing into the open conversations that usually occur behind the scenes between authors and reviewers
I congratulate them for doing this. This takes some courage to do. We need many different types of journals. And, sadly, we are lacking high impact journals that focus on shorter empirical work that is refereed in a timely fashion.
Good luck – and I look forward to being rejected by you!
Inside Higher Education ran an article on new numbers released by the Council of Graduate Schools. The big news? Humanities enrollments are up 7%. Scott Jaschik asked me about this and, frankly, I was puzzled. I was quoted in the article as saying the trend is puzzling because it is open knowledge that humanities PhDs are very risky.
A few possibilities:
- Skepticism: This is statistical noise, or an artifact of how the Council computed this number.
- Shrinking opportunities for educated low productivity workers: In the old economy, there were lots of options for people with humanities degrees. In the new economy, the college premium disproportionately goes to people in finance, economics, or STEM fields.
- Debt avoidance: Stay in school forever and hope that inflation eats away at the debt you acquired.
- Cultural change: Maybe people just value scholarly careers more than they did before and are more accepting of risk. In an era where Wall Street and the law have taken big hits in the eyes of the public, maybe more people are turning to the academy.
For now, I’d wait one or two more years to rule out #1. Beyond that, the list is ordered by my beliefs: changes in the labor market first, cultural change last.
As some of our readers may know, the American Sociological Association (ASA) assigns section presentation slots for the annual meeting based on section membership numbers. As a result, sections may scramble at the year’s end to recruit section members to meet these targeted numbers. In short, more members = more presentation slots.
ASA section Organizations, Occupations, and Work (OOW) is looking for more members to round out 2013’s roster. Here’s the call:
“Dear OOW Members,
We are just 10 members short of 1000! If we can reach that threshold before September 30, we will be given an additional session at the 2014 ASA meetings. Please forward this to colleagues and friends who may be interested in OOW.
To add a section membership, just go to https://www.e-noah.net/asa/default.asp. Section membership is $12 ($10 for low income) for regular ASA members.
Please also note–free grad student memberships available: OOW members have generously donated funds to cover approximately 45 graduate student memberships.* Please note: OOW is offering to cover the grad student OOW membership fee for students who are current members of ASA and NOT current members of OOW. (This offer is not for the next year’s membership, only the remainder of 2013.)….Please pass this offer on to your friends who may be interested in OOW topics but are not members! Those students may sign up here and then we will pass that on to ASA to activate your OOW membership for 2013.
Thanks and regards,
* Along with other colleagues at the OOW meeting at ASA, I was one of the OOWers who stuffed a crumpled bill into a paper bag to help sponsor a grad student OOW membership. So, get on it, folks! :)
Jessica Collett, scatterista and social psychologist supreme, has a thoughtful post summarizing her recent research on “impostor syndrome” among academics. If you aren’t familiar with the idea, it means that people feel like they are fakes and subsequently curtail their ambitions or work. From her post at Scatterplot:
At this year’s ASA meetings in NYC, Jade Avelis and I presented research on the effect of impostorism (also known as the impostor syndrome or feelings of fraudulence) on academic career ambitions. We were specifically interested in impostorism as a potential cause of “downshifting”* (entering graduate school programs aspiring to a tenure track position at a research institution and changing during the course of study to a non-tenure track position or one with an emphasis on teaching), a trend almost twice as common among women as it is among men.
In the literature to date, researchers attribute higher rates of downshifting among women to their increased concerns about family friendliness compared to men. Drawing on qualitative and quantitative data from PhD students at a private, research institution in the Midwest, Jade and I test both this common explanation and an impostorism account. As reported today in Science Careers, over at the website for Science, we found trends consistent with previous research. Women were more likely to suffer from impostorism, more concerned about family friendliness, and more likely to downshift during graduate school than men were. However, we also found that women’s increased concerns about family friendliness did not explain their increased likelihood to downshift. Impostorism, on the other hand, played a significant role.
This is crucial research for anyone interested in gender disparities in the academy. Jessica has a concrete suggestion at the end – that impostorism might be combated by changing the atmosphere within PhD programs. Knowing that other people have anxiety is a nice way to help people overcome it. Fabio’s suggestion: an RCT where some programs implement an anti-impostorism program for 1st years, then we follow up every few years to see if it made a difference.
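A minimal sketch of how such an RCT’s headline comparison might look, assuming a simple two-proportion z-test on downshift rates. All numbers and names here are made up for illustration; a real analysis would have to account for randomization happening at the program level (clustering), which this naive test ignores.

```python
# Hypothetical sketch: compare downshift rates between PhD programs
# randomized to an anti-impostorism intervention and control programs.
from math import sqrt

def two_proportion_z(downshift_treat, n_treat, downshift_ctrl, n_ctrl):
    """Two-proportion z-test for a difference in downshift rates."""
    p1 = downshift_treat / n_treat
    p2 = downshift_ctrl / n_ctrl
    pooled = (downshift_treat + downshift_ctrl) / (n_treat + n_ctrl)
    se = sqrt(pooled * (1 - pooled) * (1 / n_treat + 1 / n_ctrl))
    return (p1 - p2) / se

# Made-up follow-up data: 30 of 200 students downshifted in treated
# programs vs. 50 of 200 in control programs.
z = two_proportion_z(30, 200, 50, 200)
print(round(z, 2))  # -2.5
```

Because students within a program share faculty, climate, and cohort effects, the standard error above would be too small in practice; a cluster-robust version (or a mixed model with program-level random effects) would be the honest way to run it.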
There’s a recent study by researchers at Northwestern showing that part-time instructors do better than tenured full-timers. A few clips from an Inside Higher Ed piece addressing the issue:
A major new study has found that new students at Northwestern University learn more when their instructors are adjuncts than when they are tenure-track professors.
The study — released this morning by the National Bureau of Economic Research (abstract available here) — found that the gains are greatest for the students with the weakest academic preparation. And the study found that the gains extended across a wide range of disciplines. The authors of the study suggest that by looking at measures of student learning, and not just course or program completion, their work may provide a significant advance in understanding the impact of non-tenure-track instructors.
In the past couple of weeks, two journalists who I enjoy reading wrote controversial diatribes about the travesties of contemporary higher education. Both Matt Taibbi and Thomas Frank, each in their own brilliantly polemical ways, compared higher education to the housing bubble that led to our last serious financial crisis. Both writers attacked the integrity and ethics of the administrators of the current regime of academia. Both bashed a system that would allow students to acquire more debt than they could possibly pay given the job prospects for which their education prepares them. These are real nuggets that academics ought to consider seriously. Ignore, if it offends you, the abrasive rhetoric, but at the heart of both of their arguments is a logic that ought to resonate with our sociological sensibilities.
Here is Taibbi:
[T]he underlying cause of all that later-life distress and heartache – the reason they carry such crushing, life-alteringly huge college debt – is that our university-tuition system really is exploitative and unfair, designed primarily to benefit two major actors.
First in line are the colleges and universities, and the contractors who build their extravagant athletic complexes, hotel-like dormitories and God knows what other campus embellishments. For these little regional economic empires, the federal student-loan system is essentially a massive and ongoing government subsidy, once funded mostly by emotionally vulnerable parents, but now increasingly paid for in the form of federally backed loans to a political constituency – low- and middle-income students – that has virtually no lobby in Washington.
Next up is the government itself. While it’s not commonly discussed on the Hill, the government actually stands to make an enormous profit on the president’s new federal student-loan system, an estimated $184 billion over 10 years, a boondoggle paid for by hyperinflated tuition costs and fueled by a government-sponsored predatory-lending program that makes even the most ruthless private credit-card company seem like a “Save the Panda” charity.
Not long ago, spring made hearts lighthearted and young, with the prospect of a summer spent on research, reading, and righting/writing.
Tanya Maria Golash-Boza, at the blog, Get a Life, Ph.D., has a good post about parents and academia. Be nice to them! A few clips:
- Tip #1: Introduce them to other parents
- Tip #2: Keep their schedules in mind when planning events or meetings
- Tip #3: Never Insinuate That Being a Parent Makes Professors Less Valuable or Productive
Good stuff. HT: Karen Nakamura.
lifting the crimson curtain: Manufacturing Morals: The Values of Silence in Business School Education
As a grad student, I always found crossing the bridge over the Charles River from Harvard University to the Harvard Business School (HBS) to be a bit like approaching Emerald (or more appropriately, Crimson) City. On the Allston side, the buildings seemed shinier (or, as shiny as New England vernacular architecture allows), and the grounds were undergoing constant replantings, thanks to a well-heeled donor. In addition, HBS has loomed large as an institution central to the dissemination of organizational theory and management practices, including Elton Mayo’s human relations.
HBS has certain peculiarities about teaching and learning, like the use of case studies which follow formulaic structures as the basis for directed class discussion.* Moreover, instructors follow a strict grading breakdown: mandatory “III”s assigned to the lowest-performing students in each class – a source of concern, as students with too many IIIs must justify their performance before a board and possibly go on leave.** To help instructors with grading, hired scribes document student discussion comments.***
Such conditions raise questions about the links, as well as disconnects, between classroom and managerial leadership, so I was delighted to see a new ethnography about business school teaching at the UChicago Press book display at ASAs.
In his new book, Manufacturing Morals: The Values of Silence in Business School Education (University of Chicago Press, 2013), Michel Anteby lifts the crimson curtain from HBS.
Here’s the official blurb:
“Corporate accountability is never far from the front page, and as one of the world’s most elite business schools, Harvard Business School trains many of the future leaders of Fortune 500 companies. But how does HBS formally and informally ensure faculty and students embrace proper business standards? Relying on his first-hand experience as a Harvard Business School faculty member, Michel Anteby takes readers inside HBS in order to draw vivid parallels between the socialization of faculty and of students.
In an era when many organizations are focused on principles of responsibility, Harvard Business School has long tried to promote better business standards. Anteby’s rich account reveals the surprising role of silence and ambiguity in HBS’s process of codifying morals and business values. As Anteby describes, at HBS specifics are often left unspoken; for example, teaching notes given to faculty provide much guidance on how to teach but are largely silent on what to teach. Manufacturing Morals demonstrates how faculty and students are exposed to a system that operates on open-ended directives that require significant decision-making on the part of those involved, with little overt guidance from the hierarchy. Anteby suggests that this model – which tolerates moral complexity – is perhaps one of the few that can adapt and endure over time.”
Check it out! And while you’re at it, have a look at Anteby’s previous book, Moral Gray Zones (2008, Princeton University Press).
One of my beliefs, borne out by research like Arum and Roksa’s, is that people don’t learn or retain much from college. There are many reasons why, but one is that colleges don’t believe in “overlearning,” which means studying a topic so much that it becomes automatic.
Consider the typical college class. It meets two or three times a week. Students either skip the readings, skim them, or quickly forget them. Unless attendance is part of the grade, students are often absent from class. The exams typically cover the material, but then you move on to new stuff. Many students are allowed to move on with marginal grades. The opposite of “overlearning.” Colleges offer “barelylearning.”
If colleges were serious about learning, the entire system of lectures and semesters would be dumped. Occasional passive lectures and marginal grades would be abolished. Instead, we’d probably have very short “modules” where students did nothing but math, or writing, all day, every day for a few weeks or a month. Complete immersion so people could get completely absorbed in the subject and learn it so well it becomes second nature. It’s the way that learning is done in institutions where mastery matters, like medical schools (e.g., rotations) or the military (e.g., the system of “special schools” – immersion).
A few weeks ago, I expressed dismay at the multiple-R&R, multi-year revision process that now takes place at our flagship journal. I picked on them in particular, but it’s really a demand for all journals (inc. AJS, SF, SP) to stabilize the review process and adopt some concrete rules. You should only R&R if you think there’s a reasonable chance of success. You really shouldn’t assign new reviewers in most cases. And, please, cut the multiple R&Rs unless the R&R is a de facto admission that the manuscript will almost certainly be published. This is the norm in economics – there may be many R&Rs, but an R&R means that the paper will be published.
So my question is this: is there any sign at all this was taken seriously? I recently was asked to review at ASR and I expressed my concerns. I got a polite email back, but little indication otherwise. I agreed to review the paper (1st R&R and I was an original reviewer) but warned that I will not participate in 2nd or 3rd R&Rs.
I spent relatively little time at ASA, so I don’t know what people thought about this issue, or if our editors are thinking about getting control over the process. Your thoughts? What is the buzz on the street?
Writing is like raising children.* You spend endless time on it, cultivating and fussing over the details.
Sometimes writing is a joy, and you can’t believe you have the privilege of doing this for a living. In this state, you can repeat tasks like rewriting sections over and over, all because you believe in it and think you have something to share with the world. At this point, writing feels a lot like this:
When a deadline hits and/or you feel you’ve done enough work to share with others, you may feel a bit anxious about releasing the kids into the wild, but you reassure yourself that they can hold up on their own. But, at some point, submitted manuscripts return home like boomerang adult children with several “needs more work” recommendations safety-pinned to their shirts.
In some cases, you bite your lip, as you supported or even encouraged this child’s majoring in pomo-such-and-such studies. However, sometimes agents of the cruel world (i.e., reviewers and editors) disagree about whether it needs another one of these and what can be done to improve chances for independent living. You are grateful for the feedback, but it’s not always clear how you can implement changes, especially when recommendations conflict. In the meantime, the child is lying aimlessly on your couch with earbuds in, leaving dirty dishes and empty candy wrappers everywhere, and muttering monosyllabic responses to your increasingly alarmed inquiries about future steps towards independence.
During these times, the rewriting process feels more like this:
More comparisons after the jump…
Want to see Big Data in action? More Tweets/More Votes will be presented on Monday, 8:30 am in the session on voting and elections.
Also, if anyone wants to chat, I can do Monday breakfast, 10:30 – 1pm-ish. I will also attend the book release party for guest blogger emeritus Hilary Levy Friedman on Monday. Her book, Playing to Win, will soon be released by the University of California Press. Email me if you want to meet up.