Archive for the ‘academia’ Category
Writing is like raising children.* You spend endless time on it, cultivating and fussing over the details.
Sometimes writing is a joy, and you can’t believe you have the privilege of doing this for a living. In this state, you can repeat tasks like rewriting sections over and over, all because you believe in it and think you have something to share with the world. At this point, writing feels a lot like this:
When a deadline hits, or you feel you’ve done enough work to share with others, you may feel a bit anxious about releasing the kids into the wild, but you reassure yourself that they can hold up on their own. But, at some point, submitted manuscripts return home like boomerang adult children with several “needs more work” recommendations safety-pinned to their shirts.
In some cases, you bite your lip, as you supported or even encouraged this child’s majoring in pomo-such-and-such studies. However, sometimes agents of the cruel world (i.e., reviewers and editors) disagree about whether it needs another one of these and what can be done to improve chances for independent living. You are grateful for the feedback, but it’s not always clear how you can implement changes, especially when recommendations conflict. In the meantime, the child is lying aimlessly on your couch with earbuds in, leaving dirty dishes and empty candy wrappers everywhere, and muttering monosyllabic responses to your increasingly alarmed inquiries about future steps towards independence.
During these times, the rewriting process feels more like this:
More comparisons after the jump…
Want to see Big Data in action? More Tweets/More Votes will be presented on Monday, 8:30 am in the session on voting and elections.
Also, if anyone wants to chat, I can do Monday breakfast, 10:30 – 1pm-ish. I will also attend the book release party for guest blogger emeritus Hilary Levy Friedman on Monday. Her book, Playing to Win, will soon be released by the University of California Press. Email me if you want to meet up.
Recently on Facebook, a friend commented that they had received more opportunities in the last few weeks than in the last few years of a distinguished, but rocky, career. I was reminded that the same had happened to me. For the first 5 years after grad school, I ate alone at ASA most of the time. Emails would go unanswered. Don’t pity me. It wasn’t *that* bad. A few buddies would invite me for talks.
Then, sometime around 2008, it changed overnight. People just started contacting me for various reasons. Some were blog readers, others were in my specialty. Others contacted me for teaching, professional, or publishing purposes. I call it the academic phase transition.
My guess is that this occurs in most professions. Just by getting older and managing to get a few things done, you become visible. And, though it’s predictable in some sense, we experience it as a discontinuous effect on our self-image.
Dirk vom Lehn is a lecturer in the Department of Management at King’s College London. His research focuses on ethnomethodology in organizational settings. He asked if I could post this response to Christakis’ NY Times article on the need to update the social sciences.
Stagnating the Social Sciences? A Response to Nicholas Christakis
In his recent piece “Let’s Shake Up the Social Sciences,” published in the New York Times on July 19th, Nicholas Christakis calls for interdisciplinary research that creatively links the social sciences to other disciplines, in particular the natural sciences. I very much welcome his efforts to open a debate about the future of the social sciences. All too often scientists create separate enclaves of knowledge that, if joined up with others, could lead to important new academic, technological and political developments. There are, however, a few problems with Christakis’ argument. I wish to briefly address three of them here:
I am surprised Christakis puts forward the argument that “the social sciences have stagnated” over the past years. He gives no empirical evidence for such a stagnation of the social scientific disciplines, and I wonder what the basis for this argument is. If he were to attend the Annual Conference of the American Sociological Association (ASA) in New York in August, he would see how sociology has changed over the past few decades, and he would be able to identify specific areas where sociologists have impacted developments in policy, technology, medicine, the sciences, the arts and elsewhere.
His argument also ignores the long-standing cooperation between social scientists, technology developers, computer scientists, medics, health service providers, policy makers, and many others. For example, for several decades social scientists, computer scientists and engineers have collaborated at research labs such as Xerox PARC and Microsoft, jointly working to develop new products and services.
A loyal reader and Fabsterista asked me to post this job announcement:
The Department of Sociology at UC San Diego (http://sociology.ucsd.edu/) is committed to academic excellence and diversity within the faculty, staff, and student body. The department invites applications for the newly endowed Daniel Yankelovich Chair in Social Thought beginning July 1, 2014. The substantive areas of the chair-holder’s research are open. However, the holder of the Yankelovich chair should be a senior scholar whose research and teaching clearly demonstrate the ability to transcend the boundaries of their discipline in understanding important issues and problems; to place their research and thinking in the larger context of society; and to communicate cogently and clearly, with a view to exercising influence in both the academy and the world beyond the academy. Interested individuals are asked to submit a CV and samples of their written work. We also ask for separate statements concerning the candidate’s research agenda and their contributions, or potential for contributions, to diversity. All application materials should be submitted electronically via UCSD’s Academic Personnel On-Line RECRUIT (https://apol-recruit.ucsd.edu/). Please select the following job title: SOCIOLOGY Yankelovich Chair (10-XXX) JPF00XXX.
The review of applications will begin Friday, November 1, 2013 and will continue until the position is filled. Salary commensurate with qualifications and based on UC pay scale.
Check it out.
Update: Olderwoman reminds me that she did not single out the ASR in her original post. I have revised this post to reflect that. Regardless, we agree that journal norms are broken, especially at our beloved flagship journal.
This morning, I read olderwoman’s blog post about problems with journals that request too many revisions, or that invite revisions too easily (“inflated R&Rs”). This issue has arisen with respect to the American Sociological Review, the flagship journal of the American Sociological Association. The ASR has been giving R&Rs to many submitted articles, far more than average, and it has been soliciting many reviews per article. It has also been sending articles through multiple rounds of revision, leading to articles being held at the journal for years. Since the journal seems to accept the same number of articles per year (about 40), the multiple rounds of revision do not lead to publication for many authors. Here is my response to that post:
I am asking the American Sociological Review to curtail this practice. In writing this, I have no personal stake in this matter. I do not have any papers under review, nor has the ASR accepted my previous submissions. I only write as a member of the profession, senior faculty at a top 20 program, a former managing editor of an ASA journal (Sociological Methodology), former associate editor of the American Journal of Sociology, occasional board member for various journals, author, and reviewer.
The inflated R&R policy is damaging sociology in a few ways. First, by continually R&R’ing papers that have little chance of publication, the ASR is “trapping” papers that may be perfectly suitable for specialty journals or other outlets. Thus, inflated R&Rs keep good research out of the public eye for years. You are suppressing science.
Second, inflated R&Rs damage the reputation of the ASR itself. The goal of a flagship journal is to be very picky. When people hear that a paper has been invited for revision, they believe that the editors think that the paper is of great merit and wide relevance. Inflated R&Rs undermine that perception.
Third, you are damaging people’s careers. By trapping papers, you are preventing them from being resubmitted to other journals, where publication could help their authors’ careers. Also, R&R invitations are often seen as signs of intellectual progress, especially for doctoral students and junior faculty. By lumping together strong and weak papers, you are debasing the “currency” of the R&R. When people see “R&R at American Sociological Review,” they no longer know what to think, and that pollutes the junior-level job market.
Fourth, you are wasting precious time. Reviewers are usually full-time faculty who teach, mentor graduate and undergraduate students, do administrative work, conduct research, and have full family lives. Thus, when you ask for a fourth reviewer, or invite a paper for a third round of R&R, you are taking up many, many scarce resources.
If a typical professor earns $50/hour, and it takes about 3 hours to read a paper and write comments, then three rounds of R&R with four reviewers each creates a cost of $50 * 3 * 4 * 3 = $1,800 for each paper. By doing that for hundreds of papers, you’ve burned up almost half a million dollars in faculty time. And that’s not to mention the ill feeling generated when reviewers see yet another request for a review.
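The back-of-the-envelope arithmetic above can be sketched in a few lines of Python. All the inputs ($50/hour, 3-hour reviews, 4 reviewers, 3 rounds, and a run of 250 papers) are illustrative assumptions for the estimate, not actual ASR figures:

```python
# Reviewer-cost estimate using the post's illustrative assumptions.
hourly_rate = 50          # $ per hour of faculty time (assumed)
hours_per_review = 3      # time to read a paper and write comments (assumed)
reviewers_per_round = 4   # reviewers solicited each round (assumed)
rounds = 3                # rounds of R&R (assumed)

cost_per_paper = hourly_rate * hours_per_review * reviewers_per_round * rounds
print(cost_per_paper)  # 1800

# Scaled to a few hundred papers, the bill approaches half a million dollars.
papers = 250              # hypothetical paper count
print(cost_per_paper * papers)  # 450000
```

The point of spelling it out is that the total scales linearly in every input, so cutting any one of them (fewer rounds, fewer reviewers) cuts the collective cost proportionally.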
So, please, implement policies that ensure an efficient, reliable, and highly selective review process. It’s the right thing to do.
In Monty Python and the Holy Grail, one of the most insightful scenes is when Sir Lancelot charges a castle to save a maiden in distress. What makes it funny is that he charges across an open field for a few minutes while being completely ignored by the guards. When he finally reaches the castle door, the guards act totally surprised. But of course, they should have seen it coming.
Sociology is having that moment right now. Right now, the territory of the social sciences is under pressure to expand and reshape itself. And we’ve seen this coming for a while. The forces are many. Increasing knowledge of gene-behavior links. The appearance of “Big Data,” which we’ve argued about on this blog. The demand for experiments from the policy world.
There are a few responses. We can simply ignore these trends and continue as usual. That’s what Nicholas Christakis was arguing against in his column. Or, we can uncritically accept them, a position which has some advocates. The response I’d prefer to see is a more thorough engagement, an integration of these issues into the core social sciences. Otherwise, we’ll become the discipline of 20th century theory and methods, not the place that comprehensively looks at social life.
Neil Gross cements his position as the leading sociologist of American intellectuals with his new book Why Are Professors Liberal and Why Do Conservatives Care?* This book collects into one text a series of arguments about the American professoriate that Gross and his collaborators have presented in a series of articles. Essentially, Gross argues that American academia is, on average, liberal because of self-selection on the part of conservatives. The specific issue is that academia, for a number of historically specific reasons, has acquired an aura of extreme liberalism. Thus, conservative students say, “Why bother? Academia is for liberals. What’s the point?”
What is impressive about Gross and his confederates is that they test all kinds of alternative hypotheses. For example, one might think that academic skills explain conservatives’ lower enrollment in PhD programs. But they don’t. Differences in values don’t explain much either. In other words, Gross et al. systematically test all kinds of hypotheses and show that they are simply not true, or that they explain only a small proportion of the differences between conservatives and others.
Eventually, using historical evidence and interview data, Gross makes a good case for self-selection. Sociology is a good example. In principle, there are lots of places for non-liberal sociologists. For example, one could work on non-ideological aspects of sociology, like research methods. Or, as many conservatives have done, they could work in areas of interest like family sociology, where in some cases (like studies of the negative effects of divorce on kids) the topics are consistent with their ideology. But if you sit down and ask a typical conservative undergrad why they didn’t take many soc courses, they’ll describe an image of evil ultra-liberals bent on political correctness.
Now, where I would criticize this book is its study of conservatives. For example, Gross argues that there isn’t much evidence of bias against conservatives. He uses the example of a study he conducted with Jeremy Freese and Ethan Fosse, in which they contacted graduate directors with emails from fake students. Some emails mentioned working for a GOP candidate, some a Democrat, and others none at all. Gross et al. find no differences in how graduate directors responded.
First, there’s the issue, which Gross acknowledges, that graduate directors probably write a lot of boilerplate emails. But there’s a deeper criticism – why didn’t Gross interview people at risk of discrimination from liberal colleagues? For example, why not interview liberal (Keynesian) and conservative (monetarist or Austrian) economists? Or, why not interview Rawlsian philosophers (liberals) and compare their careers with Nozickians (libertarians) or Burkeans (conservatives)? Or, even better, why not collect materials from people who submitted books or articles on conservative topics but were rejected?
I think that Gross is right – anti-conservative bias is not nearly as bad as people think, if it exists at all – but the treatment of conservatives is not nearly as nuanced as the treatment of liberals. This probably speaks to the development of the project, which started with analyzing massive data sets (like the GSS) to tease out conservative/liberal differences. Developing a theory or map of conservative intellectuals probably came late in the game.
Regardless, this book is massive progress on a central issue in the study of American intellectuals and the academy. This will be required reading for anyone interested in this topic.
* And I’m not saying that because he said nice things about me in the book. But he did. Oh yeah, and I’m not just saying it because he edited another cool forthcoming book about academia with a chapter by moi. But he did. Ok, maybe he buttered me up a little. But just a little!
Two online discussions motivate this post. First, there is the discussion of women in academia, prompted by a Slate article that reports a baby penalty for female scholars. Second, there was a recent twitter discussion about some sociologists who are leaving academia to work for Silicon Valley.
The underlying issue is that academic careers are poorly structured. They essentially require that people accept low pay and job insecurity for at least ten years – and that assumes you skip the post-doc route, that the PhD takes only 5 years, and that you come up for tenure at the beginning of year 6. In other words, academia requires that individuals shoulder a great deal of risk compared to other professionals. This is obviously hard for women. It is also a bad deal for people who can work in exciting fields outside of academia, such as people with strong programming skills.
The result is that academia is suffering a brain drain. We are losing all kinds of people – women, people with good technical skills, and so forth. Collectively, people just shrug their shoulders and do nothing. And it makes sense – we don’t get rewarded for improving the discipline. We only get rewarded for publication.
But, still, there are some concrete steps that we can take. For example, to help retain women, we should actively make it easier to have children and raise them earlier in the life course. Parental leave is good, but we should also pull back on service work for pre-tenure faculty. For programmers, we should think about structuring PhD programs so that they don’t sprawl into decade-long endeavors that astronomically increase the opportunity cost of academia.
I am now at the age where I actually have PhD students working with me. In other words, I need to apply the grad skool rulz to my own life. In the spirit of discussion, I outline my philosophy as a teacher of PhD students:
- Be firm but nice. No need to make people cry.
- My discipline has norms and standards for research that can be taught. I will teach this “normal science” to my students.
- I will be flexible. Though most students have to master the “meat and potatoes” of research, some can work on more idiosyncratic projects.
- I will be in my office a lot. Students can drop in or make appointments for the short term.
- I will provide concrete directions when possible.
- I will provide specific detailed advice on professional issues, like article writing, the job market, and teaching.
- I am hands on – I want people to contact me a lot.
- I will help students develop projects they can complete in a timely fashion. No need to produce that 100 page dissertation proposal, a shorter one will do.
- I will not tell you what to research, but I will give you lots of advice on how to execute it.
- I will give you ample opportunities to co-author.
- I will get paperwork done on time.
- I will accept any student, unless they have shown gross academic incompetence or they are working on a topic that I simply can’t help them with.
Consider this an open thread on graduate student mentoring.
I boil down a few arguments, my own and from the last round of comments, in favor of three essays as the default for academia:
1. In all sciences, most professional fields, and most social sciences, articles are standard. There are even humanities areas, like philosophy, where articles are standard. Even for book writers, articles are important. Most book writers do an article or two before jumping to the book.
2. The purpose of the dissertation is to show the ability to conduct research. Creativity is great – and should be rewarded with a degree – but the standard is normal science and competence.
3. The purpose of the graduate program is professional training – not an extended multi-year post-doc.
4. Standards actually protect students. It is too easy for faculty to hold students to unattainable standards, and drag them out for years. Moving goal posts is a real problem in graduate education. I’ve seen it happen too often. If there is a concrete standard, both students and faculty will know when “enough is enough.” There is a basis for appeal if professors are being unreasonable.
5. Three essays is a default, not a requirement. But still, in sociology, for example, the overwhelming majority should probably start with that unless they do ethnography. In other words, try a few articles. If your ideas *really* require more space, ok. But try the basics first.
Bottom line: Doctoral programs are about professional training and most academic professions focus on articles. That doesn’t mean that we should not allow more ambitious dissertations, but that should be reserved for a small minority of cases.
One of the problems of graduate education in many fields is that the requirements for the dissertation are vague. Another issue is that the dissertation is a book-length treatment, even in fields where articles are standard. This leads students to spend years writing overly long documents that have little value. For that reason, I encourage all my students to use the “three essays” format as the default. It’s simple, it works, and they’ll get done. If they have a good reason for deviating, then we can talk about it. But most folks should really stick to “three essays.”
There is now more systematic research showing that this advice is correct. A recent AER paper authored by Wendy Stock and John Siegfried shows that economists who use the “three essays” format do better in terms of academic job placement and subsequent publication. The abstract says it all:
Dissertations in economics have changed dramatically over the past forty years, from primarily treatise-length books to sets of essays on related topics. We document trends in essay-style dissertations across several metrics, using data on dissertation format, PhD program characteristics, demographics, job market outcomes, and early career research productivity for two large samples of US PhDs graduating in 1996-1997 or 2001-2002. Students at higher ranked PhD programs, citizens outside the United States, and microeconomics students have been at the forefront of this trend. Economics PhD graduates who take jobs as academics are more likely to have written essay-style dissertations, while those who take government jobs are more likely to have written a treatise. Finally, most of the evidence suggests that essay-style dissertations enhance economists’ early career research productivity.
My take home message? We should drop the pretense of the sprawling dissertation. All departments should require or strongly encourage the three essay format as the default. If the student wants something else, they need to make the argument.
Hat tip to our evil twin, Organizations and Markets.
It has been a while since I have posted on orgtheory.net and sadly I am jumping back into the fray to announce the death of one of the great men of organizational sociology. Michel Crozier died last night in Paris. He was 91.
I moved to Paris two years ago to join the research center that Crozier founded, the Centre de Sociologie des Organisations (CSO). The CSO is associated with the Institut d’Études Politiques de Paris (Sciences Po). Crozier also taught at Sciences Po for many years.
Crozier’s intellectual journey began, as mine did, with a study of the United States labor movement. But it was his 1964 book, The Bureaucratic Phenomenon, that established him as a major voice in our field. That book challenged (or maybe it is better to say, evolved) the Weberian view of bureaucracy. Before him, organizational theory focused largely on what we could see in an organizational chart. What went on behind that chart — the interpersonal relationships in which were embedded multiple, often contradictory systems of power — was seen as a distraction or, worse, something to be suppressed. Along with his contemporary, Alvin Gouldner, Michel Crozier brought these kinds of relationships into the light. This led Crozier to conclude that organizations limited actors as much as they enabled them; that organizations were not simply solutions to problems, they were problems to be solved too. Myriad schools of thought within our field have followed from this.
Moreover, as I have come to understand, the distinction that many of us Americans hold on to between “objective” social science and the messier “real” world of administrative control (and reform) holds much less sway here in France. Crozier was not “just” an academic. He was a critic and a crusader for changes in French society and beyond. It was from this side of his work that his student and collaborator, Erhard Freidberg, set the intellectual tone for Sciences Po’s Master of Public Affairs, of which I am now the Director. So I owe him not only an intellectual debt of gratitude, but an organizational one as well.
Bon voyage Monsieur Crozier. Reposez en paix.
In a past installment of grad skool rulz, I offered advice for choosing a dissertation adviser. The idea is simple. Nobody is perfect, but you want someone who has at least a few good traits and no horrible traits. As I was thinking about this post, I wondered – how do graduate students actually choose their advisers? Do people actually methodically try to find a match or do they just “fall” into it? Why do people get stuck with horrible (or good) advisers?
In my own case, I just fell into it and it worked out. I worked with two faculty members based on similar research interests (education) and style (both normal science types). But what about people who choose poorly? Part of the issue is that there simply isn’t enough information. Unless you are in a large program, most faculty won’t have more than one or two students in their career. In other cases, students don’t have much choice. For example, if you want to study sociology of science at Indiana, there’s really only one choice. Yet, I still see some students choose advisers who have well-developed reputations for being difficult, or advisers who have really slim track records in placing students. My guess is that students believe that they’ll be the exception to the rule.
Consider this post an open thread on how to effectively find an adviser in graduate school. What are you considering as you choose an adviser?
Last Fri., I attended a talk by Sarah Babb of Boston College. In her talk, titled “Beyond the Horror Stories: Non-Experimental Social Researchers’ Encounters with Institutional Review Boards (IRB),” Babb revealed findings that included misconceptions about federal guidelines for human subjects. Contrary to what some IRBs demand from principal investigators (PIs) undertaking qualitative research, the federal guidelines do not require:
- signed consent from a low risk population
- an institutional research permission slip
To repeat, the above two are “not in federal regulations at all.”
Babb noted that at larger institutions, IRBs often involve nonprofessionals – that is, those who don’t have appropriate professional expertise – in the decision-making processes about proposals. Moreover, qualitative research doesn’t fit well into the one-size-fits-all medical template often used to vet research proposals. Compounding these challenges is the lack of accountability in terms of IRBs’ responsibilities to PIs. Only 20% of the IRBs that Babb examined had an appeals procedure that would allow PIs to contest decisions.
Not surprisingly, this talk evoked spirited discussion of the myriad problems encountered by researchers going through the IRB process at their institutions, as well as the unintended consequences of a review process ostensibly intended to protect human subjects. The audience noted the following unintended and undesired consequences: (1) normalized deviance,* (2) a chilling effect upon the types of research undertaken, and (3) mission creep, in which IRBs critique the suitability or worth of the research design rather than evaluating risk to human subjects. In particular, senior researchers worried that tenure-track faculty and graduate students face great uncertainty about whether their project proposals will successfully navigate the IRB process in a timely fashion.
Audience members asked whether the sociologists’ professional association, the American Sociological Association (ASA), had taken an official position on IRB guidelines. None present were aware of any such activities (if you know of anything brewing from this or other associations, do write them in the comments). Attendees noted that because a tenured faculty member may be more able to surmount IRB issues on his/her own (or not need to go through the IRB process because of the type of research conducted), fashioning IRB standards that are more appropriate for a wider variety of research methods is a collective action problem.
I opined that these identified problems need to be considered a commons issue. Those with more power should consider it a professional responsibility to help budding researchers – undergraduate students, graduate students, junior faculty – go through an IRB process that is appropriate to their research methods and questions, especially if researchers hope to have future generations of audiences and colleagues. Unfortunately, dark humor may not be sufficient to get the point across – when a psychology colleague sent his IRB board a proposal to reproduce the Stanley Milgram experiment on April Fool’s Day, an IRB staffer called to inquire if the proposal was serious.
* One of my past posts discussing the IRB draws a steady stream of traffic from those searching for the answer to one of the quiz questions on the online Collaborative Institutional Training Initiative (CITI), a certification program mandatory for researchers and students at some institutions.
I am one of those people who thinks that we should not encourage people to enter the academic profession unless they are extremely committed to scholarship and they show exceptional promise. This advice often triggers a reaction that is summarized as: “You are evil! You want to exclude poor people/minorities/women/others from academia!”
My response: encouraging an expansion of graduate education does not address most aspects of inequality and might make it worse in many cases. For example, there is a large gap between whites and blacks in terms of education, income, and wealth. Sending people to graduate school will not address this gap. There are many reasons: lots of people don’t finish the degree; huge opportunity costs; low-paid adjunct work after graduation; accumulation of burdensome debt; and the tenure track pays modestly compared to other professions with similar qualifications. These factors suppress mobility.
In contrast, there lots of other professions that are much more likely to lead to good income and mobility. If we want to genuinely shrink the income gap between people of color and whites, for example, we are much wiser to encourage engineering and health science careers. You’ll get the degree in a few years and almost immediately jump higher in the income distribution. Way, way, way easier than going for that anthropology PhD and hoping for a tenure track job 12 years later.
If we want to address inequality within academia (i.e., increasing representation on the faculty), we should reserve our efforts for getting people through the PhD pipeline and into jobs. We shouldn’t cram more graduate students into the pipeline. We should actually ask the logical questions: What can we do to ensure that students acquire the right skills in academia? How can we make sure that they develop the right networks, the ones that lead to publication in the “right” journals and thus to the “right” jobs?
Sadly, very little effort goes into this side of things. It’s easier to count minorities and women and yell, “not fair! we need more!” It’s much harder to confront tenured faculty (like myself), and say: “Why haven’t you co-authored with women (or minorities) so that they may have a shot at a good tenure track job?” Let’s put the brakes on enrolling more students into doctoral programs and take up the less glamorous, but more important task, of making sure that the ones in the system will actually have the best careers possible.
I have a bleg. What do you think are the best organizational theory papers published in a sociology or management journal in 2012? I’m on a nominations committee and I don’t want to miss anything. Let me know what you think in the comments.
While looking up some literature on organizations, I found a 2003 American Sociologist article, “The business of becoming a professional sociologist: Unpacking the informal training of graduate school,” which might be of interest to fans of Fabio’s grad skool rulz. Like Fabio, David Shulman and Ira Silver discuss lessons they wish they had known while in grad school.
A few choice excerpts that might resonate with our readers and thread commenters regarding graduate training and professionalization:
On training for teaching-oriented vs. research-oriented institutions:
“To be sure, opportunities still exist for graduate students to become good teachers and to land faculty jobs that focus primarily on undergraduate teaching. Yet a highly ranked sociology department geared toward producing successful academic researchers is not the kind of place where graduate students are likely to acquire informal knowledge about how to tap these opportunities. One such piece of crucial knowledge is that regional networks seem to matter in landing teaching-oriented jobs in a manner that is not comparably true for research jobs.”
On work/life balance:
“While the academic world of graduate school can be an oasis of ideas and intellectual excitement, that oasis also can be a dark place. Graduate school can seem like a treadmill in which no matter how fast you run, you will not get where you are desperate to go. Graduate school cannot consume your life. Life goes on even here — you are an adult even while you are an apprentice. People get married, have kids, and take on outside projects and interests. Do not lose sight of your own life — these are the young years for many of us. Bitter Graduate Student Syndrome is to be avoided if possible. There is an outside world beyond your studies. But don’t spend too much of your time away, either. When graduate school voluntarily becomes purely a distant second fiddle to outside world pursuits, you can expect to add more years to your Ph.D. timetable, potentially unhappy ones.”
On “The Institutional Reluctance to See Sociology as a Business”:
“…some faculty may believe that informal socialization into the profession is to be earned only by virtue of a graduate student’s high talent level. There are at least two tiers of distributing informal professional knowledge: one in which students go through the program oblivious to the subterranean world of tips, and another where some students, anointed by their perceived ability, motivation, and a professor’s discretion, advance forward armed with crucial insights and connections. Thus, failing to openly distribute professional socialization can be an invisible and unstated form of hierarchical gatekeeping, meritocratic-based inequality in the midst of the appearance of egalitarian training.”
When I posted the Sociology Department Rankings for 2013 I joked that Indiana made it to the Top 10 “due solely to Fabio mobilizing a team of role-playing enthusiasts to relentlessly vote in the survey. (This is speculation on my part.)” Well, some further work with the dataset on the bus this morning suggests that the Fabio Effect is something to be reckoned with after all.
The dataset we collected has—as best we can tell—635 respondents. More precisely it has 635 unique anonymized IP addresses, so probably slightly fewer actual people, if we assume some people voted at work, then maybe again via their phone or from home. Our 635 respondents made 46,317 pairwise comparisons of departments. Now, in any reputational survey of this sort there is a temptation to enhance the score of one’s own institution, perhaps directly by voting for them whenever you can (if you are allowed) or more indirectly by voting down potential peers whenever you can. For this reason some reputational surveys (like the Philosophical Gourmet Report) prohibit respondents from voting for their employer or Ph.D.-granting school. The All Our Ideas framework has no such safeguards, but it does have a natural buffer when the number of paired comparisons is large. One has the opportunity to vote for one’s own department, but the number of possible pairs is large enough that it’s quite hard to influence the outcome.
It’s not impossible, however.
Update: I updated these analyses (fixing the double-counting problem). The results changed a little, so reload to see the new figures.
Last week we launched the OrgTheory/AAI 2013 Sociology Department Ranking Survey, taking advantage of Matt Salganik’s excellent All Our Ideas service to generate sociology rankings based on respondents making multiple pairwise comparisons between departments. That is, questions of the form “In your judgment, which of the following is the better Sociology department?” followed by a choice between two departments. Amongst other advantages, this method tends to get you a lot of data quickly. People find it easier to make a pairwise choice between two alternatives than to assign a rating score or produce a complete ranking amongst many alternatives. They also get addicted to the process and keep making choices. In our survey, over 600 respondents made just over 46,000 pairwise comparisons. In the original version of this post I used the Session IDs supplied in the data, forgetting that the data file also provides non-identifying (hashed) IP addresses. I re-ran the analysis using voter-aggregated rather than session-aggregated data, so now there is no double-counting. The results are a little cleaner. Although the All Our Ideas site gives you the results itself, I was interested in getting some other information out of the data, particularly confidence intervals for departments. Here is a figure showing the rankings for the Top 50 departments, based on ability scores derived from a direct-comparison Bradley-Terry model.
The model doesn’t take account of any rater effects, but given the general state of the U.S. News ranking methodology I am not really bothered. As you can see, the gradation looks pretty smooth. The first real “hinge” in the rankings (in the sense of a pretty clean separation between a department and the one above it) comes between Toronto and Emory. You could make a case, if you squint a bit, that UT Austin and Duke are at a similar hinge-point with respect to the departments ranked above and below them. Indiana’s high ranking is due solely to Fabio mobilizing a team of role-playing enthusiasts to relentlessly vote in the survey. (This is speculation on my part.)
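For readers curious what fitting a Bradley-Terry model to this kind of data actually involves, here is a minimal sketch using the classic Zermelo/minorization-maximization iteration. The departments and votes below are invented for illustration (they are not from our survey data), and this toy version omits the confidence intervals shown in the figure.

```python
# Fit a simple Bradley-Terry model to pairwise "which is better?" votes.
# Each department i gets an ability score p_i; the model says i beats j
# with probability p_i / (p_i + p_j). Vote data here is made up.
from collections import defaultdict

votes = [  # (winner, loser) pairs, one per pairwise comparison
    ("Berkeley", "Emory"), ("Princeton", "Emory"),
    ("Berkeley", "Princeton"), ("Princeton", "Berkeley"),
    ("Berkeley", "Emory"), ("Princeton", "Emory"),
]

depts = sorted({d for pair in votes for d in pair})
wins = defaultdict(int)      # total wins per department
games = defaultdict(int)     # comparisons per unordered pair
for w, l in votes:
    wins[w] += 1
    games[frozenset((w, l))] += 1

p = {d: 1.0 for d in depts}  # ability scores, initialized equal
for _ in range(200):         # MM updates until (effective) convergence
    new_p = {}
    for i in depts:
        denom = sum(games[frozenset((i, j))] / (p[i] + p[j])
                    for j in depts if j != i and games[frozenset((i, j))])
        new_p[i] = wins[i] / denom if denom else p[i]
    total = sum(new_p.values())          # normalize so scores sum to 1
    p = {d: v / total for d, v in new_p.items()}

ranking = sorted(depts, key=p.get, reverse=True)
```

With these made-up votes, the two departments that split their head-to-head comparisons and both beat the third end up with identical scores, and the winless department drops to the bottom, which is the behavior you would expect from the model.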
How should you pick a graduate school? Well, start by purchasing my book – The Grad Skool Rulz, the best grad school advice book you can get. But if $3 is too much, or you’re just lazy, here’s the way you pick graduate school. If you want a decent academic career, follow these steps in order:
- Rank – Don’t fuss over minor differences (#6 vs. #12), but a lot of your early academic career depends on your PhD institution. There are roughly 2.5 zones: the top five (“elite”); the top twenty; and some disciplines give credit for the top 40 or so. If you don’t want an academic career, skip this step. In most private industry, PhD program rank doesn’t matter much.
- Toxicity – A lot of PhD programs burn out students. This is extremely important to know. I have successfully recruited to Indiana from higher ranked schools with a speech that starts: “What good is a Chicago PhD if you never get it?”*
- Intellectual fit – If you get into a decently ranked school and students actually graduate with decent jobs, then you can ask about fit.
- Financial support – You aren’t in this business to make money, but be sure you will leave without debt.
Remember, follow these in order!
* I pick on Chicago because it is my beloved alma mater. I love you guys, but you are known as the “PhD graveyard” in some parts – and for good reason!
While we’re running our Crowdsourced Sociology Rankings, people have been looking a little more closely at the U.S. News and World Report rankings. Over at Scatterplot, Neal Caren points out that U.S. News’s methods page has some details on the survey sample size and response rates. They’re bad:
Surveys were conducted in fall 2012 by Ipsos Public Affairs … Questionnaires were sent to department heads and directors of graduate studies (or, alternatively, a senior faculty member who teaches graduate students) at schools that had granted a total of five or more doctorates in each discipline during the five-year period from 2005 through 2009, as indicated by the 2010 "Survey of Earned Doctorates." … The surveys asked about Ph.D. programs in criminology (response rate: 90 percent), economics (25 percent), English (21 percent), history (19 percent), political science (30 percent), psychology (16 percent), and sociology (31 percent). … The number of schools surveyed in fall 2012 were: economics—132, English—156, history—151, political science—119, psychology—246, and sociology—117. In fall 2008, 36 schools were surveyed for criminology.
So, following Neal, this tells us the Sociology rankings are based on a survey of 117 Heads and Directors with a response rate of 31 percent, which is thirty-six people in total. For Economics you have 33 people, for History 29 people, for Political Science 36 people, for Psychology 40 people, and for English 33 people. The methods page also notes that they calculate the scores using a trimmed mean, so they throw out two observations each time (the highest and the lowest). The upshot is that the average score of a department is likely to have rather wide confidence intervals.
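The arithmetic behind those respondent counts, and the trimmed mean the methods page describes, is simple enough to sketch. The sample sizes and response rates below are from the quoted methods page; the 1–5 peer ratings are invented for illustration.

```python
# Respondent counts implied by the U.S. News sample sizes and response
# rates, plus a trimmed mean that discards the single highest and
# lowest rating, as the methods page describes.
surveyed = {"Sociology": 117, "Economics": 132, "English": 156,
            "History": 151, "Political Science": 119, "Psychology": 246}
response_rate = {"Sociology": 0.31, "Economics": 0.25, "English": 0.21,
                 "History": 0.19, "Political Science": 0.30,
                 "Psychology": 0.16}

# e.g. Sociology: round(117 * 0.31) = 36 raters
respondents = {d: round(surveyed[d] * response_rate[d]) for d in surveyed}

def trimmed_mean(scores):
    """Mean after discarding the single highest and single lowest rating."""
    kept = sorted(scores)[1:-1]
    return sum(kept) / len(kept)

ratings = [3, 4, 4, 5, 1, 4]      # hypothetical 1-5 peer ratings
score = trimmed_mean(ratings)     # drops the 1 and one of the 5s
```

With only ~30–40 raters per discipline, trimming two observations removes a nontrivial share of the data, which is part of why the resulting averages carry such wide uncertainty.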
But, don’t let all that get in the way of contemplating the magic numbers. The press releases from strongly-ranked departments are already coming thick and fast.
Update: These numbers are too low. Read on.
I guess it’s possible that U.S. News *might* mean that the *effective* N of, e.g., the Sociology survey is 117, and that’s the result of a larger initial survey which yielded a 31 percent response rate. On that interpretation, they initially contacted 378 departments (or thereabouts). That would be a non-standard way of describing what you did. Normally, if you give a raw number for the sample size and tell us the response rate, the raw number is the N you began with, not the N you ended up with. A quick check of the Survey of Earned Doctorates suggests that there were 167 Ph.D.-granting Sociology programs in the United States in 2010, which suggests that 117 is about right for the number who had awarded five or more in the past five years. Same goes for Economics, which has 179 Ph.D. programs in the 2010 SED. Then again, the wording in the methods can also be read as saying every department might have received two surveys (“Questionnaires were sent to department heads and directors of graduate studies … at schools that had granted a total of five or more doctorates … during the five-year period from 2005 through 2009”). Looking again at the available SED data for 2006 to 2010 (one year off the USNWR dates, unfortunately), I found that 115 Sociology Departments met the stated criteria of having awarded five or more doctorates in the previous five years. If both the Dept Head and DGS in all those departments got a survey, this makes for an initial maximum N of 230, which is still quite far from the 378 or so needed, if 117 is supposed to mean the 31 percent who responded rather than the total number initially surveyed.
It seems like the most plausible interpretation is that for Sociology the number of schools surveyed is in fact 117, that every school received two copies of the questionnaire (one to the Head, one to the DGS or equivalent), but that the 31 percent response rate means “schools from which at least one response was received”, and so the total N of surveys for Sociology is somewhere between 36 and 72 people, with a similar range of between 30 and 80 for the other departments.
Update: While I was offline dealing with other things, then looking at the SED data I’d downloaded, then writing the last few paragraphs above, I see others have come to the same conclusion as I do here by more direct and informed means.
As many of you are by now aware, U.S. News and World Report released the 2013 Edition of its Sociology Rankings this week. I find rankings fascinating, not least because of what you might call the “legitimacy ratchet” they implement. Winners insist rankings are absurd but point to their high placing on the list. Here’s a nice example of that from the University of Michigan. The message here is, “We’re not really playing, but of course if we were we’d be winning.” Losers, meanwhile, either remain silent (thus implicitly accepting their fate) or complain about the methods used, and leave themselves open to accusations of sour grapes or bad faith. They are constantly tempted to reject the enterprise and insist they should’ve been ranked higher, and so end up sounding like the apocryphal Borscht Belt couple complaining that the food here is terrible and the portions are tiny as well.
The best thing to do is to implement your own system, and do it better, if only to introduce confusion by way of additional measures. Omar Lizardo and Jessica Collett have already pointed out that U.S. News decided to cook the rankings by averaging the results from this year’s survey with the previous two rounds. They provide an estimate of what the de-averaged results probably looked like. Back in 2011, Steve Vaisey and I ran a poll using Matt Salganik’s excellent All Our Ideas website, which creates rankings from multiple pairwise comparisons. It’s easy to run and generates rankings with high face validity in a way that’s quicker, more fun, and much, much cheaper than the alternatives. So, we’re doing it again this year. Here is the OrgTheory/AAI 2013 Sociology Department Ranking Survey. Go and vote! Chicago people will be happy to hear that you can vote as often as you like. So, participate in your own quantitative domination and get voting.
I’ve recently finished Joel Mokyr’s The Enlightened Economy, an economic history of Britain during the industrial revolution. The book is an exhaustive argument about the role of Enlightenment ideas on economic development. I won’t go into detail here, but I’ll summarize it by merely saying that the book is a thorough review of the literature on Britain through the eyes of economists and historians.
Today, I want to make a comment on an observation of Mokyr. In his review of research in higher education during British industrialization, he notes the following:
- Higher education was very rare
- Innovators and industrial leaders were mostly uneducated
- Individuals with elite education (e.g., Oxbridge) were fairly rare among the ranks of the industrial leadership
Mokyr raises this point in service of the argument that Britain’s economic expansion can’t be attributed to rising quality of education, since most people were not well educated until well after the industrial revolution. My point: This is somewhat analogous to economic expansion today. Leading Silicon Valley firms aren’t always, or even usually, built by people who have advanced degrees. I can think of only one such major firm (Google). Microsoft, Facebook, and Apple were founded by college dropouts, albeit elite dropouts. Groupon was founded by a public policy grad school dropout (not computer science). Twitter’s founder was a computer geek in high school but went to un-glamorous Missouri Tech, then later went to NYU, not known as a computer science hub.
The conclusion: You need an educated work force to carry out ideas, but the leadership doesn’t need a lot of education. Rapid economic expansion seems to hinge on having a mix of smart people who get their “training” from a wide variety of sources, not just college. Colleges are more about educating the masses who compose the rest of the organization.
Becoming Right: How Campuses Shape Young Conservatives, by Amy Binder and Kate Wood, is the latest entry into the growing scholarship on conservative politics in America. They ask a simple question: how do campus environments shape conservative political styles? This is an important question for two reasons. First, there is relatively little research on conservative students. Second, culture depends on organizational environment. How ideas are expressed is affected by where ideas are expressed. Definitely a worthy question for sociologists.
So what do Binder and Wood discover? They focus on two campuses for their case study – big public West Coast and fancy private East Coast. They choose these campuses because they have similar high achieving student bodies but the environments are way, way different. West Coast is a huge “multiversity,” to use Clark Kerr’s terminology. East Coast is smaller and more intimate. The same type of students tend to be attracted to campus conservative politics (mainly white, fairly comfortable folks) but the environments encourage different expressions.
You might say that there are two habituses at work – the provocateur and the intellectual. In a big impersonal campus, it is very, very hard to project your voice except in a confrontational manner. Thus, West Coast conservative students rely on sensational tactics, like the affirmative action bake sale. Also, West Coast students feel little attachment to the community. Little is lost by being aggressive. In contrast, East Coast encourages all students to feel as if they have a place, even if they admit that most professors are fairly liberal. They don’t feel alienated or embattled, so they feel little hostility toward the campus. Thus, they resort to more intellectual forms of expression that don’t rely on shocking people. The book also has a nice discussion of the larger field of conservative politics and how that affects campus protest.
Overall, a solid book and one that’s essential to studies of campus politics. If I were to criticize the book, I think I’d think a little more about the differences between conservative students and the broader field of conservative intellectuals. This does get mentioned in a few passages that allude to Steve Teles’ book on conservative legal academia, which we discussed in detail on this blog. The issue is that the world of conservative intellectuals that have influence is more defined by the East Coast intellectual types than the affirmative action shock jocks at West Coast. The consequences are important as we’ve seen with the Tea Party mobilization. Conservative grass roots politics is now dominated by shock jocks, not the well coiffed policy wonks of the Heritage Foundation. More needs to be said about the boundary and links between campus conservatives and this broader network of think tanks, interest groups, and electoral organizations.
The last comment I’ll make is about the inherent irony of much of this stuff. It can be argued that conservative politics at its best is incremental, stodgy, and resistant to radicalism – that it is essentially bourgeois. It retains the hard won lessons of tradition and skepticism of utopia. Then there is some irony that the cultural style of contemporary conservatives is at odds with this ideal. It is loud and obnoxious. It mocks one of society’s most ancient and enduring institutions, the university system, which has nurtured Western culture since the end of the Middle Ages. It is skeptical and hostile toward those who are cultured and knowledgeable. It can’t disentangle potentially insightful criticisms of specific intellectual currents from a loathing of the academic system itself. Perhaps the ultimate lesson is that beneath the talk of tradition and values, there is a rank populism that leaves one ultimately disappointed.
On June 3, 2011, I said that I was ending the grad skool rulz. Totally wrong. People keep asking me about things I hadn’t thought of before, so I kept on writing! This week’s question: What should I get from the campus visit after I have been accepted to a PhD program?
Usually, the campus visit is a brief one or two day trip where you show up to campus and meet with current graduate students and faculty. The visits vary a great deal in quality. For example, when I visited Chicago, I had to pay my own way and it was very hard to make appointments to meet people. During one appointment, I asked about graduation rates and this senior professor simply said that such statistics weren’t important. Now you understand the genesis of the Rulz. In contrast, Indiana has one of the most highly organized graduate programs around. Students who visit meet with professors, grad students, and they go to seminars. And of course, we have a great record of placement and publication with students that we freely talk about.
So what should you expect or demand from your visit?
- Ask for money. A lot of graduate programs will provide funds for air fare and the like.
- Accommodations – Don’t pay for hotels; most programs will have a current student host you.
- It is normal for faculty to meet with potential students. If no one is around to meet you, it is a bad sign.
- Meet with the graduate chair. At the very least, you can get some information on the mechanics of the program. Also, ask for placement and graduation rates.
- Meet with current graduate students. Often there is a lunch attended only by students. The idea is that students can candidly talk about their experiences.
- Attend a class or seminar.
- Meet with senior faculty, the folks who mentor most graduate students. Ask them about current research and current students.
Now, how should you evaluate your visit? A few rules of thumb:
- You can safely ignore about 90% of what people say. The faculty all say that their program is the best, even if students fail to get jobs. It’s rare that graduate students openly admit how much they hate life and how their friends in older cohorts are being weeded out and failing to get jobs.
- You should closely pay attention to what people actually do. Did the faculty take the time to meet with graduate students, many of whom will not matriculate? If so, it shows commitment. Can your graduate student host point to a master’s paper or dissertation chapter that was promptly read? Or a paper that the faculty helped him/her publish?
- Pay very close attention to the total number of people that the program places in an average year. My rule of thumb is that a program is effective if the number of tenure track placements per year equals about 50% of the incoming cohort size. The reason is that roughly 50% of people won’t graduate for a variety of reasons. The issue is what happens to the 50% who manage to finish.
- It is a bad sign if the faculty will only talk about the one guy who made it to an Ivy League position. It is a good sign if they can point to multiple students who made it to R1s, liberal arts colleges, and good regional universities. Don’t look at a biased sample.
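The placement rule of thumb above is easy to make concrete. A small sketch, with hypothetical cohort and placement numbers:

```python
# Toy check of the rule of thumb: a program is doing well if yearly
# tenure-track placements roughly equal 50% of the incoming cohort,
# since about half of entrants won't finish. Numbers are hypothetical.
def placement_ok(cohort_size, tt_jobs_per_year):
    """True if tenure-track placements per year >= half the incoming cohort."""
    return tt_jobs_per_year >= 0.5 * cohort_size

good = placement_ok(12, 6)   # 6 placements from a 12-person cohort
bad = placement_ok(20, 4)    # 4 from 20: most finishers aren't landing jobs
```

In other words, a 20-person cohort placing only 4 people a year is failing even the finishers, not just the half who leave.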
Consider this an open thread on grad school visits. And of course – buy the GRAD SKOOL RULZ!!!
Howard Aldrich, a man who needs no introduction, has written a new book about entrepreneurship and evolutionary theory. He’s also written a blog post at the publisher’s website discussing some of the book’s key insights and detailing his own intellectual journey as a sociologist who has embraced entrepreneurship as a topic of study. It’s really interesting. Everyone should go read his blog post.
In addition to providing a really fascinating look into the mind of Howard Aldrich, in his post he offers some sage advice to young organizational scholars. It’s such good advice I thought I’d cross-post it here:
- Think in terms of long-term projects, especially if you are studying dynamic processes that take some time to unfold. Cross-sectional studies provide snapshots of the way things are at a moment in time, but most contemporary theorizing concerns mechanisms and emergent processes that must be studied over time. Many of my projects involved data collection that extended over 4 to 6 years, with analysis and writing requiring several more years. Luckily, I had a portfolio of projects, some of which came to fruition earlier than others and thus I never lacked things to do!
- Think in terms of cumulative work that builds one paper on top of another, as a project matures over its planned life. In this age of “salami-publishing” – chopping bigger projects into smaller chunks and then publishing the smaller bits as independent papers – scholars often forget that such behavior cannot go undetected. Independent observers of someone’s career take notice of suboptimal publishing patterns and are likely to discount a project’s worth, if its contributions are diluted by being parceled out in dribs and drabs. Instead, focus on establishing theoretical and empirical continuity across your work.
- Pay attention to what others are doing and find ways to link your work to theirs. With tools such as Google Scholar, citation alerts, table of content alerts, and other technologically-enhanced ways of keeping track of work in your field, you can enhance the impact of your own contributions by showing how it relates to the emerging state of the art.
- Most research projects in organization and management studies are multi-disciplinary, especially in entrepreneurship. Keep up with key work in other disciplines working on the same or similar issues, attend conferences, read their journals, and seek other people with diverse competencies to work with you on your long-term projects.
I really like his second point about the cumulative contribution of your work. One of the travesties of contemporary scholarly contribution metrics is that we have substituted quantity of publications for cumulative contribution. We assume that somebody with 5-6 publications in “A” journals has made a contribution, irrespective of the content of that work or how it aggregates into larger themes. Personally, I’d like to see more young scholars who are actively laying out a theoretical and empirical agenda that builds on itself over time and who think less about how they can get their next AMJ paper published. Of course, making that a winning strategy is best done in a context where tenure committees actually read the work and make thoughtful assessments of quality rather than just counting lines on a CV.
Last week, a group of Africana faculty at Penn wrote a column called “Guess Who’s (Not) Coming to Dinner?” The issue is that Penn’s administration has not appointed a person of color to an administrative position in a long time. They will no longer attend diversity events sponsored by Penn President Amy Gutmann:
With the term for the dean of the School of Arts and Sciences soon ending and the newly appointed provost on hand, President Gutmann was asked during a heated exchange why she has never appointed a person of color to the position of dean during her long tenure at Penn.
Her response was that she would not just bring in someone who is not qualified, a comment implying that none of the people in the room were qualified to serve in these positions, even though many of them serve in administrative capacities in departments and centers. In her closing remarks, President Gutmann reiterated her dedication to diversity within Penn’s administration, admitting that “a show beats a tell.”
A few comments: I think the Penn Africana faculty have a good point. Leadership is built on networks. If you know anything about academia, most folks reach positions of leadership because they have been helped by colleagues. The fact that either (a) people of color did not apply for deanships or (b) people of color do not have the track record speaks to the fact that people around Penn have simply not reached out to faculty of color. People need to know that they will be seriously considered if they apply. Similarly, people need to be considered for “starter” administrative jobs, like center director positions or department chairs. These don’t just appear and they often aren’t announced. You need the networks to make it happen. The fact that Penn has let this slide for this long speaks for itself.
Over a week ago, a colleague called to let me know that our advisor, Harvard Prof. J. Richard Hackman, had passed. For months, I knew that this news would eventually come, but it’s still painful to accept. I will miss hearing Richard’s booming voice, having my eyeglasses crushed to my face from a bear hug (Richard was well over 6 feet tall), or being gleefully gifted with a funny hand-written note imparting his sage advice on a matter.
Richard was a greatly respected work redesign and teams researcher. At Harvard, his classes included a regularly offered and popular (despite its “early” morning time slot) course on teamwork. For those undergraduate and graduate students who have been lucky enough to take Richard’s course on teams, the course interweaves concept and practice as students must work in teams, something that most of us get very little practice with outside of organized sports or music.
In July 2012, Richard emailed several of his former teaching fellows asking us to join him in Cambridge and help him rework this course. On short notice, we assembled at the top floor of William James Hall and went over the materials, with Richard expertly leading us as a team, with clearly designated boundaries (those of us assembled for the task), a compelling direction (revising the material to attract students across disciplines), enabling structure (norms that valued contributions of team members, no matter their place in the academic hierarchy), and a supportive context (reward = tasty food, an incentive that always works on former graduate students, and good fellowship).
During this last meeting, Richard asked us about how we thought his course on teamwork could most impact individuals. I opined that his biggest impact wouldn’t be through just the students who took his course, but via those of us who would continue to teach teamwork and conduct research in other settings. This question may have been Richard’s gentle way of telling us that he was passing on the baton.
Here are several ways that I think Richard’s legacy lives on.
Read the rest of this entry »