Archive for the ‘academia’ Category
Raj Chetty, Emmanuel Saez, and László Sándor have an NBER paper on an experiment to improve journal review times. The experiment? Pay reviewers:
We evaluate policies to increase prosocial behavior using a field experiment with 1,500 referees at the Journal of Public Economics. We randomly assign referees to four groups: a control group with a six week deadline to submit a referee report, a group with a four week deadline, a cash incentive group rewarded with $100 for meeting the four week deadline, and a social incentive group in which referees were told that their turnaround times would be publicly posted. We obtain four sets of results. First, shorter deadlines reduce the time referees take to submit reports substantially. Second, cash incentives significantly improve speed, especially in the week before the deadline. Cash payments do not crowd out intrinsic motivation: after the cash treatment ends, referees who received cash incentives are no slower than those in the four-week deadline group. Third, social incentives have smaller but significant effects on review times and are especially effective among tenured professors, who are less sensitive to deadlines and cash incentives. Fourth, all the treatments have little or no effect on agreement rates, quality of reports, or review times at other journals. We conclude that small changes in journals’ policies could substantially expedite peer review at little cost. More generally, price incentives, nudges, and social pressure are effective and complementary methods of increasing prosocial behavior.
Love it. I wonder if the ASA Pub committee noticed?
When I wrote about universities and the ethics of donations the other day, the takeaway was that we should 1) protect academic freedom and 2) avoid resource dependence but 3) not give donors political litmus tests. Then I threw in a line at the end about the biggest funder of all: the government.
Of course, there’s a simpler solution to this – full public funding for public universities, so that we can all Just Say No to any money with strings attached. Given our current financial reality, though, I’m curious if others have ideas about where to draw the line.
A couple of people have rightly jumped on this — August in the comments, and Graham Peterson at his own blog. They point out, fairly enough, that being government-funded just makes you dependent on bureaucrats and their agendas rather than the Kochs or Bill Gates or whomever.
I’m very attuned to this possibility. During the Cold War, the government was often the biggest threat to academic freedom. There were government-sponsored Communist witch-hunts (see here for a gripping account of threats made to Robert Bellah at Harvard). There was classified research with lots of academic-freedom-stifling strings attached (see, e.g., Kelly Moore’s Disrupting Science). Government-sponsored military research was, in fact, one of the things ripping universities apart in the late 1960s.
And yet. I still prefer the government money.
The Koch brothers are, of course, a favorite liberal bugaboo. And while they bankroll a wide range of right-wing institutions, they have lately shifted their focus to the world of higher education. Most recently, the Kochs made the news when UNCF (formerly the United Negro College Fund) accepted a $25 million grant to provide scholarships to students interested in entrepreneurship, economics, and innovation—a decision that was followed by the union AFSCME cutting its own ties to UNCF.
Now, UNCF is a nonprofit, not a university. But the Kochs support universities as well. George Mason is, perhaps unsurprisingly, the largest recipient of Koch largesse. Overall, in 2012, Koch foundations gave $12.1 million to 163 U.S. universities and colleges.
On the one hand, this is small potatoes. A single hedge fund manager gave Harvard $150 million this year. On the other, it raises important questions about when colleges should say no to money.
Nearly everyone agrees that letters of recommendation are a lousy system that provides little information, rewards high-status connections, and offers lots of opportunities for recommenders to inadvertently damn their recommendees (see past orgtheory discussions here, here, and here). Yet for now, at least, we’re stuck with them.
Job season is approaching, and grad students are getting ready to request letters. If you’re at a well-financed program, your department will have an administrative person responsible for making sure your letters reach their many destinations. If you’re at a program like, well, mine, your letter-writers will find themselves sending 40-60 (or more) letters in a variety of formats for each of several students on the market. Needless to say, this is a big administrative pain.
A graduate student about to go on the market (okay, it was the awesome Josh McCabe, hire him!) asked this week about the best way to manage all these letter requests. Here are my thoughts:
1) If you have the money, using Interfolio would be simplest, safest, and easiest for your letter-writers. They can upload letter(s) once and you can send them wherever, and whenever, you want. This is what I did when I applied for a handful of fellowships last year (the political economy of letters doesn’t end once you have a job). But Interfolio costs $6 a pop, and I’m not comfortable asking grad students to pay that for each of those 40-60 applications, possibly over multiple years.
2) Otherwise, there are a couple of principles to remember, beyond the general stuff that you’d do even if your department has administrative staff to handle letters.
- Ask what you can do to facilitate the process. Different people like different things. Personally, I don’t like complicated job spreadsheets, which can be idiosyncratic and hard to read. What I like is a list of basic info in the email — contact person, email/website/snail mail address, deadline (in bold!), link to the ad, a phrase or two on the job (e.g. “organizations, quant preferred”), and whether you’d like the research or teaching version of your letter — sorted either by deadline or by type of submission (website, email, or hard copy). Others may differ.
- Batch, batch, batch. There is nothing worse than receiving those 50 requests one at a time. Aim to send your requests once a month, maybe once every two weeks during the busiest season. Yes, there will be jobs posted that may make this impossible sometimes, but to the extent possible, group your requests.
- Manage up. Figure out how your letter-writers work, and what you need to do to stay on top of them. If they are totally organized, you may not need to follow up. Personally, I just reply with “Done” when I’ve sent a batch out so the student knows when it’s been taken care of. If you don’t completely trust your recommenders to be on top of deadlines, you may want to mention in your request that you’ll check in a week before the next deadline to confirm. In some cases, online application systems will tell you whether letters have been submitted, which will allow you to avoid excess emails. But this is not always possible, and it’s totally reasonable to ask for confirmation that letters have actually been sent.
- Stay organized yourself. There are a lot of bits and pieces to manage on the job market. Keep track of what you’ve done and what you need to do, so you aren’t inadvertently making multiple requests that a letter be sent to the same place.
Sending letters, while a pain, is part of the job of faculty and almost everyone recognizes that. Perhaps we’ll eventually get a centralized system that will eliminate this problem, or, even better, abandon letters. But until then, there are better and worse ways to manage the process. (Also, are we the only department out there where faculty send all the letters themselves, or is this fairly common?)
This coming Fall, I will be one of the program chairs of Soc Info 2014, the 6th annual conference in social informatics. It will be held in Barcelona, Spain at the Yahoo regional headquarters. It’s a great opportunity to meet the people on the cutting edge of computer science and social science. We’ve already got a stunning lineup of keynote speakers – Lada Adamic, Duncan Watts, Michael Macy, and Daniele Quercia. Submit a paper or attend!
Every once in a while, you get a free lunch. About a year and a half ago, sociology got a small free lunch. It was announced that the MCAT would now include sociology material. Awesome.
But there is a seriously huge free lunch coming up – the rise of “big data.” Ignore the naysayers. Ignore the hand-wringers who worry that Facebook is hurting our feelings. Look at the big picture. Silicon Valley has created a new social world that requires analysis. And not just the generic stuff you get from your local management consultant. They need analysis from people who understand human behavior and can build arguments. They don’t want data mining. They want theory and real research designs.
Consider this tweet from Elise Hu, a Washington reporter, who quoted Joi Ito, director of the MIT Media Lab:
In other words, the world of computer science has stumbled into social science. As usual, many think that social science is garbage, but that is slowly changing. Many social scientists are being hired at Google and Facebook. Others are striking out on their own. Many within the social sciences are using computer science.
The big message? This is a huge opportunity. It can change the discipline – but only if we constructively interact with the computer science discipline. My recommendations:
- Reach out to your colleagues in computer science. Run a seminar or write a grant.
- Reach out to computer science students. Create courses for them, invite them to be on projects.
- Treat “big data” as we would other data. It has strengths and weaknesses, but in being critical we can use it in the correct way and raise the level of discussion.
- Submit to computer science conferences. I’ll be honest, computer scientists are not statisticians. There are a lot of fascinating areas of computer science where the stats are very simple or the ideas are basic. We can add a lot of value.
The benefit? CS will get an infusion of good ideas to work through. Sociology will come into contact with some really cool people, create a bigger audience, and get more resources. We can also get answers to some great questions.
So don’t screw it up, people. This doesn’t happen very often.
Last week, I wrote about the strengths and weaknesses of the Indiana style of graduate training. In summary, Indiana succeeds by creating a very structured type of graduate education where we hammer people into the mainstream. Formally, we have a zillion requirements and informally, we do lots of 1-1 work with students. I also discussed the limits of the system. Specifically, I wrote about “Foucault kids,” graduate students who are aiming for unconventional careers. In the comments, someone asked for clarification. Why exactly would the Indiana model not work for these students?
First, let’s start with a discussion of the Foucault kids. In the way that I used it, I roughly mean ambitious graduate students who are doing work that crosses or combines various areas of study. Foucault, of course, was a Foucault kid. His training was in philosophy, but he worked with Georges Canguilhem, who did work on the philosophy of science. In his career, Foucault did this mutant form of work that combined philosophy, history of ideas, and other stuff. Similarly, the Foucault kid is the young scholar who sees himself as some awesome sui generis scholar who breaks boundaries.
Who is a Foucault kid? Not you, probably. In fact, during my graduate career in Chicago, I only met two genuine Foucault kids, this guy (who combined anthropology, ethnography, and hermeneutics) and, I think, one of his students. Later, I’ve seen them here and there, mainly at other elite programs in sociology. You also see them in idiosyncratic programs, like the Committee on Social Thought. But still, overall, they’re rare. I’ve met lots of brilliant people, but they exist mainly within the confines of sociology or some other discipline.
So, what sort of training does such a person need? It is unclear to me since we have little data. Many Foucault kids end up flailing: they can’t complete their dissertations, and you never hear from them. The license to “be great” is often interpreted as a demand for perfectionism, or endless procrastination, or being so weird that no one will take them seriously.
I can offer two hypotheses about what might work for a Foucault kid: (a) no training, just let them wander and demand a dissertation at the end, or (b) demand high quality training but allow weird or unusual combinations of fields. The Indiana model doesn’t do either (a) or (b) well. In fact, our model is the opposite: we require people to be strongly grounded in a disciplinary mainstream. I don’t think it would hurt a Foucault kid, but it would probably frustrate them and probably waste their time. In the end, I think it mainly comes down to having just a few faculty members who can tolerate the weirdness of the Foucault kid and teach them the academic survival skills so that they won’t become the legendary 12th-year grad student whose dissertation went uncompleted and unread.
Yesterday, I described how Indiana sociology conducts graduate education. It’s a really good system. In fact, when prospective students visit, I describe the system and then say: “Look, if you get an offer from Princeton, take it! They have prestige, money, and an amazing placement record. But if you don’t have an offer from a place like that, you are probably making a mistake if you turn us down.”
Still, nothing is perfect and it is worth talking about the drawbacks of the Indiana model. First, the system only works because most faculty do research where it is easy to get people involved. We do a lot of survey research, interviews, health, education, and public opinion. Thus, it’s probably not the best place for types of research that aren’t large-team based, like ethnography or comparative-historical work. Our students do well in those areas, but there are other better options out there.
Second, we don’t do well with what I call the “Foucault” kids. These are students who have some insanely interesting project that spans disciplines and is very sui generis. These students need less structure, not more of it. They don’t need all the stuff that IU provides. All they need is one or two older scholars who can give some honest feedback and make sure they don’t take 12 years to finish. I encountered one such student during a visit. This student unleashed this insane ethnographic/multi-site study of health on me and I said: “You are clearly very good. Just go to Harvard. Tell Bob I said hello.”
Mind you, a lot of people think they are Foucault, but they’re not. IU is built for people who want to do high quality work within the confines of normal social science – which means most of you! For a few people, or people in specialties where the model doesn’t fit, it may not be appropriate.
I am often asked: how does Indiana consistently place people so well? Just to be clear, in terms of elite placement, we do as well as other programs of our rank. Our rank floats in the 10-15 range, and we can do a good R1 placement every other year. Every few years we place at or near the top. What is amazing is that we consistently place all of our students, not just the “stars.” We place students that routinely get discarded at other programs. How does that happen?
First, it starts with the faculty. In hiring at both the junior and senior levels, we have a preference for people who are committed to graduate education. Also, in terms of culture, our faculty have a very different attitude towards graduate work. We don’t work on the “star system” where a few people get most of the attention. We believe that with the right training, we can help most graduate students start a career in either teaching or research. Finally, in terms of the faculty, we also tend to attract people who are productive and collaborative. Not much dead wood.
Second, we choose graduate students very carefully. Sure, we’ll hand out acceptance letters to a few “stars” of the market every year. That’s because, like most graduate programs, we take GPA and GRE scores very seriously. But the difference is that Indiana doesn’t mark you down because you didn’t go to the “right” school. We’ve taken people from regional state schools, small liberal arts colleges, lower ranked MA programs, and other places. We read the entire application, not just the name of the BA school or the letter of recommendation. We’ve picked up some fantastic people who’ve gone on to great careers by looking for diamonds in the rough.
Third, we have an insane amount of structure. It doesn’t work for everyone, but most people do well in a system that makes you jump through a lot of hoops. Like most soc programs, we have theory and methods in the first year. But we also have a summer research program, required writing workshops, a required minor, a required teaching workshop for TAs, and a bunch of other stuff. Annoying? Yes. But do people have the OLS model hammered into their brains at the end of the sequence? You bet.
Fourth, we have a deep “culture.” Our expectation is that all students should be able to complete the PhD. We also support students on multiple career tracks. We even have a special program for people interested in teaching intensive institutions. And nearly all faculty collaborate and believe in intensive 1 on 1 interaction. More than one visiting scholar has been shocked by this system.
Fifth, we have reasonable expectations and a high level of professionalism. We don’t pretend MA or PhD theses are masterpieces. We want them to be good and we want them to be done. We also push people, but we (mostly!) don’t yell at people or treat grad students like children. We tell you what to do and then we try to help you. We’ll get back to you in a few weeks on your dissertation, not a few semesters.
Sixth is funding. While far from perfect, we’ve developed a system where most students can rely on 4-5 years of funding. Then, we have various mechanisms for helping most students. The pay is low, but we don’t play games. You get funding. You will get help. No games.
The issue with many programs is that they drop the ball on one or more of these issues. During my time at Chicago, for example, they had a nearly structureless program and very poor funding. Other programs will chase famous faculty without considering how well they place students. It’s too easy to slide into the star system of graduate training. Considering all that, it’s a real testament to this department that we can go head to head with programs that have a fancier brand and a much bigger budget.
Org theorists know a thing or two about what happens when you rate things. People change their behavior. In this case, that’s the point — Arne Duncan et al. are hoping that the ratings will create incentives for colleges to graduate more students with less debt and higher post-graduation incomes.
Now, those are obviously not objectionable goals. There are some clear challenges in adjusting for the expected performance of different student bodies, and worries about disincentives to go into low-paying fields like teaching or social work, but who doesn’t want college to be more affordable, somehow?*
The big problem is the outcome that is missing in there: students who have learned things. If you create a system that measures access, completion, debt, and eventual income, and it has any teeth at all, you will get colleges that aim for those things. Unfortunately, those things have a limited relationship to actual learning. Where one conflicts with the other, learning will lose.
Of course, I’m kind of hesitant to say that, because heaven knows what would happen if we started trying to measure learning outcomes at the federal level. No Young Adult Left Behind, I guess. Coursera can sell us the curriculum.
* Another problem worth mentioning is that many adults without degrees don’t see graduation rates and average student debt levels as relevant to their college decision — they think it depends on them, not the school.
Last week the Wall Street Journal reported that UCLA’s Anderson Graduate School of Management issued an internal report that found that the school is “inhospitable to women faculty.” The report contains both comparative data and anecdotal evidence suggesting that women faculty have experienced impediments to advancement.
Women made up 20% of tenure-track faculty at Anderson and 14.3% of those with tenure in the 2012-2013 academic year, including Dr. Olian, according to school figures. By comparison, an analysis of 16 peer institutions—including the business schools at the University of Virginia, Stanford University and University of Michigan—found that, on average, about 30% of tenure-track and 19.5% of tenured faculty were women in the 2012-2013 year…
The internal report states that women have high rates of job satisfaction when beginning careers at the school, but face a “lack of respect” regarding their work and “unevenly applied” standards on decisions about pay and promotions.
Twice in the past three years, the university’s governing academic body took the relatively rare step of overruling Dr. Olian, who had recommended against the promotion of one woman and against giving tenure to another, according to four Anderson professors.
In one case, the university found that policies allowing faculty to take parental leave without falling behind on the tenure track had been incorrectly applied to the candidate. In that same period, they said, a male candidate for promotion passed through the Anderson review, but didn’t get clearance from the university.
Even though UCLA’s business school stands out, the numbers reported in the article show that gender inequity plagues most top business schools. In 2010, 45% of tenure-line faculty in psychology departments were women. In sociology, more than 50% of assistant professors are women, and roughly half of associate professors are women. Women in psychology and sociology are doing much better in attaining tenured positions than are women in business schools.
So why are women not more represented on business school faculty? One possible reason is that business schools are still dominated and/or highly influenced by economics, in which the gender composition is heavily slanted toward men. According to a Wall Street Journal article from last year, women only get 32% of PhDs in economics (compared to 58% in the other social sciences).
In 2012, women accounted for 28.3% of untenured assistant professors, 40% of untenured associate professors, 21.6% of tenured associate professors and just 11.6% of full tenured professors.
In other words, women in economics are more likely to end up in untenured adjunct positions than they are in tenured faculty positions. This gender inequity in economics seeps into business schools since this is the discipline that most influences our research and teaching.
So I was toying around with the “future of org theory” line of thought, and started thinking about the past of org theory instead, because that’s so much easier.
In my mind ASQ straddles sociology and business schools, or at least has, historically. I thought that ASQ used to publish a fair number of sociologists and now publishes fewer. I figured that was part of the decline-of-org-theory-in-soc story.
But when I took a look, it turned out (based on a limited, totally nonscientific sample) I had the story totally wrong. There were hardly any sociologists publishing in ASQ 20 years ago, either.
A little data, based on the author bio pages: The last four issues of ASQ had, collectively, 45 authors. One, Olav Sorenson, has a courtesy appointment in sociology. Three — Sorenson, Amanda Sharkey, and Brayden King — have soc PhDs but B-school appointments. That’s it for sociologists. Not all the rest are at B-schools, but they’re not in soc departments either.
But. Ten years ago, in 2003-04, ASQ had 34 authors. Not one was appointed solely to a soc department. Two had a joint appointment in sociology and something else, and one a courtesy appointment in soc. Six (including the joint/courtesy appointments) held sociology PhDs.
Okay, I thought. I’m just not going far back enough. The decline of sociologists took place earlier, maybe in the late 90s. So I looked at 1993-94.
Nope. No dice. 36 authors: one with a sociology appointment, one with a joint appointment in sociology. Three soc PhDs.
That’s where I stopped, since it was getting time-consuming, though I’m curious if another decade would have made a difference.
I suppose on the one hand this shouldn’t be so surprising. I mean, “Administrative Science” kind of gives it away: not a sociology journal. But why would I have had the impression that there used to be more sociologists publishing in ASQ? Has org theory as done in business schools moved further from sociology in other ways?
Wired recently produced a nifty graphic that showed where the major tech firms recruit their employees. The messages are obvious:
- Physical proximity – this is West Coast/Canada intensive.
- IBM is the exception, in that it recruits from India. But still, it recruits from the big Indian engineering programs.
The other message that I get is from the absences. 1. The Midwest engineering powerhouses (Ohio, Kansas, Michigan, Illinois) are underrepresented due to geography. Path dependence is cruel. 2. The Ivy League and elite liberal arts are sparsely represented, probably due to a lot of recruitment by finance and smaller engineering departments. So in terms of the upper strata of the economy, West Coast is for innovation, East Coast is elite training, and the Midwest is for building cars and stuff.
This year, I’ve been on sabbatical, removed from the ordinary cares of teaching and departmental affairs. But in one short month, I’ll be returning to Albany and to “normal” life. I’ll also be returning to a three-year term as grad director.
When I was a grad student, I was only vaguely aware that we had a grad director. I certainly couldn’t have told you what the grad director does.
My sense of that is better now, but I still wonder what my goals should be as I take on this role, and how I should manage the inevitable challenges of the job.
I can imagine two basic strategies, probably not mutually exclusive. One is bottom-up: Talk to lots of grad students, figure out what they see as their problems and issues, and do what I can to solve or ameliorate those.
The other is top-down: Think about our strengths and weaknesses as a department, what our niche is both in sociology and within the university, and where we want our grad program to be in five years, and develop a plan from there.
Okay, maybe there’s a third — muddle through — but let’s assume I’m going to be a little more proactive. Also, let’s assume I have roughly zero control over financial resources. And I already know that one of my priorities is going to be better data collection on student trajectory and placement, a project that the current grad director has started but that we still have way too little information on.
What do you think? If you’ve been a grad director, what accomplishments are you proud of, and what were the challenges? If you’re a grad student, what does your department do well? Or what do you wish your department would do differently, and think a grad director might be able to change?
This is a month old, so I know I’m running the risk that everyone saw it the first time around. But I just ran into it, and I’m a sucker for a good time use survey — especially when it’s about professors.
Boise State University anthropologist John Ziker has spent much of his career studying the cultural practices of indigenous people in Siberia (sample paper title: “‘Horseradish Is No Sweeter than Turnips’: Entitlements and Sustainability in the Taimyr Autonomous Region, Northern Russia”). Now chair of his department, he’s started doing fieldwork on something a bit closer to home: the practices of academics at Boise State.
His research suggests that his colleagues work fairly long hours (61 hours a week, on average), and that they spend, on average, only about 35 percent of their work weeks teaching.
The rest of the piece rounds up other surveys of faculty time use, showing work weeks in the 54-to-61-hour range. (Although they missed Jerry Jacobs’ work on this.) I was sad, although not surprised, to learn that I have five more hours a week of service to look forward to now that I have tenure. (I’ve been on sabbatical this year, so it hasn’t hit yet.)
I was also reminded of being on the job market with a one-year-old and not enough childcare. I had to be very efficient to squeeze out even a 40-hour week, and I started tracking my work hours on a calendar printout. I’m pretty sure that in my sleep-deprived, application-stressed state, I sent one of those sheets to the University of British Columbia. I hope the search committee enjoyed the peek into the nuts and bolts of my work life.
The future of organizational sociology may be uncertain, but organizational thinking has diffused widely in sociology. Just look at the most recent ASR. (Less so the current AJS, but we can cut them some slack since Fabio coauthored their one organizational piece.)
But how did we get here? In the comments, I suggested one reason. Org theory’s main research programs — institutional theory, networks, field theory, population ecology — aren’t about “organizations” anymore and as productive as those may have been, they don’t encourage the reproduction of “organizations” as a distinct subfield. It turns out Brayden and Teppo wrote a whole article on this with David Whetten — very much worth a read.
There’s another factor, too, though, that I’m surprised hasn’t come up yet: economic sociology. Economic sociology usually dates itself to Mark Granovetter’s 1985 article on embeddedness, but no one called themselves an economic sociologist circa 1990. They were, mostly, org theorists.
(Viviana Zelizer has a great piece that talks about how, to her surprise, she found her work being redefined as economic sociology. Fligstein and Dauter’s 2007 ARS piece made a similar move, calling performativity a branch of economic sociology — which must have come as a shock to Michel Callon.)
The earliest you can reasonably call econ soc a subfield is 1994, when Smelser and Swedberg published the first Handbook. And really, the year 2000, when econ soc became an ASA section-in-formation, is a more appropriate date.
Econ soc channeled a lot of the intellectual energy that had been focused on organizations (broadly speaking) in a slightly different direction. The section’s organizing committee included Nicole Woolsey Biggart, Neil Fligstein, Mark Granovetter, Brian Uzzi, Fernanda Wanderley, and Harrison White. Most ASA sections have grown in the last decade. But OOW has been flat, while Econ Soc has grown by 55%.
One result was that a new generation of students who might have studied organizations instead did econ soc. I took my comp exam in organizations in 2001 partly because no one had ever taken one in econ soc. Two years later, that would not have been the case.
The effect is that many sociologists who would have studied organizations ended up studying econ soc instead. The ones who stayed in orgs were more likely to be B-school oriented and to take jobs outside soc departments, and that means that today there are few younger scholars who see themselves as primarily organizational sociologists.
Now, maybe this is just how disciplines evolve. I do consider myself an economic sociologist too, and I think the emergence of econ soc has been enormously generative for the discipline. But sociology is not the most cumulative of disciplines. And there is lots of important stuff that is taught in orgs classes but not in econ soc classes. My fear is that we just lose all that stuff as students trained in other, orgs-influenced subfields stop learning it. Then we’ll have to wait another 20 or 30 years for another generation of scholars to “bring the organization back in.”
A couple of weeks ago, Brayden asked about the growing disconnect between org theory and sociology. He argued that 1) sociology is now heavily focused on inequality, and organization theory isn’t, and 2) org theory has become abstract and detached from a Selznick-style effort to make organizations better. Lively discussion ensued.
I have also felt this disconnect, but from the other side of the coin. My training is as an organizational sociologist, and I was hired to teach organizations in a sociology department. My work is mostly framed in terms of organizational fields and institutional logics. But, increasingly, it seems I have trouble finding much to engage with in org theory. (The field, not the blog.)
For a while I thought this was just a natural evolution of interests. My current research, which looks at how economists and the intellectual tools of economics gained a central place in U.S. policy making, doesn’t sound very organizational at all.
But what this misses is how invaluable an organizational approach is for thinking about this problem. For example, if a group of experts wants to influence policy, one way they can try is by providing advice to policymakers—at Congressional hearings, through advisory groups, and so on. But policymakers are bombarded by information, and use it mostly to justify decisions made for other reasons.
For experts, a less direct but more effective route is to establish organizational footholds. This can mean the creation of new offices run by your type of expert. So the U.S. Antitrust Division created an Economic Policy Office in the 1970s that became a stronghold for economists. It provided a base for some of the early champions of deregulation, and it gained control over technical decisions affecting which corporate behaviors the Antitrust Division would challenge.
Or it can mean creating entirely new organizations in which you play a major role. Economists were key players in the new schools of public policy founded around 1970, which looked very different from the older public administration departments. They taught a different style of thinking that then made its way into Washington via graduates of those programs.
Thinking organizationally is also important to understanding how such footholds are sustained. The Congressional Budget Office, dominated by economists, was created in 1975. Its mandate wasn’t originally so clear. To become permanent and influential, it had to figure out how to provide services that members of Congress from both parties found useful—while continuing to maintain as much scope as possible to pursue the internally generated studies it cared about. Classic resource dependence.
My point here is that there are many research topics in sociology that don’t look so organizational, but that benefit from an organizational perspective. And lots of scholars who identify primarily with other subfields are drawing heavily on organizational theories and concepts. I think org theory is diffusing, not disappearing, in sociology. Why, and whether that’s a problem, are questions I’ll leave for another day.
Gordon Gee, former president of Ohio State, made more than $6 million in FY 2013, including the $1.5 million “release payment” he got in exchange for agreeing not to sue the university on his way out. Now the New York Times is reporting that the 25 public universities with the highest-paid presidents have seen greater increases in student debt and in the number of adjuncts than other publics.
I had a story for this, an organizational story. Ah, I thought. The NYT is implying that the high pay is taking away money that would be going to the other stuff. But really, this just reflects a new model for flagship publics: limit faculty costs (hence the adjuncts), increase the proportion of out-of-state students paying high tuition (hence the debt), and pursue corporate-style CEOs who can lead us into this brave new world (hence the salaries). The non-flagships can’t pursue this strategy successfully, so we’re seeing a divergence between the two groups.
But it turns out that the data don’t, in fact, support that story. They don’t really support any story. The NYT article is based on a report from the Institute for Policy Studies, a progressive think tank. And as I read it, things didn’t seem quite right. IPS reports on the number of adjunct faculty at these institutions, but I haven’t seen good data anywhere on the number of adjuncts. And administrative spending at publics increased 65% between FYs 2006 and 2012, as states slashed budgets?
Yeah, basically the IPS report is just a mess. IPEDS made some major redefinitions of terms in the middle — like who falls under “Part-time/Instruction, Research and Public Service,” what IPS is calling “Adjunct Labor” — so the years aren’t comparable with each other, and AFT appears to have mislabeled some of the years entirely. The University of Minnesota’s impressively fast PR office has a debunking report up, and while I haven’t checked all the numbers, my impression is that it’s right on target.
That doesn’t disprove my theory that there will be increasing divergence between the model for flagships and the path taken by the rest of the publics. And it’s entirely possible that universities with highly paid presidents have underwhelming outcomes in other areas. But if we’re going to argue over what to do about it, it would be nice if it were based on numbers that actually mean something.
Exciting news, dear readers! Something to look forward to as we barrel towards the end of the spring semester… SUNY Albany’s Elizabeth “Beth” P. Berman has agreed to guest blog for orgtheory, inspired by Brayden‘s and other posts about upcoming discussions of the waxing/waning relationship between sociology and orgtheory. Berman is the author of the multi-award-winning Creating the Market University: How Academic Science became an Economic Engine (Princeton University Press). See her other pubs here.
Two weeks ago, I suggested that some racial disparity in the professorial ranks might be due to low rates of co-authorship between the faculty of PhD programs and minority graduate students. The theory is that these collaborations provide a stream of publications that sustain people through the job market, mid-term review, and tenure review while their own research takes time to get started. Thus, if faculty aren’t offering co-authorship opportunities, it would create systemic differences in academic labor market outcomes.
The discussion focused on this theory. Now, I’d like to solicit personal experiences to help me assess the extent (or lack of extent) of this problem. I am encouraging PhD faculty and minority graduate students to discuss their co-authoring experiences. In my own case, I received two co-authorship offers from faculty in graduate school. One professor became quite ill; however, we’ve recently reconnected and I think we can restart the project. The other partner was highly contentious, so that didn’t work out. Thus, at the end of graduate school I had zero co-authored articles with faculty.
During my ten years at Indiana, I can remember making four co-authorship offers to minority students. One was successful. You can read a short article at the Journal of Social Structure about our work on data visualization, and there is a longer piece in the works. A second offer was followed up on, but it required methods expertise that I couldn’t offer, and the student reverted to normal survey-based research. So that’s my bad. A third got a little work before the student went into post-doc land and was never seen again, and a fourth offer simply never got a response.
It is unclear to me if my experience is typical or atypical. That is why I think it is important to have people provide their own experiences.
NPR has a wonderful interactive graph that allows you to see what percentage of students were majoring in a particular topic from 1970 to the present. A few lessons are clear. The NPR article notes the rise of business and the decline of education. Health is also on the rise. A few other trends are also worth noting. Nearly all the social sciences and humanities have declined in both relative and absolute numbers. Sociology is typical: in 1970, 34,000 graduates accounted for about 4% of the total; the number has since declined to 30,000, about 1.8% of the total.
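A quick back-of-the-envelope calculation from those two data points shows what the share decline really means: the total pool of graduates roughly doubled while sociology stagnated.

```python
# Infer the total number of graduates in each era from sociology's
# count and share (figures from the NPR data cited above).
soc_1970, share_1970 = 34_000, 0.04
soc_now, share_now = 30_000, 0.018

total_1970 = soc_1970 / share_1970   # roughly 850,000 graduates
total_now = soc_now / share_now      # roughly 1.67 million graduates

print(round(total_1970), round(total_now))
```

In other words, sociology shed a modest number of majors in absolute terms while the overall market for degrees nearly doubled around it.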
Some exceptions are easy to understand. Computer science increased eightfold in relative size. Other exceptions are puzzling. In a world of shrinking journalism, how are communications and journalism attracting more students? Is it driven by the modern media environment?
Concluding note: This is brutal news for graduate education in the arts and sciences. These programs have built themselves on a model of hiring many PhD students so they could teach massive lectures. At the big public flagships, where most doctoral education happens, the teaching side of this model is sustainable. The problem comes at graduation – the jobs simply aren’t there, as students have collectively shifted from the arts and sciences to vocational majors.
A number of analyses have shown that (a) college tuition has outpaced inflation, (b) administrators have increased in number and in cost, and (c) graduate student and faculty pay have remained flat. This isn’t to say that the only force behind higher tuition is administrative growth, but it’s certainly one important factor. The way this is normally interpreted is that you have greedy administrators who simply vote themselves raises which, in the absence of competition, go unrestrained.
Here is a slightly different framing. Increased college costs are a collective pay increase for faculty. How? It helps to realize that faculty salaries are fairly constrained. In the arts and sciences, salaries top out at about $95k for full professors at most colleges. Research profs can add about $10k; liberal arts can subtract $10k. Not bad, but still modest compared to top professionals in other fields. It also helps to realize that people start hitting the full professor rank in their forties, which means you could face 20-30 years of work with few pay increases in real terms.
The solution? Stop being a professor. Switch to a more fluid labor market for executives. Unlike professor jobs, your skills are portable and there is actual demand. Luckily, there has been a recent increase in college cash flows, so the budget is bigger. If you believe this story, then the escalation of tuition and costs is simply society’s way of paying more to people who used to get “stuck” at the full professor level. They’re just called associate deans now.
It’s that time of year, when students and faculty alike sport long faces about the end-of-year crunch of deadlines. Unfortunately, in academia, the lists of to-dos never disappear, as new requests and responsibilities sprout constantly like dandelions after a spring rain. One year-round task is planning, particularly for courses, to ensure that major areas, students’ needs, and/or faculty interests are covered.
Stanford magazine recently featured a tongue-in-cheek illustration of student schedules to go along with a professor’s reflection upon administrative demands for accountability:
Prof. Appelbaum recounts a previous employer’s (Mississippi State University) attempts to get him to publicly account for every moment of his time within an 8 a.m.-5 p.m. block:
I taught at Mississippi State University for three years before joining the Stanford faculty in 2000. I found MSU to be an organization dedicated to intercollegiate athletics, but sometimes less inspired when it came to academic and scholarly attainment. One of the things that irked me was the idea that you had to account for your time but not your achievement.
At the start of each semester we found blank schedules tacked to cork bulletin boards on our office doors. I filled mine out on my first day. I had an enormous course load, abundant office hours, copious committee meetings, rehearsals, a bevy of independent studies and a weekly faculty meeting. The generic sheet only accommodated Monday through Friday from 8 a.m. to 5 p.m. even though, as music professors, we had ensemble rehearsals in the evenings and on weekends, not to mention concerts by our students almost every night of the week. I figured that people understood that we were working more than 60 hours a week, so surely we wouldn’t need to account for every minute of our time.
Upon completion, my schedule looked very thick to me—and this was without any reference to time for composing music, recording CDs, writing articles, designing and constructing new instruments, practicing the piano, writing grants, attending professional conferences, giving guest lectures and all the other enterprises that characterize my research. It didn’t account for course design, class preparation or grading. It made no mention of the student electronic music studio I maintained. So, naïvely, I sat back with a feeling that this schedule conveyed that I was fully committed to my new job.
But my department chair immediately informed me that I must complete my schedule—there couldn’t be any empty spaces left on my sheet. I thought he was joking. With a look of compassionate embarrassment, a “welcome to Mississippi State University” glance, he apologized. Still, he insisted I fill in every blank lest the dean wander down the hall and see that someone had free time.
Shocked, and a bit miffed, I put “lunch” down from noon to 1 as I was told to do. (In Mississippi, everyone eats from noon to 1; if you enter a restaurant at 1:05 you can have your pick of any table.) Even so, I still had four empty spaces: 90 minutes on Monday, an hour on Tuesday, an hour on Thursday and a three-hour block on Friday. I filled them in as follows: Barry Manilow Research Project; Office Nap; Eating Bugs; and, for the three-hour block, Stapling.
My chair never asked me to “complete” my schedule again.
When thinking about increasing the presence of underrepresented minorities in the professoriate, I think of the pipeline process model. Roughly speaking, a pipeline process suggests that something happens in multiple stages. The immediate consequence of the model is that if you want X to happen, you have to make sure that all the stages that make X are working properly. In terms of faculty diversity, that means recruitment to graduate school, professional training, job placement, career development, and the tenure process.
A while ago I reviewed evidence from ASA reports showing that the pipeline is leaky. Graduate programs seem to recruit a fair number of minority students, and once training is complete, people seem to do well in getting jobs. Then there is a massive drop in the pipeline as people go up for promotion.
Now that I’ve been on the job for a while, I think the following is happening: the core faculty of PhD programs are not working with minority PhD students. They are admitting students, awarding degrees, and writing letters of recommendation, but they are not collaborating with students in ways that lead to publications and grants. In other words, most successful students work with faculty who “get them started” while their own research takes a little time to develop. My hypothesis is that minority PhD students are far less likely to co-author with faculty and less likely to receive an offer of co-authorship in the first place. I’d also hypothesize that this gap is largest for top-tier journal publications and small or non-existent in areas focused on race and ethnicity. In other words, when faculty build teams to shoot for that ASR or AJS publication, minority students come last for invitations, except in race & ethnicity areas. I don’t think this is conscious, but if it is happening, it would explain the drastic leaking in the later stages of the pipeline.
Am I right? If you are a faculty member at a top 20 or 30 program in your field, the test is simple. Look at your list of co-authors for your big papers. Look at your list of minority students. Look at the overlap. Use the comments section.
A classic result in the social analysis of science is that most papers are poorly cited. For example, the classic de Solla Price paper in Science (1965) found that the modal citation count in his sample was zero. Low mean and modal citation counts remain the standard in contemporary studies of scientific behavior. So, what gives?
Scientific research is a type of creative pursuit. By definition, journal articles are supposed to report on what is new or novel. Once you buy that, the low citation rates in science make sense. First, creativity (or importance) is a scarce commodity. Anyone trained in a psychology graduate program can do an experiment, but few can do a novel experiment. Second, new results are themselves scarce. Fields quickly get covered and only obscure points remain. Third, even if you have a creative scientist who found a genuinely important problem, they might not have an audience. Perhaps people are focused on other issues, or the scientist is low status or publishing in a low status journal.
In principle, we should expect that few articles will deserve more than token citation. But still, why can’t journals just stick to important stuff? The answer is imperfect knowledge. Once in a while we encounter obvious innovation, but usually we have a limited ability to predict what will be important. It is better to over publish and let history be the judge. Considering that the cost of journal publishing is low (but not the subscription!), we should be ok with a world of many uncited and lonely articles.
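As a toy illustration (not de Solla Price’s actual data), drawing citation counts from a heavy-tailed distribution reproduces the pattern described above: a modal count of zero alongside a much higher mean. The Pareto shape parameter here is an arbitrary choice for demonstration.

```python
import random
from collections import Counter

random.seed(1)
# Toy model: each paper's citation count is drawn from a heavy-tailed
# (Pareto) distribution, shifted so counts start at zero.
citations = [int(random.paretovariate(1.5)) - 1 for _ in range(10_000)]

counts = Counter(citations)
print("modal citation count:", counts.most_common(1)[0][0])   # zero dominates
print("mean citation count: %.2f" % (sum(citations) / len(citations)))
```

A handful of heavily cited papers pulls the mean well above the mode, which is exactly the "many lonely articles, a few stars" pattern the citation literature reports.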
I was having dinner with a Team Fabio affiliate who was choosing between two really excellent sociology programs. In discussing his choice, we got into the issue of who is now on top in terms of status. In Ye Olden Days, elite sociology meant the Chicago/Columbia/Berkeley axis plus the massive public flagship schools (UNC, Wisconsin, UCLA, Michigan, Ohio State, Penn State, Indiana). Now the landscape has changed a bit. The major change seems to be the rise of smaller private schools. While these schools have always been home to good scholars, it is only recently that they’ve boosted their status by gathering critical masses of elite scholars, publishing consistently in top presses and journals, and consistently placing PhD students in competitive programs. The examples are well known – Princeton, Harvard, and Duke in the top ten. Slightly lower down the ranking would be Northwestern, NYU, and Cornell. Certainly well known, but not considered powerhouses of sociology 20 or 30 years ago. Similarly, there’s been sliding among the elites, with Chicago and Columbia no longer at the top. The (flawed) 2011 NRC rankings also bumped some prominent flagships (Madison, Bloomington).
Why the change? There are many factors. There’s always complacency and in-fighting. But I think the change is more profound. First, the big flagships had a comparative advantage because 20th-century American sociology was built on big surveys. That’s no longer the case. Second, some programs “woke up.” My impression from reading history books is that elite private schools weren’t terribly interested in sociology. Deans were content to let a sociology program be dominated by one or two “big names,” but didn’t invest in the infrastructure needed for high-visibility sociology. For some reason, things changed: supporting sociology got on the agenda at these schools. Third, along the same lines, my sense is that there’s been a real change in training. Princeton, for example, seems to fit the model. No graduate has ever described it as a fun, cuddly place, but almost every grad reports enough financial support, almost all students have an adviser, and there is *lots* of prof/student co-authorship. Not much falling through the cracks. That translates into jobs and high visibility.
I encourage older faculty to comment. Does this match your perception? Counter evidence? Alternative explanations?
While catching up on some reading during spring break, I ran across a Journal of Organizational Ethnography article by organizational ethnographer Gideon Kunda. In this article, Kunda’s reflections about his development as an organizational ethnographer seem pertinent to the on-going orgtheory discussion of ethnography. Kunda not only describes how he became drawn to organizational studies (hint: questioning a figure of authority about the differential treatment of patients based on class), but also how he arrived at his topic and research site, generating the now iconic study Engineering Culture.
During his training, Kunda worked on several projects using other data collection methods (e.g., surveys), during which Goffman’s work on Asylums was instructive:
Here once again was a science that starts with ready-made theories, selectively uses them in accordance with interests unrelated to (or even opposed to) the logic and spirit of scientific inquiry, collects data using a method that assumes it knows what and how to ask before encountering the world of its subjects, and disrespects or ignores their complex realities, or for that matter, their feelings about who is studying them and why. What factors effect quality is a legitimate question, if one takes the managerial perspective (although this is not the only perspective that could and should be taken). But in order to answer it, in fact in order to even know how to go about studying it, I began to realize, one has to find ways to collect valid data. And the data, if that was what the facts of life should be called, were found in the richness of the stories I heard and the complexity of the interactions I observed, in people’s sense of who they were and what they were up to, and in their willingness to convey it to an interested outsider. Whether or not all this could or should be ultimately reduced to numbers and statistically analyzed seemed much less important than finding ways to collect, understand and interpret evidence that was respectful of its complex nature. If this was the case, it seemed to me, then the scientific system I was enmeshed in, even by its own standards – the norms of science that demand respect for the empirical world – was woefully inadequate. And worse – its procedures and output were embarrassingly boring, to me at least, when compared to the richness of the world it set out to comprehend.
In conclusion, Kunda states:
Over the years I have continuously noted and wondered about the extent researchers in the early stages of their careers, and graduate students in particular, feel, or are made to feel, that while they are granted the methodological license, and sometimes looseness, of “qualitative methods” (a phrase that often replaces or refers to a watered down version of ethnography), the academic authority system (in terms of funding, supervision, publication requirements and career options) compels them to limit their questions, choice of theory and writing style to those that enhance the chances of approval, funding and quick publication. I encounter again and again the ways that this commitment comes at the expense of a willingness to let fly their own sociological imagination, to cultivate and trust their own interpretive resources and analytic instincts, to respect and develop their innate language and authorial voice, or, for that matter, to risk long-term ethnographic fieldwork.
The issue then is not, or not only, one of competing methods, and to overstate such distinctions is, I believe, to miss my point. Rather, I see my story as an invitation to acknowledge and explore the shared conditions of all scientific claims to knowing and depicting social reality, organizational and otherwise, under whatever theoretical and methodological guise, that together place limits on the depth, insightfulness and indeed the validity of interpretation: the endless complexity of data, the incurable subjectivity of the observer, the fundamental flimsiness of formal method and the prevalence of unsubtle yet often disguised institutional pressures to conform to standards and ways of thinking outside and often against the pure logic of scientific inquiry.
If I am to formulate a conclusion, then, it is this: the continuing need to devise personal and collective ways – and I have suggested and illustrated some of mine – to release “discipline” from its misguided equation with an institutionally enforced a priori commitment to hegemonic theoretical discourse and methodological frameworks, and to apply it instead to its legitimate targets, the questions for which there can never be a final, authoritative answer, only continuing exploration and debate: What is data, what is a valid and worthwhile interpretation, how does it come about, what are and how to cultivate the personal sources of imagination that make it possible, how to report it and, not least, to what end.
Another major take-away for budding researchers is that peers can offer support. That is, scholarly development is not necessarily a hierarchical transmission of information from mentors to mentees, but the co-production of knowledge with peers.
This semester, I agreed to teach a PhD-level course on organizational theory when I realized that fewer and fewer colleagues who are trained in organizational research remain in sociology departments. Apparently, I am not the only organizational researcher who is wondering about the implications of the de-centralization of organizational sociology.
Mark your calendars for August! Liz Gorman has planned the following Organizations, Occupations, and Work (OOW) session for the ASA annual meeting this August in San Francisco. The line-up includes some of our regular commenters and readers:
Title: Section on Organizations, Occupations, and Work Invited Session. Does Organizational Sociology Have a Future?
Description: Few sociologists today consider themselves primarily scholars of organizations. Sociologists who study different types of organizations within their primary fields – such as economic sociology, science, social movements, political sociology, and urban sociology – are often not in conversation with each other. Many sociologically-trained scholars have migrated to business schools and become absorbed by the large interdisciplinary field of organization studies, which tends to have a managerial orientation. Little attention is directed to the broader impact of organizations on society. This invited session will consider these and other trends in the study of organizations within the discipline of sociology. It will ask whether “organizations” still constitutes a coherent subfield, whether it can or should be revitalized, and what its future direction might look like.
Participants:
Organizer: Elizabeth Gorman, University of Virginia
Panelists:
Howard Aldrich, University of North Carolina – Chapel Hill
Elisabeth Clemens, University of Chicago
Harland Prechel, Texas A&M University
Martin Ruef, Duke University
Ezra Zuckerman, MIT Sloan School
Topics: Organizations, Formal and Complex
This guest post on the federal government’s classification of sociology is written by Bogdan State, a doctoral student in sociology at Stanford University.
According to the Department of Homeland Security (DHS), Sociology is not a true science. Among its many responsibilities, the Department of Homeland Security is in charge of separating, for immigration purposes, the imposter sciences from the “real” ones. Seemingly, our discipline does not pass muster.
The story is – by now – a familiar one. The DHS divides academic disciplines into two categories: STEM (Science, Technology, Engineering and Math) and non-STEM. The former get a lot of attention and dominate the immigration debate while the latter are relegated to marginality. The official list is available here [http://www.ice.gov/doclib/sevis/pdf/stem-list.pdf]. Needless to say, the very idea of such a blunt distinction between science and non-science is problematic and misguided. Nonetheless, it’s a distinction that has very important consequences, which I am currently sorting through myself.
I am a doctoral student in a Sociology PhD program. About a year ago I decided to give industry a try and I was lucky enough to be offered a job at a major tech company, headquartered in the US. For someone who thrives on data and short publication cycles the job is a dream come true. And even though my title says I do “data science” (already derided by some naysayers as “not a science”), even though my days are spent defending the idea that Sociology can and should be a science at least as rigorous as Biology, Homeland Security seems to have a clear message: no way.
My problem is a common one for international students. I need permission to work outside of my University while in the US. Since my landing here for the first time in 2005 I have become ever more painfully aware of the difficulties involved in staying in the country post-graduation.
International students have twelve months during which they can work in the US in a job related to their specialty under what is called Optional Practical Training. Past those twelve months, their options for continued employment in the US usually revolve around the H1B visa, which allows them to work for a US company while seeking a green card through a lengthy and costly process of “labor certification” (which is supposed to ascertain the wholly-undecidable claim that the “alien” is not taking an American’s job). H1B visas are hugely controversial, and their issuance has been capped at 85,000 per year for most of recent memory (20,000 of which are reserved for people holding graduate degrees). Last year the cap translated into the DHS refusing to process (and thus practically denying) about a third of H1B applications filed. This year the ratio may be closer to one in two.
Compared to what comes after, Optional Practical Training is a relatively benign period during which the “alien” can focus on doing their job rather than on learning the regulatory alphabet soup inflicted on them by contradictory and sometimes outright hostile acts of Congress. The Government itself recognized the self-defeating nature of forcing international students – otherwise content to stay and contribute to the US economy – out of the US after American entities had invested huge amounts in their education. As a stopgap measure, foreign STEM graduates of American higher education institutions were granted a one-time, 17-month extension to their Optional Practical Training.
Sociology falls on the wrong side of the arbitrary divide imposed by the DHS (examples of some disciplines considered to be sciences by DHS: Archeology, Social Psychology, Management Science). Interestingly, the NSF does consider Sociology to be STEM. This would be funny were it not the source of a lot of headaches, dislocation, uncertainty, and plain misery.
In my own case, this policy has meant that I have not been able to access these extra 17 months of headache-free OPT extension that typically serve as a bridge to the much-desired (and irredeemably broken) H1B visa. It is part of why I have to leave the US and go pay taxes somewhere else. But our discipline’s location outside the STEM divide may have far more important consequences in the future.
Specifically, there has been a lot of talk about “stapling” green cards to STEM degrees, or of other important facilities afforded to the immigration of STEM graduates. Presumably, Congress will eventually pass an immigration law, and Sociology will be left on the outside of an admittedly artificial divide.
Let me emphasize that I do not believe for a moment in the validity of a division of the academic world made by government bureaucrats. But while fighting the idea of the division itself would be quixotic (given the current fixation on STEM), I believe there are enough sociologists without US citizenship or permanent residency who would be affected by this omission to make it worth the discipline’s attention.
The ASA has come up against this issue before (http://www.asanet.org/footnotes/feb13/vp_0213.html), but it does not look like they have ever addressed it on the immigration front. This is of course more than a matter of immigration policy: it also concerns our discipline’s being recognized as a bona fide science. As Sociologists we often deride the shortcomings of our methods, and that is certainly a healthy attitude. But we cannot let cocktail-party observations about “true” and “fake” sciences be enshrined into government policy.
My colleague Johan Bollen was featured in Nature because of his proposal for a new funding model for science:
What got you thinking about funding models?
A lot of people are unhappy with the current system. When you submit a proposal, you are like a contractor, but science does not work like that — it works best by generating ideas and gifting them to society and other scientists.
How did your idea take shape?
Some friends and colleagues had a Christmas party in 2012, and as soon as alcohol started to flow, so did commiseration. Guests talked about reviewer comments on proposals, marvelling that one person can have that much power. The disgruntlement is a by-product of how the review system works. I started by saying, “Why not just take all that money and distribute it evenly?” The goal was to see if we could, with as little administration as possible, distribute funding so that researchers have the freedom to explore the topics that they think matter most.
Briefly, what is your plan for science funding?
All scientists would receive a base amount — for example, US$100,000, which roughly corresponds to the US National Science Foundation’s 2010 budget divided by the number of senior researchers funded that year. Each scientist would be required to distribute a predetermined percentage of their funding to the researchers whom they believed would make best use of the money.
Read the whole thing.
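Bollen's scheme is simple enough to sketch as a toy simulation. The parameters below (100 scientists, a base grant of $100,000, a 50% mandatory give-away fraction, and an "esteem" weight governing who receives donations) are purely illustrative assumptions, not details from his proposal:

```python
# Toy simulation of a Bollen-style funding model: everyone receives a base
# allocation, then must pass a fixed fraction of their funds to chosen peers.
# All parameter values here are illustrative assumptions.
import random

def simulate(n_scientists=100, base=100_000, give_frac=0.5, years=10, seed=1):
    rng = random.Random(seed)
    funds = [0.0] * n_scientists
    # Assume each scientist has a fixed "esteem" score that makes some
    # colleagues more likely to receive donations (a stand-in for peer judgment).
    esteem = [rng.random() for _ in range(n_scientists)]
    for _ in range(years):
        # Everyone receives the base allocation.
        funds = [f + base for f in funds]
        # Each scientist passes a fixed fraction of their funds to one peer,
        # chosen with probability proportional to that peer's esteem.
        gifts = [0.0] * n_scientists
        for i in range(n_scientists):
            amount = funds[i] * give_frac
            funds[i] -= amount
            others = [j for j in range(n_scientists) if j != i]
            recipient = rng.choices(others, weights=[esteem[j] for j in others])[0]
            gifts[recipient] += amount
        funds = [f + g for f, g in zip(funds, gifts)]
    return funds

funds = simulate()
# Total money is conserved; what changes is how unevenly it ends up spread.
print(max(funds) / min(funds))
```

Even in this crude version you can see the design trade-off Bollen is after: no proposals and no review panels, with the distribution of funding emerging entirely from repeated peer-to-peer gifts.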
I’m quite excited about this promising new technology of eating a tasty lunch and conversing with colleagues online at the same time.
On the Facebook group, Jerry finally admitted that PLoS One was not the journal of the cheeto-eating antichrist. It has highly cited articles. It has good papers. It has a high impact factor. In other words, it's gonna be fine. But Jerry did raise one legitimate issue – how to curate the massive stream of PLoS One papers? There will obviously be many papers of low quality in the PLoS One model.
At first, I thought it was a problem. Then, I realized it wasn’t a problem at all. There are fairly easy ways to curate:
- Self-curation: People can publicize their own work.
- Crowd sourcing: Papers acquire reputation from informal networks. It’s happening on twitter right now.
- Citation count: Papers that the community cites get highlighted.
- Media attention: Papers attracting the media get highlighted.
- Prizes: PLoS – or any other group – can award prizes for excellence.
- Editorial/professional curation: People select good papers within their area of expertise. E.g., “Best PLoS Papers in Nuclear Fission 2014.”
Here’s the ironic thing – ASQ – Jerry’s journal – already curates papers for people who won’t read the whole journal. There is the ASQ award. The ASQ staff reports media mentions for specific papers. The ASQ blog summarizes papers for a larger audience. I couldn’t find it on the current website, but I think ASQ editors used to list papers from recent years fitting with a certain topic. ASQ isn’t alone. Other publishers use similar methods. For example, SSRN lists articles by “most downloaded.” Curation already exists and it works. In other words, Jerry should encourage the PLoS One community to emulate ASQ’s curation practices. It would be generous and help PLoS One reach the next stage in its development.
This guest post is written by Nicolette Manglos-Weber. She is a research assistant professor at the University of Notre Dame. Her work has appeared in Social Forces, Sociological Perspectives, and Sociology of Religion.
In many of the discussions I hear and read about preparing grad students for the brutal academic job market in sociology, one key point often gets missed or ignored: it's a very different thing to be prepared in a specialty area with dozens of jobs being advertised each cycle (e.g. criminology, medical/health) than it is to be prepared when the advertisements in your area come in a trickle (e.g. religion, culture). Perhaps it seems so obvious that it doesn't need to be said, but it's incredibly important, and something I think more grad students should know about much, much earlier in their programs when they are choosing their thesis topics (or, even better, when they are applying to grad school in the first place).
As was typical, at least in my cohort, I chose my topic purely on the basis of what I found most fascinating and who among the faculty I seemed to be simpatico with. I was certainly informed that focusing on religion in sub-Saharan Africa might make it more difficult to publish, but then in my third year I published my M.A. thesis in a good specialty journal, landed a publication in a top ASA journal as first author, and had an R&R as sole author at a solid mid-tier generalist journal. At the time, I thought to myself, “Phew. So that's taken care of!” I was doing what I was told to do, getting better and better at it each day, and enjoying myself. I had high hopes of avoiding the post-doc market completely and landing a TT job in my first year out, mainly because as an unpartnered young person I didn't want to bounce around the country alone for several years.
When advising PhD students, I try to dispel a misleading idea – all the “good” jobs go quickly and you are a complete failure if you can’t find employment by the Fall of your final grad skool year. This is simply incorrect. The sociology job market actually has three distinct phases. Once you appreciate this, it will help you out a lot:
- Round 1: The classic arts & sciences positions. In sociology, the research intensive programs usually advertise in summer, accept applications by October, interview in November, and extend offers by December (or earlier). The most competitive liberal arts colleges seem to recruit in round 1.
- Round 2: January-March – teaching intensive, professional schools, and post-docs. Winter break provides a nice cut point; many programs choose to go in the early Winter. In sociology, b-schools and ed schools will often interview in the Winter. A lot of high-status, well-funded post-docs, such as the recently deceased RWJ program, go at this time.
- Round 3: March-early summer. Pot luck – a diverse group of positions, including short term post-docs, very teaching intensive schools, private sector jobs, government, policy, and jobs at R1s that opened up due to last minute shifts in budgets. Some jobs may still be open if they were *really* slow in processing applications, or they had a long string of interviews that didn’t pan out. I’ve seen people get some very high quality jobs as late as April or May, because candidates 1-4 turned a department down.
I am not saying that there are a lot of jobs. It is still the case that academia is very competitive and some very good people won’t find jobs. What I am saying is that sociologists have a lot of options that are spread across the academic year. Don’t panic if things don’t immediately work out. It is in your interest to keep your eyes open and keep applying.
A guest post by Jerry Davis. He is the Wilbur K. Pierpont Collegiate Professor of Management at the Ross School of Business at the University of Michigan.
By this point everyone in the academy is familiar with the arguments of Nicholas Kristof and his many, many critics regarding the value of academics writing for the broader public. This weekend provided a crypto-quasi-experiment that illustrated why aiming to do research that is accessible to the public may not be a great use of our time. It also showed how the “open access” model can create bad incentives for social science to write articles that are the nutritional equivalent of Cheetos.
Balazs Kovacs and Amanda Sharkey have a really nice article in the March issue of ASQ called “The Paradox of Publicity: How Awards Can Negatively Affect the Evaluation of Quality.” (You can read it here: http://asq.sagepub.com/content/59/1/1.abstract) The paper starts with the intriguing observation that when books win awards, their sales go up but their evaluations go down on average. One can think of lots of reasons why this should not be true, and several reasons why it should, all implying different mechanisms at work. The authors do an extremely sophisticated and meticulous job of figuring out which mechanism was ultimately responsible. (Matched sample of winning and non-winning books on the short list; difference-in-difference regression; model predicting reviewers’ ratings based on their prior reviews; several smart robustness checks; and transparency about the sample to enhance replicability.) As is traditional at ASQ, the authors faced smart and skeptical reviewers who put them through the wringer, and a harsh and generally negative editor (me). This is a really good paper, and you should read it immediately to find out whodunit.
The paper has gotten a fair bit of press, including write-ups in the New York Times and The Guardian (http://www.theguardian.com/books/2014/feb/21/literary-prizes-make-books-less-popular-booker). And what one discovers in the comments section of these write-ups is that (1) there is no reading comprehension test to get on the Internet, and (2) everyone is a methodologist. Wrote one Guardian reader:
The methodology of this research sounds really flawed. Are people who post on Goodreads representative of the general reading public and/or book market? Did they control for other factors when ‘pairing’ books of winners with non-winners? Did they take into account conditioning factors such as cultural bias (UK readers are surely different from US, and so on). How big was their sample? Unless they can answer these questions convincingly, I would say this article is based on fluff.
Actually, answers to some of these questions are in The Guardian’s write-up: the authors had “compared 38,817 reader reviews on GoodReads.com of 32 pairs of books. One book in each pair had won an award, such as the Man Booker prize, or America’s National Book Award. The other had been shortlisted for the same prize in the same year, but had not gone on to win.” And the authors DID answer these questions convincingly, through multiple rounds of rigorous review; that’s why it was published in ASQ. The Guardian included a link to the original study, where the budding methodologist-wannabe could read through tables of difference-in-difference regressions, robustness checks, data appendices, and more. But that would require two clicks of a functioning mouse, and an attention span greater than that of a 12-year-old.
This is a non story based on very iffy research. Like is not compared with like. A positive review in the New York Times is compared with a less complimentary reader review on GoodReads…I’ll wait to fully read the actual research in case it’s been badly reported or incorrectly written up
Evidently this person could not even be troubled to read The Guardian’s brief story, much less the original article, and I’m a bit skeptical that she will “wait to fully read the actual research” (where her detailed knowledge of Heckman selection models might come in handy). After this kind of response, one can understand why academics might prefer to write for colleagues with training and a background in the literature.
Now, on to the “experimental” condition of our crypto-quasi-experiment. The Times reported another study this weekend, this one published in PLoS One (of course), which found that people who walked down a hallway while texting on their phone walked slower, in a more stilted fashion, with shorter steps, and less straight than those who were not texting (http://well.blogs.nytimes.com/2014/02/20/the-difficult-balancing-act-of-texting-while-walking/). Shockingly, this study did not attract wannabe methodologists, but a flood of comments about how pedestrians who text are stupid and deserve what they get. Evidently the meticulousness of the research shone through the Times write-up.
One lesson from this weekend is that when it comes to research, the public prefers Cheetos to a healthy salad. A simple bite-sized chunk of topical knowledge goes down easy with the general public. (Recent findings that are frequently downloaded on PLoS One: racist white people love guns; time spent on Facebook makes young adults unhappy; personality and sex influence the words people use; and a tiny cabal of banks controls the global economy.)
A second lesson is that there are great potential downsides to the field embracing open access journals like PLoS One, no matter how enthusiastic Fabio is. Students enjoy seeing their professors cited in the news media, and deans like to see happy students and faculty who “translate their research.” This favors the simple over the meticulous, the insta-publication over work that emerges from engagement with skeptical experts in the field (a.k.a. reviewers). It will not be a good thing if the field starts gravitating toward media-friendly Cheeto-style work.