Archive for the ‘fabio’ Category
What the heck, let's do anarchism week. Let's start with the following conversation I had at the end of my social theory class a few semesters ago. A student approached me and asked why I didn't teach anarchism in the course. There are a few good reasons, but none so strong that you couldn't include it if you really wanted to.
First, the goal of my social theory class is to have people read original texts written by seminal social thinkers. This doubles as a sort of Western civ course (since IU doesn't require one), and people need to understand the core arguments of sociology. So we hit the "classics," the interactionists, feminists, French theory,* and a little evolutionary psych. The course also needs to prepare a handful of students who will continue in soc, poli sci, or other fields at the graduate level.
Second, I teach things that really drive discussion in contemporary sociology, which means that many topics, including those dear to my heart, must get cut. Since there are very few anarchist sociologists, and little research that uses an anarchist perspective, it simply isn't a priority.
But that doesn't mean that anarchism isn't a real social theory or that it should be actively excluded. On the contrary, there's now a body of anarchist-themed social writing, mainly in fields other than sociology. For example, anthropologist David Graeber's writings should count. James Scott, the political scientist, has written about statelessness at length. There are the classic anarchists, like Proudhon, and feminist anarchists like Emma Goldman. You have right-wing anarchists like economist Murray Rothbard or philosopher Michael Huemer. Then you have empirical studies of statelessness like Pete Leeson's pirate book.
In other words, you have more than enough material and it’s high quality material. But it’s definitely not central to sociology (yet?), so you don’t feel guilty cutting it. But the social theory course isn’t set in stone. I am already tiring of French theory and other topics, so it may be time to rotate some new material in.
* Remember, I don’t teach postmodernism anymore.
A few days ago, Ju Hong heckled President Obama at a speech. He asked the President to sign an executive order to stop deportations. The President said that he did not have the power to do so and that Congress would have to change the law. This is just plain wrong. While it is certainly true that Congress writes the law, the executive branch has a lot of freedom in choosing which laws to enforce and how to enforce them. For example, the state and local police don’t give tickets to every single person on the highway who drives 61 miles per hour or faster. The police make all kinds of judgments about when the infraction should be punished. And this is a standard feature of being a prosecutor. You actually have discretion.
At the Federal level, it is very clear that the modern presidency has accumulated a great deal of discretion in how to enforce the law. For example:
- Signing statements – apparently, lots of presidents have gotten away with ignoring laws they find inconvenient.
- Pardons – if a law is deemed to be wildly unjust, the President can just pardon people en masse. For example, President Carter pardoned a couple of million people who evaded the draft.
- Executive order – Obama could easily produce a legal argument that deporting someone causes great economic harm and separates them from their family, and thus constitutes harsh punishment for the administrative violation of coming to America without the right paperwork. Then, he could instruct the relevant federal department (DHS) to simply suspend deportations, especially of minors, on the grounds that the punishment is unconstitutional.
In other words, a legal system that allows presidents to kidnap people and send them to Guantanamo forever could easily be mustered to prevent the deportation of the guy with the leaf blower. It ain’t that hard.
True story: In 2012, I reviewed a paper for a journal. I thought it was a good paper. With some modest revision, it could probably be accepted at a top journal. In summer 2013, I was asked to review the revision. At this point, I had learned that the journal had a notorious reputation for sending papers through three or four rounds of review and rejecting them after years of lengthy revisions.
So, I wrote to the managing editor and said that I was a bit worried about the multiple R&R policy. I didn’t want to be part of an extremely long R&R process unless there was a high probability that it would lead to publication. What is the point of me offering guidance when it is all thrown away as the authors try to make a third or fourth round of reviewers happy? It is unfair to everyone.
The managing editor offered a diplomatic answer. In general, they can’t discuss the state of a manuscript that is under review. Aside from that, the manager noted the paper was only on the first round as indicated by the “R1.” Fair enough.
I agreed to review the paper because I didn't want the author to be stuck with a completely new reviewer with new demands. So I told the journal that I would help out. In an attempt to humorously convey my concerns, I wrote back: "Ok, but if we go into triple R&R territory, your bosses will receive aggressive email from me." The response, in its entirety:
Thank you again for your thoughts concerning this manuscript. Unfortunately, we are unable to accept your offer of review in terms that would constitute prior restraints on the possible outcome of the review process.
Interesting. Expressing disagreement with a policy is viewed as a "constraint." Go figure. The upside is that I now have more time for reviews at other journals. The downside is that the author(s) will probably get a new reviewer who is almost certainly slower than me and will definitely ask for a whole new set of revisions. Since I can't break confidentiality, I can only offer a vaguely directed apology for the problems that the author will now have to deal with. And the possibility of three more R&Rs and a rejection at the end.
This happened in August, and I haven't received any more requests for reviews, though I used to get them all the time. So if you ever wondered what it would take to get banned from a journal's reviewer roster, all it takes is some criticism of the editors' quadruple-R&R rejection policy.
Sociologist and blogger Phil Cohen has an op-ed in the NY Times on gender inequality. Here’s a key clip:
The assumption of continuous progress has become so ingrained that critics now write as if the feminist steamroller has already reached its destination. The journalists Hanna Rosin (“The End of Men”) and Liza Mundy (“The Richer Sex”) proclaimed women’s impending dominance. The conservative authors Kay S. Hymowitz (“Manning Up”) and Christina Hoff Sommers (“The War Against Boys”) worried that feminist progress was undermining masculinity and steering men toward ruin.
But in fact, the movement toward equality stopped. The labor force hit 46 percent female in 1994, and it hasn’t changed much since. Women’s full-time annual earnings were 76 percent of men’s in 2001, and 77 percent in 2011. Although women do earn a majority of academic degrees, their specialties pay less, so that earnings even for women with doctorate degrees working full time are 77 percent of men’s. Attitudinal changes also stalled. In two decades there has been little change in the level of agreement with the statement, “It is much better for everyone involved if the man is the achiever outside the home and the woman takes care of the home and family.”
After two steps forward, we were unprepared for the abrupt slowdown on the road to gender equality. We can make sense of the current predicament, however — and gain a better sense of how to resume our forward motion — if we can grasp the forces that drove the change in the first place.
Read the whole thing.
One of the most frustrating aspects of social science reviewing is the slow review time. Gabriel Rossman says that we are the problem. Rather than focus on what can be easily fixed or provide up or down decisions, reviewers take too long, offer contradictory recommendations, and encourage bloated papers. If I were to summarize Gabriel’s post, I’d say that:
- Keep your review short. Don't write that 6-page single-spaced commentary. One page or so is probably enough in most cases.
- Don’t whine about what the authors should have written about. Evaluate what they actually wrote about.
- Be decisive. Yes or no.
- Don’t ask for endless citations, commentaries, extra analyses, etc.
- All suggestions should be constructive, not busy work.
- Let it go: after a while, it becomes counterproductive. If you hate it, just say so. If you like it, just say so. No more revisions. It's done.
I also like Gabriel’s suggestion that reviewers should show some spine. In the summer, I was asked to review a 3rd R&R. My entire response was “Dude, seriously? Three R&R’s? Just accept it.” Result: paper accepted.
Desperate for a workshop speaker? Send me an email. My topics:
- The politics of the antiwar movement after 9/11
- Black Power/Black Studies
- More tweets, more votes: social media as a measurement of public opinion
- Knowledge and practice in infection control – new project on the organizational behavior of hospitals
I’ll do it for free if I can drive there. If you pick up transportation costs, I come cheap. If anyone in NYC wants me to visit in Mar/April/May, I will work for tips.
PS. I have two topics for grad student groups: grad skool rulz and public sociology. Undergrads may enjoy a discussion of my manuscript in progress on social theory.
I was recently listening to the podcast Bad at Sports, which covers the contemporary art world. This episode is a long interview with dealer, writer, and provocateur Matt Gleason. A lot of good stuff, but this caught my ear. Gleason claims that one of the major reasons that Jeffrey Deitch was disruptive as director of LAMOCA was that he pursued "post-curator art." What does that mean? My translation:
Over the last 50 years, the art world has institutionalized. Museums are run by professionals, artists get MFAs, and the art market is centralizing around art fairs. What was so disruptive about Deitch was that he rejected the institutionalization of the curator – the people who pick art, stage exhibitions, and manage collections.
In other words, in a world of professionalization, Deitch said: "Screw it, my kid can do this." And he did it. Deitch fired one of the main curators, had celebrities do shows, and curated many shows himself. Very "post."
I once asked an art professional what he learned from interacting with Deitch, and he said something like, "I learned that you can hand over an art gallery to teenagers and it'll work." A metaphor, perhaps, but it captures the spirit. People with degrees don't have a monopoly on good taste. Gleason notes that this is self-serving. A museum with poor finances, like LAMOCA, might not have the cash for carefully curated shows, and it would be easy to have some SoCal celebrity show work. But still, the comment is telling. The art world has institutionalized, but it rests on jello foundations.
The editor of Social Problems, Becky Pettit, recently posted a review of submission practices and trends, with a focus on gender. Comments,* in no particular order:
- 8% accept? Holy cannoli! I knew it was competitive, but that's in the realm of ASR/AJS. ASR's accept rate was 6%; AJS accepts 10%.
- Thankfully, SP does a lot of desk rejection.** About 30%.
- Even with desk rejection, it does seem to take a while – a mean time of 135 days. That's about 4.5 months, so many papers take 5, 6, or 7 months. After dealing with the lightning-fast world of biomedical journals, this is snail-like.
- Senior profs review less often than juniors do. Female assistant professors review the most.
- Men are *way* more likely to appeal. As Phil Cohen notes, it would be good to know if it's just that women have more accepts or if men just whine more. That is, we want the appeal/reject ratio.
Bottom line: Social Problems is a de facto top general journal in soc, it behaves like a typical social science journal in terms of turnaround and some other factors, and there is definitely gender inequality in reviewer and author behavior.
* Disclosure: I have a soon-to-be-rejected paper under review at Social Problems.
** Yes, I know – “deflection!”
A lot of sociologists buy into the theory of "sponsored mobility," which means that elites pick who gets the mobility. So I think there should be a lot of sympathy for recent research showing that mentorship (communicating with more advanced people) does not have an effect on career advancement but sponsorship (people who pick you, push you, and benefit from it) does. Robin Hanson reviews a book by economist Sylvia Ann Hewlett that makes this claim:
In a new book, economist Sylvia Ann Hewlett uses data to show that mentorship, in its classic wise-elder-advises-younger-employee form, doesn't produce statistically significant career gains. What does, however, her research found, is something she has termed "sponsorship"—a type of strategic workplace partnering between those with potential and those with power. …
And there is an important implication for the study of gender and inequality:
Women are only half as likely as men to have a sponsor—a senior champion at work who will basically take a bet on them, tap them on the shoulder, and really give them a shot at leadership. Women have always had mentors, friendly figures who give lots of advice. They’re great. They’re good for your self-esteem; they’re good for your personal development. But no one’s ever been able to show that they do anything to help you actually move up. …
We find that women in particular often choose the wrong people. … They seek out a senior person they’re very comfortable with. … For a sponsor, you should go after the person with power, because you need someone who has a voice at those decision-making tables. You need to respect that person, you need to believe that person is a fabulous leader and going places, but you don’t need to like them. You don’t need to want to emulate them.
If true, this forces me to modify my views. I have always believed that sponsored mobility is important in academia, but I believe that mentorship matters as well. If Hewlett is right, my belief is misplaced. It's really about sponsored mobility. So, if you care about women or minorities advancing in some career track (like academia), then forget the nice lunches. Administrators should double down on matching people with power players. A bit rude, but it might be one concrete way to chip away at inequality in the leadership of the academy.
Dissertation topic for up and coming orgheads: Facebook’s complete dominance over the field of friendship based social networking creates an interesting opportunity for the study of organizational identity. Usually, when a firm comes to completely rule an industry, a few firms pick up the scraps and the rest just go under.
But there is another, less explored path. Losers can change their identity. Social networking is a great example. Friendster just gave up its original business model and is now marketed as a gaming web site. MySpace also abandoned its role as a serious player in social networking and reverted to its original goal of serving musicians that reach out to their fans.
Here are some questions I would ask: 1. What percentage of loser firms change identity? 2. What conditions enable identity change in firms? 3. What conditions enable successful identity change, in the sense that the firm now accomplishes its goals because of its new identity? My hunch is that corporate culture is going to be a big factor. To pull this off, you'll need a group of people who can be managed so that they won't bail on the org as it redefines itself, and management that won't just sell the firm for spare parts rather than find a new home for it. Please use the comments to prove/disprove the hypothesis.
A few weeks ago, we all laughed when MIT was praised for its well known (but nonexistent) sociology department. But a serious question went unasked: why doesn’t MIT have a degree granting sociology unit? At first, you think the answer is obvious. MIT is an engineering and science school. We shouldn’t expect it to offer any sociology aside from a few courses for general education of engineering students.
But hold on! MIT offers lots of non-STEM degrees. For example, it has a highly regarded business school and an architecture school. Ok, you say, maybe it only offers nuts-and-bolts professional programs that are closely allied with STEM fields. Yet that argument doesn't hold water. MIT also allows students to major and/or concentrate in music. It's also got well-known PhD programs in humanities fields like philosophy, social sciences like political science and economics, and a sort of catch-all program that combines history, anthropology, and science studies. Heck, you can even get the ultimate fluffy major – creative writing.
It’s even more baffling when you realize that it is amazingly easy to create a BS or PhD degree focusing on the quantitative side of sociology (e.g., applied regression, networks, demography, stochastic process models, soc psych/experimental, survey analysis, simulation/agent based models, rational choice/game theory, etc.)
My hypothesis is that the typical MIT faculty member or alum relies on the reputation of sociology, not on what the field is actually about. Like a lot of folks, they write the field off as a hopeless quagmire of post-modernism, even though, ironically, most sociologists are not post-modernists. The reality is that the field is a fairly traditional positivist scholarly area with normal, cumulative research. Even qualitative research is often presented in ways that most normal-science types would recognize. It's really too bad. Sociology could use a healthy dose of ideas from the hard sciences, and MIT could be the place where that happens.
The devastation is massive in the Philippines. What organizations need is money, so aid workers can be paid and supplies moved to the disaster area. Buzzfeed has a list of reputable organizations that are collecting funds. Thanks.
From my guy in Ann Arbor:
2014 Junior Theorists Symposium
15 August 2014
SUBMISSION DEADLINE: 15 FEBRUARY 2014
We invite submissions for extended abstracts for the 8th Junior Theorists Symposium (JTS), to be held in Berkeley, CA on August 15th, 2014, the day before the annual meeting of the American Sociological Association (ASA). The JTS is a one-day conference featuring the work of up-and-coming theorists, sponsored in part by the Theory Section of the ASA. Since 2005, the conference has brought together early career-stage sociologists who engage in theoretical work.
We are pleased to announce that Marion Fourcade (University of California – Berkeley), Saskia Sassen (Columbia University), and George Steinmetz (University of Michigan) will serve as discussants for this year’s symposium.
In addition, we are pleased to announce an after-panel on "The Boundaries of Theory" featuring Stefan Bargheer (UCLA), Claudio Benzecry (University of Connecticut), Margaret Frye (Harvard University), Julian Go (Boston University), and Rhacel Parreñas (USC). The panel will examine such questions as what comprises sociological theory, and what differentiates "empirical" from "theoretical" work.
We invite all ABD graduate students, postdocs, and assistant professors who received their PhDs from 2010 onwards to submit a three-page précis (800-1000 words). The précis should include the key theoretical contribution of the paper and a general outline of the argument. Be sure also to include (i) a paper title, (ii) author’s name, title and contact information, and (iii) three or more descriptive keywords. As in previous years, in order to encourage a wide range of submissions, we do not have a pre-specified theme for the conference. Instead, papers will be grouped into sessions based on emergent themes.
Please send submissions to the organizers, Daniel Hirschman (University of Michigan) and Jordanna Matlon (Institute for Advanced Study in Toulouse), at email@example.com with the phrase “JTS submission” in the subject line. The deadline is February 15, 2014. We will extend up to 12 invitations to present by March 15. Please plan to share a full paper by July 21, 2014.
A group of sex workers in New York City has openly criticized Sudhir Venkatesh's recent ethnography of New York sex workers. There are many criticisms, but one stands out for me. An article from the Museum of Sex blog relates how SWOP-NYC and SWANK, two sex worker groups, thought that Venkatesh's work increased the risk to prostitutes by reporting that clients could opt out of condoms for a 25% surcharge:
His conclusions, for example about large numbers of sex workers advertising on Facebook, were easily shown by other researchers and commentators to be incorrect. Other conclusions, such as the fiction that "there's usually a 25% surcharge" to have sex without a condom, not only bore no relationship to reality but also endangered sex workers and public health programs working with them.
We were so concerned by what we uncovered that in October 2011 we wrote a letter to the Columbia University Institutional Review Board (IRB) and to the Sociology Department asking for some clarity about Sudhir Venkatesh's research. Specifically, we asked for the research project titles, dates of research, and IRB approval numbers for each of the years he claimed to have conducted research while at Columbia University. We also wished to make Columbia University's IRB and the Sociology Department aware that the research appeared to create additional harms and risks for sex workers in the New York area. Our action is an example of the degree to which communities of sex workers have organized and the degree to which we will question research that we find harmful. We are no longer a "gift that keeps on giving" for Venkatesh; we are a community that speaks for itself.
For me, the IRB issue sticks out for legalistic reasons. How exactly does a third party appeal to an IRB? It's obvious if the aggrieved person is a research subject. But what about third parties? Let's say that SWOP & SWANK are correct that this book/article increases risk: what responsibility (if any) does an IRB have?
The issue is unclear because IRBs themselves are muddled institutions. They don't operate through statute or contract. An IRB is an ad hoc administrative unit set up by universities to make sure research complies with federal guidelines. At most, it can interfere in research if you cross it. But IRBs aren't penal institutions – there's no IRB police. There's no "human subjects 911." Even though I am sympathetic to the claim that ethnographic publications may endanger at-risk groups, it is unclear to me how third parties may leverage genuine concern into an actionable complaint.
In my graduate seminar, we had a really good discussion about the ever-changing health care industry. One issue that came up was the disappearance of the "house call" – doctors who visit you at home when you are sick. I don't know the history of why this practice disappeared in healthcare, but I do know that it is very hard to bring it back. There are occasional stories here and there of physicians who try to revive the practice, yet it doesn't stick.
A few hypotheses about the continued absence of house calls:
- Physician resistance: you can pack in more patients at the office to increase income.
- Physician culture: The ideal of personally cultivating close relationships in this way is no longer common.
- Regulation: Medicare and insurance make it hard to spend the extra time to visit people. You simply need to stay in the office.
- Technology: You need to go to an office for the equipment.
- Status: As physicians shifted from low status to high status during the 20th century, the patients had to go to them.
One of the biggest differences in graduate training is that most quantitative social scientists learn OLS, while economists learn OLS and time series in their basic course sequence. Why is that?
In a nutshell, economists have really good time series data, while most social scientists have very boring time series data. For example, economists look at stock prices, market indices, and other measures of economic performance. These variables can display a great deal of volatility; there is something to be explained. In contrast, a lot of time series data in sociology and political science is highly path dependent. For example, party ID is very stable over the life course. Marital status changes infrequently. Public opinion usually moves real, real slow. When we do get data that shows real variance over time, like career data, it doesn't come on a nice scale – it's messy.
Bottom line: the methods follow the data, as they should!
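To make the contrast concrete, here is a quick simulation – a hypothetical sketch, not anything from the post, with invented series names and parameters – comparing a near-white-noise "returns" series to a highly persistent "party ID" series:

```python
# Contrast a volatile series (like daily stock returns) with a highly
# path-dependent one (like party ID measured yearly). All data invented.
import random

random.seed(42)

def lag1_autocorr(xs):
    """Sample lag-1 autocorrelation of a series."""
    n = len(xs)
    mean = sum(xs) / n
    var = sum((x - mean) ** 2 for x in xs)
    cov = sum((xs[t] - mean) * (xs[t - 1] - mean) for t in range(1, n))
    return cov / var

# "Economist" series: near-white-noise returns -- lots of variation to explain.
returns = [random.gauss(0, 1) for _ in range(1000)]

# "Sociologist" series: AR(1) with persistence 0.99 -- this period's value
# is almost entirely last period's value.
party_id = [0.0]
for _ in range(999):
    party_id.append(0.99 * party_id[-1] + random.gauss(0, 1))

print(round(lag1_autocorr(returns), 2))   # near 0
print(round(lag1_autocorr(party_id), 2))  # near 1
```

The first series rewards time-series machinery because each observation carries news; in the second, knowing yesterday's value tells you almost everything, which is exactly why "X_t correlates with X_t-1" is an uninteresting finding.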
Control Point Group, a political consultancy firm, asked my opinion on a recent Pew study of public opinion and twitter. I’ll quote Politico reporter Dylan Byers, who summarized the Pew study:
Sixteen percent of U.S. adults use Twitter and just half that many use it as a news source, making it an unreliable proxy for public opinion, according to a new survey from the Pew Research Center and the John S. and James L. Knight Foundation.
Take last year’s Republican primary, for example: “During the 2012 presidential race, Republican candidate Ron Paul easily won the Twitter primary — 55% of the conversation about him was positive, with only 15% negative,” Pew writes. “Voters rendered a very different verdict.”
Or the Newtown school shooting: “After the Newtown tragedy, 64% of the Twitter conversation supported stricter gun controls, while 21% opposed them. A Pew Research Center survey in the same period produced a far more mixed verdict, with 49% saying it is more important to control gun ownership and 42% saying it is more important to protect gun rights.”
That’s worth keeping in mind next time you see the reaction-on-Twitter piece in the wake of any major national news event. However, Twitter may be a more reliable indicator of youth sentiment.
This is a subtle point. Pew is doing what computer scientists call sentiment analysis. Roughly speaking, you write a program that guesses whether some text, in this case a tweet, reflects a positive or negative sentiment. The literature (including the Pew study cited) shows very mixed results. The takeaway for me is that either sentiment is tricky to measure properly or the emotional content of text doesn't correlate well with behaviors that we care about.
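Roughly, a bare-bones version of such a program – a lexicon word-count scorer, far cruder than what Pew actually used, with made-up word lists – might look like this:

```python
# Toy lexicon-based sentiment scorer (illustrative only; Pew's actual
# method is more sophisticated). Word lists are invented.
POSITIVE = {"great", "win", "support", "love", "best"}
NEGATIVE = {"bad", "lose", "oppose", "hate", "worst"}

def sentiment(tweet):
    """Classify one tweet as 'positive', 'negative', or 'neutral'
    by counting lexicon hits."""
    words = tweet.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(sentiment("Great rally, love this candidate"))    # positive
print(sentiment("Worst debate performance, just bad"))  # negative
```

Even this toy version shows where the trouble starts: sarcasm, negation ("not great"), and political jargon all defeat simple word counting, which is one reason sentiment results are so mixed.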
In contrast, our research (and that done by others) shows that relative shares of mentions, regardless of sentiment, do show a positive correlation with some political behaviors, like voting. My hypothesis is that the relative volume of talk is simply a proxy for buzz, name recognition, popularity, or some other variable. Regardless, the correlation is there.
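The sentiment-free alternative is simple to state: just count. A minimal sketch of the mention-share measure – candidate names and tweets here are entirely hypothetical – could be:

```python
# Minimal sketch of the mention-share measure: ignore sentiment and
# count which candidate each tweet names. All data invented.
from collections import Counter

CANDIDATES = ["smith", "jones"]  # hypothetical candidates

tweets = [
    "Smith crushed the debate",
    "can't stand smith tbh",      # negative tweet still counts as a mention
    "Jones has my vote",
    "smith smith smith",          # counted once per tweet
]

def mention_shares(tweets, candidates):
    """Return each candidate's share of tweet mentions, counting a
    candidate at most once per tweet."""
    counts = Counter()
    for tweet in tweets:
        words = set(tweet.lower().split())
        for c in candidates:
            if c in words:
                counts[c] += 1
    total = sum(counts.values()) or 1
    return {c: counts[c] / total for c in candidates}

print(mention_shares(tweets, CANDIDATES))  # {'smith': 0.75, 'jones': 0.25}
```

Note that the hostile tweet about "smith" raises his share: the measure deliberately treats all buzz as signal, which is consistent with the hypothesis that volume proxies for name recognition rather than approval.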
At last week’s PLEAD conference on social media and political processes, Alex Hanna tweeted a summary of a talk by Mark Huberty of UC Berkeley political science, which raised some questions about using social media data to forecast electoral results. Alex suggested that we could have a good discussion about Mark’s talk. In these comments, I rely on Alex’s summary. If I mis-characterized a point, please email me or correct me in the comments.
1. Huberty noted, correctly, that incumbency highly correlates with electoral wins. The implication is that social media data is not valuable, or important, or accurate, because incumbency accounts for a lot of the variance in electoral outcomes.
Well, it depends on what your goals are. If you are making the claim that "A causes B," then finding out that C accounts for much of the variance is extremely important. It shows that A isn't causing B. However, if your claim is that "A is a decent measurement of B," then finding out that C is a strong correlate of B is simply irrelevant. The claim isn't about the fundamental cause of B, just about what tracks with B.
Different claim, different standard of proof. That's why we care about polls. Incumbency predicts elections better than polls, but as long as we don't claim that polls cause election outcomes, we remain satisfied with the well-documented correlation between voter surveys and final votes.
Also, incumbency is not a reasonable variable to benchmark against because incumbency is simply a word for “the person who won last time in the same election with a very similar group of voters.” As good social scientists know, a lot of human behavior is seriously auto-correlated. What I ate yesterday is the best predictor of what I’ll eat tomorrow. Politics is no different.
Thus, in a lot of social science, we aren't interested in these sorts of time series because we know the answer already. X_t is almost certainly strongly correlated with X_t-1. The interesting question is why the time series is X_1, X_2, … and not Y_1, Y_2, …. Similarly, we might be interested in "extracting a signal" from some new source of data to help us measure X_i or build a causal explanation that doesn't fall back on trivial autocorrelated time series explanations. In other words, "The guy is an incumbent because there are a lot of Black voters" is a much more meaningful statement than "The guy won this time because he won last time."
That is ultimately why I remain interested in social media and electoral outcomes. Social media is a record of what people think that is different from polls and traditional print or broadcast media. It deserves a serious examination as a signal. And given the work by Huberty himself, Tumasjan, Jungherr, Beauchamp, the Indiana group, and others, the "social media as measurement of political sentiment" hypothesis is important and, as far as I can tell, supported to varying degrees by Twitter data. Incumbency is a non-issue as long as researchers and political professionals avoid claims of causation.
2. Alex also indicated that Mark Huberty was concerned about how social media data is created. Here, I also agree. Transparency is important. All data is imperfect – people lie on polls, surveys have selection biases, etc. There is a discussion about the properties of the samples that Twitter produces for researchers that might lead one to think there is an issue. The more we know about the way social media samples are generated, the better.
Still, the issue is *how much* of a problem this is. On this point, I urge Mr. Huberty to be bluntly empirical. The blunt empiricist, I would argue, would just put it to the test: look for natural experiments in the data (transparent data vs. other sources) or well-chosen comparisons to see how much this affects the social media–vote correlation. Rather than point to possible problems, research would actually identify them. It might not matter, or it might be a big deal. Let's figure it out!
As you well know, I think the PhD program is a terrible choice for most students. Quite simply, the PhD program is risky (only 50% completion rate), costly (5+ years), and many disciplines have poor job prospects (e.g., most of the humanities, many biological sciences and many social sciences). Furthermore, a lot of students think it is a credential that is needed for non-academic jobs, which is not generally true.
But still, maybe you weren't fazed by the "don't go to grad school" speech. Maybe you really have a passion for teaching, or for interpreting Foucault. Or maybe you simply don't care about the negatives associated with academic careers. I welcome you to academia. I pity you as well.
So, then, what sort of PhD should you get? Here’s an argument for the sociology Ph.D.:
- Low barrier to entry – you just need a solid academic record, not extended training in math, foreign language, or other rare skills.
- You learn solid research skills like survey design, regression models, and interview technique that have non-academic labor market value.
- You can study a wide range of topics and do so almost immediately. No need to engage in endless post-docs.
- Policy relevance.
- Decent academic job prospects compared to most other fields. The sociology market is tight, but soc PhDs frequently get jobs in lots of other programs like education, business, policy, social work, and occasionally in adjacent areas like American studies, ethnic studies, political science, and anthropology.
- Broadly defined topic – if you have a real passion for a topic that is genuinely social in some way, you can probably find a way to write a dissertation on it.
The one big downside is that sociology programs adhere to the humanities model of long time to PhD. There is no need for this. If you focus on a dissertation topic early on, choose your dissertation chair wisely, and insist on getting published at least once, there is no need for your degree to take longer than 4 or 5 years.
A recent Washington Post op-ed describes recent research showing that interviews are poor predictors of future job performance. The idea is old, but the new results elaborate on it in useful ways. From Daniel Willingham, a psychologist at the University of Virginia:
You do end up feeling as though you have a richer impression of the person than that gleaned from the stark facts on a resume. But there’s no evidence that interviews prompt better decisions (e.g., Huffcutt & Arthur, 1994).
A new study (Dana, Dawes, & Peterson, 2013) gives us some understanding of why.
The information on a resume is limited but mostly valuable: it reliably predicts future job performance. The information in an interview is abundant–too abundant actually. Some of it will have to be ignored. So the question is whether people ignore irrelevant information and pick out the useful. The hypothesis that they don’t is called dilution. The useful information is diluted by noise.
Dana and colleagues also examined a second possible mechanism. Given people’s general propensity for sense-making, they thought that interviewers might have a tendency to try to weave all information into a coherent story, rather than to discard what was quirky or incoherent.
Three experiments supported both hypothesized mechanisms.
In other words, interviews encourage people to see patterns in the data where none exist. They also distract us with irrelevant information. Toss this in the file of “we have evidence it don’t work, but people will do it anyway.”
The Right will remember Obama as a Godless Muslim Socialist.* The Left will remember him as He Who Brought Us Healthcare. Overseas, he may come to be known as the Great Snoop, or perhaps, Death from Above. But there are many in this country who will remember Obama as El Deportador.
According to sociologist Tanya Golash-Boza, Obama has headed one of the most intense waves of deportation in the history of the United States and the onus heavily falls on non-whites. Golash-Boza describes this in a recent Houston Chronicle op-ed (ungated version here):
The deportation of legal permanent residents has hit black immigrants particularly hard. Using data from the Department of Homeland Security and the U.S. Census Bureau, I calculated that one of every 12 Jamaican and Dominican male legal permanent residents has been deported since 1996.
The United States currently detains upwards of 30,000 immigrants per day, much as it imprisoned more than 120,000 people of Japanese origin during World War II without trials or other court processes. The Department of Homeland Security has broad discretion to arrest and detain any person they suspect does not have the legal right to be in the United States. People held under such detention do not have the same rights and safeguards as criminal suspects. They do not have the right to a speedy hearing before a judge nor do they have the right to appointed counsel.
In 2012, more than 400,000 people were deported. Nearly 100,000 of them were parents of U.S. citizens. Tens of thousands of these children will grow up in the United States knowing that the U.S. government has taken away their right to grow up with one or both of their parents.
Numerous commentators note that Obama’s administration has deported more people in five years than were deported in all eight years of Bush II, and more than under all previous administrations going back to 1892 (!!):
According to current figures from Immigration and Customs Enforcement — the federal agency responsible for deportations — Obama has removed 1.4 million people during his 42 months in office so far. Technically, that’s fewer than under George W. Bush, whose cumulative total was 2 million. But Bush’s number covers eight full years, which doesn’t allow an apples-to-apples comparison.
If you instead compare the two presidents’ monthly averages, it works out to 32,886 for Obama and 20,964 for Bush, putting Obama clearly in the lead. Bill Clinton is far behind with 869,676 total and 9,059 per month. All previous occupants of the White House going back to 1892 fell well short of the level of the three most recent presidents.
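The apples-to-apples comparison in the quote is simple arithmetic, and it's easy to check for yourself. A quick sketch using the totals and months in office cited above (Obama's and Bush's per-month figures in the quote were computed from exact removal counts, so the rounded totals here yield slightly different averages, but the ordering is the same):

```python
# Cumulative removals and months in office, as given in the quoted passage.
removals = {"Obama": 1_400_000, "Bush": 2_000_000, "Clinton": 869_676}
months = {"Obama": 42, "Bush": 96, "Clinton": 96}

# Per-month deportation rate for each president.
monthly = {pres: removals[pres] / months[pres] for pres in removals}

# Obama's monthly average exceeds Bush's even though his cumulative
# total is lower, because Bush's total covers a full eight years.
```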
* Yes, I know. The right isn’t known for its logical consistency.
A follow-up from Monday’s discussion of productivity: Publishing too much is definitely a first world problem. In fact, it is so remarkably rare that in 10 years as an IU faculty member I have seen only one job applicant penalized for publishing too much. Normally, people are penalized for (a) not publishing, (b) publishing the “wrong stuff” (edited volumes vs. journal articles) or (c) not publishing in elite journals.
But once in a while, some people do publish too much. Why?
- If you are in an elite program, you *only* get credit for either top general journals or top field journals. So volume distracts you from getting the “right” hit.
- “Scatter”: Some programs want faculty to have a “coherent” publication output.
- Dilution: Some programs want a small number of high impact pieces.
- Credit: Sometimes a large volume requires many co-authors, which makes it look like you didn’t contribute much.
So think about it: How many of you are tenure track in top 5 programs? Or work in fields where you are expected to have one or two big impact pieces? Didn’t think so. In most cases, volume is not an issue, as long as it is peer reviewed and is of overall good quality.
A few months ago, Neal Caren posted a citation analysis of sociology journals. The idea is simple – you can map sociology by looking at clusters of citations. Pretty cool, right? You know what’s cooler – using the same technique you can come up with a new ranking of soc programs. The method is simple:
- Start with a cluster analysis of journal cites. Stick to the last five years or so.
- Within each cluster, award a department credit for each article that makes, say, the top 20 in that cluster. Exclude dead or retired authors. Exclude authors who have moved to a new campus.
- Weight the credit by co-authorship – but keep track of where they teach. E.g., Princeton sociology gets 1/2 for DiMaggio and Powell (1983). Stanford soc does NOT get credit because Woody Powell teaches in the education school. Courtesy appointments do not count.
- You can then rank within a cluster (e.g., top 5 institutions/movements depts) or create an overall ranking based on adding up scores in all clusters.
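The steps above can be sketched in a few lines of code. This is a toy version under stated assumptions: the article records, citation counts, and department names below are all hypothetical, and a real implementation would start from Neal Caren's actual cluster data. Note how the fractional-credit rule handles the DiMaggio and Powell example: the sociology department gets 1/2, and the non-sociology affiliation gets nothing.

```python
from collections import defaultdict

# Hypothetical article records: (cluster, citation count, author affiliations).
# All names and numbers are illustrative, not real data.
articles = [
    ("orgs", 120, ["Princeton", "Stanford Education"]),  # two-author piece
    ("orgs", 90, ["Indiana"]),
    ("culture", 80, ["Indiana", "Berkeley"]),
    ("culture", 60, ["Berkeley"]),
]

def rank_departments(articles, top_n=2):
    """Score departments: fractional credit for each top-cited article per cluster."""
    by_cluster = defaultdict(list)
    for cluster, cites, depts in articles:
        by_cluster[cluster].append((cites, depts))

    scores = defaultdict(float)
    for items in by_cluster.values():
        items.sort(key=lambda pair: pair[0], reverse=True)  # most-cited first
        for cites, depts in items[:top_n]:  # only the top-N articles count
            for dept in depts:
                # Only sociology departments get credit: an education-school
                # affiliation earns nothing, per the rules above.
                if "Education" in dept:
                    continue
                # Fractional co-author credit: 1/k for a k-author article.
                scores[dept] += 1.0 / len(depts)
    return dict(scores)
```

Robustness checks are then trivial: to drop fractional weighting, replace `1.0 / len(depts)` with `1.0` and rerun.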
Disadvantages: This method excludes cites in books. For example, most of the cites to my Black power book are by historians, who mainly write books. This also points to another problem. It emphasizes in-discipline cites. So, if you do education research, and they love you in the AERJ, this won’t pick you up. Another issue is that if your citations are spread thinly across clusters, you may not make the top of any single cluster, so your work effectively doesn’t count.
Advantages: Based on behavior and not susceptible to halo effects because it is not a reputation survey. Also, it’s a measure of what people think is important, not what gets into specific journals. However, we would expect the typical highly central person in the cluster to appear because of a well cited article in a top journal. Another advantage is the transparency. No bizarre formulas, aside from standard network measures. Finally, it is easy to measure robustness. For example, if you think that fractional weighting for co-authors is misleading, it’s easy to drop and redo the analysis in a way that you think is correct.
Next step: Neal Caren should set up a wiki where we can quickly execute this and replace the misguided NRC/US News rankings.