Archive for the ‘fabio’ Category
One of the most frustrating aspects of social science reviewing is the slow review time. Gabriel Rossman says that we are the problem. Rather than focus on what can be easily fixed or provide up or down decisions, reviewers take too long, offer contradictory recommendations, and encourage bloated papers. If I were to summarize Gabriel’s post, I’d say that:
- Keep your review short. Don't write that 6 page single space commentary. One page or so is probably enough in most cases.
- Don’t whine about what the authors should have written about. Evaluate what they actually wrote about.
- Be decisive. Yes or no.
- Don’t ask for endless citations, commentaries, extra analyses, etc.
- All suggestions should be constructive, not busy work.
- Let it go: after a while, it becomes counterproductive. If you hate it, just say so. If you like it, just say so. No more revisions. It's done.
I also like Gabriel’s suggestion that reviewers should show some spine. In the summer, I was asked to review a 3rd R&R. My entire response was “Dude, seriously? Three R&R’s? Just accept it.” Result: paper accepted.
Desperate for a workshop speaker? Send me an email. My topics:
- The politics of the antiwar movement after 9/11
- Black Power/Black Studies
- More tweets, more votes: social media as a measurement of public opinion
- Knowledge and practice in infection control – new project on the organizational behavior of hospitals
I’ll do it for free if I can drive there. If you pick up transportation costs, I come cheap. If anyone in NYC wants me to visit in Mar/April/May, I will work for tips.
PS. I have two topics for grad student groups: grad skool rulz and public sociology. Undergrads may enjoy a discussion of my manuscript in progress on social theory.
I was recently listening to the podcast Bad at Sports, which covers the contemporary art world. This episode is a long interview with dealer, writer, and provocateur Matt Gleason. A lot of good stuff, but this caught my ear. Gleason claims that one of the major reasons that Jeffrey Deitch was disruptive as director of LAMOCA was that he pursued “post-curator art.” What does that mean? My translation:
Over the last 50 years, the art world has institutionalized. Museums are run by professionals, artists get MFAs, and the art market is centralizing around art fairs. What was so disruptive about Deitch was that he rejected the institutionalization of the curator – the people who pick art, stage exhibitions, and manage collections.
In other words, in a world of professionalization, Deitch said: “Screw it, my kid can do this.” And he did it. Deitch fired one of the main curators, had celebrities do shows, and curated many shows himself. Very “post.”
I once asked an art professional what he learned from interacting with Deitch, and he said something like, “I learned that you can hand over an art gallery to teenagers and it’ll work.” A metaphor perhaps, but it captures the spirit. People with degrees don’t have a monopoly on good taste. Gleason notes that this is self-serving. A museum with poor finances, like LAMOCA, might not have the cash for carefully curated shows, and it would be easy to have some SoCal celebrity show work. But still, the comment is telling. The art world has institutionalized, but it rests on jello foundations.
The editor of Social Problems, Becky Pettit, recently posted a review of submission practices and trends, with a focus on gender. Comments,* in no particular order:
- 8% accept? Holy cannoli! I knew it was competitive, but that’s in the realm of ASR/AJS. ASR’s accept rate was 6%; AJS accepts 10%.
- Thankfully, SP does a lot of desk rejection.** About 30%.
- Even with desk rejection, it does seem to take a while – a mean time of 135 days. That’s about 4.5 months. So many papers take 5, 6, or 7 months. After dealing with the lightning-fast world of biomedical journals, this is snail-like.
- Senior profs review less than juniors. Female assistants review the most.
- Men are *way* more likely to appeal. As Phil Cohen notes, it would be good to know if it’s just that women have more accepts or if men just whine more. I.e., we want the appeal/reject ratio.
Bottom line: Social Problems is a de facto top general journal in soc, it behaves like a typical social science journal in terms of turnaround and some other factors, and there is definitely gender inequality in reviewer and author behavior.
* Disclosure: I have a soon to be rejected paper under review at Social Problems.
** Yes, I know – “deflection!”
A lot of sociologists buy into the theory of “sponsored mobility,” which means that elites pick who gets the mobility. So I think there should be a lot of sympathy for recent research showing that mentorship (communicating with more advanced people) does not have an effect on career advancement but sponsors (people who pick you, push you, and get benefit from it) do have an effect. Robin Hanson reviews a book by economist Sylvia Ann Hewlett that makes this claim:
In a new book, economist Sylvia Ann Hewlett uses data to show that mentorship, in its classic wise-elder-advises-younger-employee form, doesn’t produce statistically significant career gains. What does, however, her research found, is something she has termed “sponsorship”—a type of strategic workplace partnering between those with potential and those with power. …
And there is an important implication for the study of gender and inequality:
Women are only half as likely as men to have a sponsor—a senior champion at work who will basically take a bet on them, tap them on the shoulder, and really give them a shot at leadership. Women have always had mentors, friendly figures who give lots of advice. They’re great. They’re good for your self-esteem; they’re good for your personal development. But no one’s ever been able to show that they do anything to help you actually move up. …
We find that women in particular often choose the wrong people. … They seek out a senior person they’re very comfortable with. … For a sponsor, you should go after the person with power, because you need someone who has a voice at those decision-making tables. You need to respect that person, you need to believe that person is a fabulous leader and going places, but you don’t need to like them. You don’t need to want to emulate them.
If true, this forces me to modify my views. I have always believed that sponsored mobility is important in academia, but I believe that mentorship matters as well. If Hewlett is right, my belief is misplaced. It’s really about sponsored mobility. So, if you care about women or minorities advancing in some career track (like academia), then forget the nice lunches. Administrators should double down on matching people with power players. A bit rude, but it might be one concrete way to chip away at inequality in the leadership of the academy.
Dissertation topic for up and coming orgheads: Facebook’s complete dominance over the field of friendship based social networking creates an interesting opportunity for the study of organizational identity. Usually, when a firm comes to completely rule an industry, a few firms pick up the scraps and the rest just go under.
But there is another, less explored path. Losers can change their identity. Social networking is a great example. Friendster just gave up its original business model and is now marketed as a gaming web site. MySpace also abandoned its role as a serious player in social networking and reverted to its original goal of serving musicians that reach out to their fans.
Here are some questions I would ask: 1. What % of loser firms change identity? 2. What conditions enable identity change in firms? 3. What conditions enable successful identity change, in the sense that the firm now accomplishes its goal because of its new identity? My hunch is that corporate culture is going to be a big factor. To pull this off, you’ll need a group of people who can be managed in a way that they won’t bail on the org as it redefines itself, and management that won’t just sell the firm for spare parts rather than find a new home for it. Please use the comments to prove/disprove the hypothesis.
A few weeks ago, we all laughed when MIT was praised for its well known (but nonexistent) sociology department. But a serious question went unasked: why doesn’t MIT have a degree granting sociology unit? At first, you think the answer is obvious. MIT is an engineering and science school. We shouldn’t expect it to offer any sociology aside from a few courses for general education of engineering students.
But hold on! MIT offers lots of non-STEM degrees. For example, it has a highly regarded business school and an architecture school. Ok, you say, maybe it’ll offer nuts and bolts professional programs that are closely allied with STEM fields. Yet, that argument doesn’t hold water. MIT also allows students to major and/or concentrate in music. It’s also got well known PhD programs in humanities fields like philosophy, social sciences like political science and economics, and a sort of catch-all program that combines history, anthropology, and science studies. Heck, you can even get the ultimate fluffy major – creative writing.
It’s even more baffling when you realize that it is amazingly easy to create a BS or PhD degree focusing on the quantitative side of sociology (e.g., applied regression, networks, demography, stochastic process models, soc psych/experimental, survey analysis, simulation/agent based models, rational choice/game theory, etc.)
My hypothesis is that the typical MIT faculty member or alumnus relies on the reputation of sociology, not on what the field is actually about. Like a lot of folks, they write the field off as a hopeless quagmire of post-modernism, even though, ironically, most sociologists are not post-modernists. The reality is that the field is a fairly traditional positivist scholarly area with normal, cumulative research. Even qualitative research is often presented in ways that most normal-science types would recognize. It’s really too bad. Sociology could use a healthy dose of ideas from the hard sciences, and MIT could be the place where that could happen.
The devastation is massive in the Philippines. What organizations need is money, so aid workers can be paid and supplies moved to the disaster area. Buzzfeed has a list of reputable organizations that are collecting funds. Thanks.
From my guy in Ann Arbor:
2014 Junior Theorists Symposium
15 August 2014
SUBMISSION DEADLINE: 15 FEBRUARY 2014
We invite submissions for extended abstracts for the 8th Junior Theorists Symposium (JTS), to be held in Berkeley, CA on August 15th, 2014, the day before the annual meeting of the American Sociological Association (ASA). The JTS is a one-day conference featuring the work of up-and-coming theorists, sponsored in part by the Theory Section of the ASA. Since 2005, the conference has brought together early career-stage sociologists who engage in theoretical work.
We are pleased to announce that Marion Fourcade (University of California – Berkeley), Saskia Sassen (Columbia University), and George Steinmetz (University of Michigan) will serve as discussants for this year’s symposium.
In addition, we are pleased to announce an after-panel on “The Boundaries of Theory” featuring Stefan Bargheer (UCLA), Claudio Benzecry (University of Connecticut), Margaret Frye (Harvard University), Julian Go (Boston University), and Rhacel Parreñas (USC). The panel will examine such questions as what comprises sociological theory, and what differentiates “empirical” from “theoretical” work.
We invite all ABD graduate students, postdocs, and assistant professors who received their PhDs from 2010 onwards to submit a three-page précis (800-1000 words). The précis should include the key theoretical contribution of the paper and a general outline of the argument. Be sure also to include (i) a paper title, (ii) author’s name, title and contact information, and (iii) three or more descriptive keywords. As in previous years, in order to encourage a wide range of submissions, we do not have a pre-specified theme for the conference. Instead, papers will be grouped into sessions based on emergent themes.
Please send submissions to the organizers, Daniel Hirschman (University of Michigan) and Jordanna Matlon (Institute for Advanced Study in Toulouse), at firstname.lastname@example.org with the phrase “JTS submission” in the subject line. The deadline is February 15, 2014. We will extend up to 12 invitations to present by March 15. Please plan to share a full paper by July 21, 2014.
A group of sex workers in New York City has openly criticized Sudhir Venkatesh’s recent ethnography of New York sex workers. There are many criticisms, but one stands out for me. An article from the Museum of Sex blog relates how SWOP-NYC and SWANK, two sex worker groups, thought that Venkatesh’s work increased the risk to prostitutes by reporting that clients could opt out of condoms for a 25% surcharge:
His conclusions, for example about large numbers of sex workers advertising on Facebook, were easily shown by other researchers and commentators to be incorrect. Other conclusions, such as the fiction that “there’s usually a 25% surcharge” to have sex without a condom, not only bore no relationship to reality but also endangered sex workers and public health programs working with them.
We were so concerned by what we uncovered that in October 2011 we wrote a letter to the Columbia University Institutional Review Board (IRB) and to the Sociology Department asking for some clarity about Sudhir Venkatesh’s research. Specifically, we asked for the research project titles, dates of research, and IRB approval numbers for each of the years he claimed to have conducted research while at Columbia University. We also wished to make Columbia University’s IRB and the Sociology Department aware that the research appeared to create additional harms and risks for sex workers in the New York area. Our action is an example of the degree to which communities of sex workers have organized and the degree to which we will question research that we find harmful. We are no longer a “gift that keeps on giving” for Venkatesh, we are a community that speaks for itself.
For me, the IRB issue sticks out for a legalistic reason. How exactly does a third party appeal to an IRB? It’s obvious if the aggrieved person is a research subject. But what about third parties? Let’s say that SWOP & SWANK are correct that this book/article increases risk; what responsibility (if any) does an IRB have?
The issue is unclear because IRBs themselves are muddled institutions. They don’t operate through statute or contract. They are ad hoc administrative units set up by universities to make sure research complies with federal guidelines. At most, they can interfere in research if you cross them. But they aren’t penal institutions – there’s no IRB police. There’s no “human subjects 9-1-1.” Even though I am sympathetic to the claim that ethnographic publications may endanger at-risk groups, it is unclear to me how third parties may leverage genuine concern into an actionable complaint.
In my graduate seminar, we had a really good discussion about the ever changing health care industry. The one issue that appeared was the disappearance of the “house call” – doctors who visit you at home when you are sick. I don’t know the history of why this practice disappeared in healthcare, but I do know that it is very hard to bring it back. There are occasionally stories here and there of physicians who try to revive the practice, yet, it doesn’t stick.
A few hypotheses about the continued absence of house calls:
- Physician resistance: you can pack in more patients at the office to increase income.
- Physician culture: The ideal of personally cultivating close relationships in this way is no longer common.
- Regulation: Medicare and insurance make it hard to spend the extra time to visit people. You simply need to stay in the office.
- Technology: You need to go to an office for the equipment.
- Status: As physicians shifted from low status to high status during the 20th century, the patients had to go to them.
One of the biggest differences in graduate training is that most quantitative social scientists learn OLS, while economists learn OLS and time series in their basic course sequence. Why is that?
In a nutshell, economists have really good time series data, while most social scientists have very boring time series data. For example, economists look at stock prices, market indices, and other measures of economic performance. These variables can display a great deal of volatility. There is something to be explained. In contrast, a lot of time series in sociology and political science is highly path dependent. For example, party ID is very stable over the life course. Marital status changes infrequently. Public opinion usually moves real, real slow. When we do get data that shows real variance over time, like career data, it doesn’t come on a nice scale – it’s messy.
Bottom line: the methods follow the data, as it should be!
Control Point Group, a political consultancy firm, asked my opinion on a recent Pew study of public opinion and twitter. I’ll quote Politico reporter Dylan Byers, who summarized the Pew study:
Sixteen percent of U.S. adults use Twitter and just half that many use it as a news source, making it an unreliable proxy for public opinion, according to a new survey from the Pew Research Center and the John S. and James L. Knight Foundation.
Take last year’s Republican primary, for example: “During the 2012 presidential race, Republican candidate Ron Paul easily won the Twitter primary — 55% of the conversation about him was positive, with only 15% negative,” Pew writes. “Voters rendered a very different verdict.”
Or the Newtown school shooting: “After the Newtown tragedy, 64% of the Twitter conversation supported stricter gun controls, while 21% opposed them. A Pew Research Center survey in the same period produced a far more mixed verdict, with 49% saying it is more important to control gun ownership and 42% saying it is more important to protect gun rights.”
That’s worth keeping in mind next time you see the reaction-on-Twitter piece in the wake of any major national news event. However, Twitter may be a more reliable indicator of youth sentiment.
This is a subtle point. Pew is doing what computer scientists call a sentiment analysis. Roughly speaking, you write a program that guesses whether some text, in this case a tweet, reflects a positive or negative sentiment. The literature (including the Pew study cited) shows very mixed results. The take-away point for me is that either sentiment is tricky to measure properly or the emotional content of text doesn’t correlate well with behaviors that we care about.
In contrast, our research (and that done by others) shows that relative shares of mentions, regardless of sentiment, do show a positive correlation with some political behaviors, like voting. My hypothesis is that the relative volume of talk is simply a proxy for buzz, name recognition, popularity, or some other variable. Regardless, the correlation is there.
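The contrast between sentiment scoring and mention counting can be sketched in a few lines. This is a toy illustration, not Pew's actual method: the lexicon, candidate names, and tweets below are invented, and real sentiment analysis typically uses trained classifiers rather than word lists.

```python
# Toy contrast: lexicon-based sentiment vs. raw mention share.
# The word lists and tweets are invented for illustration only.

POSITIVE = {"great", "love", "win"}
NEGATIVE = {"bad", "hate", "lose"}

def sentiment(tweet):
    """Score a tweet as +1, -1, or 0 by counting lexicon hits."""
    words = tweet.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return (score > 0) - (score < 0)

def mention_share(tweets, candidate, rival):
    """Fraction of candidate-or-rival mentions that go to the candidate."""
    c = sum(candidate in t.lower() for t in tweets)
    r = sum(rival in t.lower() for t in tweets)
    return c / (c + r) if (c + r) else 0.0

tweets = [
    "I love candidate Smith, great speech",
    "Smith will lose, bad plan",
    "Jones? hate the guy",
    "Smith Smith Smith",  # carries mention signal with no sentiment words
]
scores = [sentiment(t) for t in tweets]          # per-tweet sentiment guesses
share = mention_share(tweets, "smith", "jones")  # relative volume of talk
```

The point of the sketch: `share` picks up relative buzz even from tweets the sentiment scorer reads as neutral or negative, which is one way mention volume can track voting while sentiment does not.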
At last week’s PLEAD conference on social media and political processes, Alex Hanna tweeted a summary of a talk by Mark Huberty of UC Berkeley political science, which raised some questions about using social media data to forecast electoral results. Alex suggested that we could have a good discussion about Mark’s talk. In these comments, I rely on Alex’s summary. If I mis-characterized a point, please email me or correct me in the comments.
1. Huberty noted, correctly, that incumbency highly correlates with electoral wins. The implication is that social media data is not valuable, or important, or accurate, because incumbency accounts for a lot of the variance in electoral outcomes.
Well, it depends on what your goals are. If you are making a claim that “A causes B,” then finding out that C accounts for much of the variance is extremely important. It shows that A isn’t causing B. However, if your claim is that “A is a decent measurement of B,” then finding out that C is a strong correlate of B is simply irrelevant. The claim isn’t about what is some fundamental cause of B, just what tracks with B.
Different claim, different standard of proof. That’s why we care about polls. Incumbency predicts elections better than polls, but as long as we don’t claim that polls cause election outcomes, we remain satisfied with the well documented correlation between voter surveys and final votes.
Also, incumbency is not a reasonable variable to benchmark against because incumbency is simply a word for “the person who won last time in the same election with a very similar group of voters.” As good social scientists know, a lot of human behavior is seriously auto-correlated. What I ate yesterday is the best predictor of what I’ll eat tomorrow. Politics is no different.
Thus, in a lot of social science, we aren’t interested in these sorts of time series because we know that answer already. X_t is almost certainly strongly correlated with X_t-1. The interesting question is why the time series is X_1, X_2, … and not Y_1, Y_2, …. Similarly, we might be interested in “extracting a signal” from some new source of data to help us measure X_t, or in building a causal explanation that doesn’t fall back on trivial auto-correlated time series explanations. In other words, “The guy is an incumbent because there are a lot of Black voters” is a much more meaningful statement than “The guy won this time because he won last time.”
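The auto-correlation point is easy to see with simulated data. Here is a hedged sketch: a random walk (strongly path dependent, like incumbency or party ID) versus white noise, comparing lag-1 correlations. The series are simulated, not real political data.

```python
import random

def lag1_corr(xs):
    """Pearson correlation between a series and its one-period lag."""
    a, b = xs[:-1], xs[1:]
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (sa * sb)

random.seed(0)

# Random walk: X_t = X_{t-1} + noise, so yesterday predicts today very well.
walk = []
total = 0.0
for _ in range(5000):
    total += random.gauss(0, 1)
    walk.append(total)

# White noise: no memory at all.
noise = [random.gauss(0, 1) for _ in range(5000)]

r_walk = lag1_corr(walk)    # close to 1
r_noise = lag1_corr(noise)  # close to 0
```

Knowing that `r_walk` is near 1 tells you almost nothing interesting about the walk; the substantive question is what generates its level in the first place. That is the sense in which "incumbency predicts winning" is a near-tautology.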
That is ultimately why I remain interested in social media and electoral outcomes. Social media is a record of what people think that is different from polls and traditional print or broadcast media. It deserves a serious examination as a signal. And given the work by Huberty himself, Tumasjan, Jungherr, Beauchamp, the Indiana group, and others, the “social media as measurement of political sentiment” hypothesis is important and, as far as I can tell, supported to varying degrees by the Twitter data. Incumbency is a non-issue as long as researchers and political professionals avoid claims of causation.
2. Alex also indicated that Mark Huberty was concerned about how social media data is created. Here, I also agree. Transparency is important. All data is imperfect – people lie on polls, surveys have selection biases, etc. There is a discussion about the properties of the samples that Twitter produces for researchers that might lead one to think that there might be an issue. The more we know about the way social media samples are generated, the better.
Still, the issue is *how much* of a problem this is. On this point, I urge Mr. Huberty to be bluntly empirical. The blunt empiricist, I would argue, would just put it to the test. The empiricist would look for natural experiments in the data (transparent data vs. others) or well-chosen comparisons to see how much it affects the social media–vote correlation. Rather than point to possible problems, research would actually identify them. It might not matter, or it might be a big deal. Let’s figure it out!
As you well know, I think the PhD program is a terrible choice for most students. Quite simply, the PhD program is risky (only 50% completion rate), costly (5+ years), and many disciplines have poor job prospects (e.g., most of the humanities, many biological sciences and many social sciences). Furthermore, a lot of students think it is a credential that is needed for non-academic jobs, which is not generally true.
But still, maybe you weren’t fazed by the “don’t go to grad school” speech. Maybe you really have a passion for teaching, or interpreting Foucault. Or maybe you simply don’t care about the negatives associated with academic careers. I welcome you to academia. I pity you as well.
So, then, what sort of PhD should you get? Here’s an argument for the sociology Ph.D.:
- Low barrier to entry – you just need a solid academic record, not extended training in math, foreign language, or other rare skills.
- You learn solid research skills like survey design, regression models, and interview technique that have non-academic labor market value.
- You can study a wide range of topics and do so almost immediately. No need to engage in endless post-docs.
- Policy relevance.
- Decent academic job prospects compared to most other fields. The sociology market is tight, but soc PhDs frequently get jobs in lots of other programs like education, business, policy, social work, and occasionally in adjacent areas like American studies, ethnic studies, political science, and anthropology.
- Broadly defined topic – if you have a real passion for a topic that is genuinely social in some way, you can probably find a way to write a dissertation on it.
The one big downside is that sociology programs adhere to the humanities model of long time to PhD. There is no need for this. If you focus on a dissertation topic early on, choose your dissertation chair wisely, and insist on getting published at least once, there is no need for your degree to take longer than 4 or 5 years.
A recent Washington Post op-ed describes recent research showing that interviews are poor predictors of future job performance. The idea is old, but the results elaborate in new ways. From Daniel Willingham, a psychologist at the University of Virginia:
You do end up feeling as though you have a richer impression of the person than that gleaned from the stark facts on a resume. But there’s no evidence that interviews prompt better decisions (e.g., Huffcutt & Arthur, 1994).
A new study (Dana, Dawes, & Peterson, 2013) gives us some understanding of why.
The information on a resume is limited but mostly valuable: it reliably predicts future job performance. The information in an interview is abundant–too abundant actually. Some of it will have to be ignored. So the question is whether people ignore irrelevant information and pick out the useful. The hypothesis that they don’t is called dilution. The useful information is diluted by noise.
Dana and colleagues also examined a second possible mechanism. Given people’s general propensity for sense-making, they thought that interviewers might have a tendency to try to weave all information into a coherent story, rather than to discard what was quirky or incoherent.
Three experiments supported both hypothesized mechanisms.
In other words, interviews encourage people to see patterns in the data where none exist. They also distract us with irrelevant information. Toss this in the file of “we have evidence it don’t work, but people will do it anyway.”
The Right will remember Obama as a Godless Muslim Socialist.* The Left will remember him as He Who Brought Us Healthcare. Overseas, he may come to be known as the Great Snoop, or perhaps, Death from Above. But there are many in this country who will remember Obama as El Deportador.
According to sociologist Tanya Golash-Boza, Obama has headed one of the most intense waves of deportation in the history of the United States and the onus heavily falls on non-whites. Golash-Boza describes this in a recent Houston Chronicle op-ed (ungated version here):
The deportation of legal permanent residents has hit black immigrants particularly hard. Using data from the Department of Homeland Security and the U.S. Census Bureau, I calculated that one of every 12 Jamaican and Dominican male legal permanent residents has been deported since 1996.
The United States currently detains upwards of 30,000 immigrants per day, much as it imprisoned more than 120,000 people of Japanese origin during World War II without trials or other court processes. The Department of Homeland Security has broad discretion to arrest and detain any person they suspect does not have the legal right to be in the United States. People held under such detention do not have the same rights and safeguards as criminal suspects. They do not have the right to a speedy hearing before a judge nor do they have the right to appointed counsel.
In 2012, more than 400,000 people were deported. Nearly 100,000 of them were parents of U.S. citizens. Tens of thousands of these children will grow up in the United States knowing that the U.S. government has taken away their right to grow up with one or both of their parents.
Numerous commentators note that Obama’s administration has deported more people in five years than were deported in all eight years of Bush II and more than all previous administrations going back to 1892 (!!):
According to current figures from Immigration and Customs Enforcement — the federal agency responsible for deportations — Obama has removed 1.4 million people during his 42 months in office so far. Technically, that’s fewer than under George W. Bush, whose cumulative total was 2 million. But Bush’s number covers eight full years, which doesn’t allow an apples-to-apples comparison.
If you instead compare the two presidents’ monthly averages, it works out to 32,886 for Obama and 20,964 for Bush, putting Obama clearly in the lead. Bill Clinton is far behind with 869,676 total and 9,059 per month. All previous occupants of the White House going back to 1892 fell well short of the level of the three most recent presidents.
* Yes, I know. The right isn’t known for its logical consistency.
A follow up from Monday’s discussion of productivity: Publishing too much is definitely a first world problem. In fact, it is so remarkably rare that in 10 years as an IU faculty member I have seen only one job applicant penalized for publishing too much. Normally, people are penalized for (a) not publishing, (b) publishing the “wrong stuff” (edited volumes vs. journal articles), or (c) not publishing in elite journals.
But once in a while, some people do publish too much. Why?
- If you are in an elite program, you *only* get credit for either top general journals or top field journals. So volume distracts you from getting the “right” hit.
- “Scatter”: Some programs want faculty to have a “coherent” publication output.
- Dilution: Some programs want a small number of high impact pieces.
- Credit: Sometimes a large volume requires many co-authors, which makes it look like you didn’t contribute much.
So think about it: How many of you are tenure track in top 5 programs? Or work in fields where you are expected to have one or two big impact pieces? Didn’t think so. In most cases, volume is not an issue, as long as it is peer reviewed and is of overall good quality.
A few months ago, Neal Caren posted a citation analysis of sociology journals. The idea is simple – you can map sociology by looking at clusters of citations. Pretty cool, right? You know what’s cooler – using the same technique you can come up with a new ranking of soc programs. The method is simple:
- Start with a cluster analysis of journal cites. Stick to the last five years or so.
- Within each cluster, award a department credit for each article that makes, say, the top 20 in that cluster. Exclude dead or retired authors. Exclude authors who have moved to a new campus.
- Weight the credit by co-authorship – but keep track of where they teach. E.g., Princeton sociology gets 1/2 for DiMaggio and Powell (1983). Stanford soc does NOT get credit because Woody Powell teaches in the education school. Courtesy appointments do not count.
- You can then rank within a cluster (e.g., top 5 institutions/movements depts) or create an overall ranking based on adding up scores in all clusters.
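As a sketch, the scoring scheme in the list above might look like this in code. The clusters, papers, and departments are invented placeholders; the real input would come from Caren's citation cluster analysis, and the `count_dept` rule is one possible way to encode the "no credit outside the soc department" restriction.

```python
from collections import defaultdict

# Hypothetical input: for each citation cluster, the top-cited papers and the
# current department of each living, active author (courtesy appts excluded).
top_papers = {
    "orgs": [
        {"authors": ["A", "B"], "depts": ["Princeton soc", "Stanford educ"]},
        {"authors": ["C"], "depts": ["Michigan soc"]},
    ],
    "culture": [
        {"authors": ["D", "E"], "depts": ["Princeton soc", "Princeton soc"]},
    ],
}

def score_departments(clusters, count_dept=lambda d: d.endswith("soc")):
    """Fractional co-author credit per department, within and across clusters.

    Authors housed outside a soc department (e.g., a business or education
    school) earn no credit, per the rules above.
    """
    per_cluster = {}
    overall = defaultdict(float)
    for cluster, papers in clusters.items():
        scores = defaultdict(float)
        for paper in papers:
            share = 1.0 / len(paper["authors"])  # split credit equally
            for dept in paper["depts"]:
                if count_dept(dept):
                    scores[dept] += share
                    overall[dept] += share
        per_cluster[cluster] = dict(scores)
    return per_cluster, dict(overall)

per_cluster, overall = score_departments(top_papers)
# Princeton soc earns 0.5 in "orgs" plus 0.5 + 0.5 in "culture" = 1.5 overall.
```

Robustness checks are then trivial: to drop fractional weighting, swap `share = 1.0 / len(paper["authors"])` for `share = 1.0` and rerun.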
Disadvantages: This method excludes cites in books. For example, most of the cites to my Black power book are by historians, who mainly write books. This also points to another problem. It emphasizes in-discipline cites. So, if you do education research, and they love you in the AERJ, this won’t pick you up. Another issue is that if you are spread around clusters, your count is ignored.
Advantages: Based on behavior and not susceptible to halo effects because it is not a reputation survey. Also, it’s a measure of what people think is important, not what gets into specific journals. However, we would expect the typical highly central person in the cluster to appear because of a well cited article in a top journal. Another advantage is the transparency. No bizarre formulas, aside from standard network measures. Finally, it is easy to measure robustness. For example, if you think that fractional weighting for co-authors is misleading, it’s easy to drop and redo the analysis in a way that you think is correct.
Next step: Neal Caren should set up a wiki where we can quickly execute this and replace the misguided NRC/US News rankings.
In my Washington Post column, I discussed the possibility that social media data might displace the traditional political poll. After writing the column, I thought that I might have gone overboard. But after reading some recent research, I realized that I am really onto something. Recent research shows that social media data, when modeled correctly, does provide very good measurements of public opinion trends.
Nick Beauchamp is a political scientist at Northeastern University. He has a new working paper called “Predicting and Interpolating State-level Polling using Twitter Textual Data.” This paper is the vital intermediate step between noticing that tweets correlate with votes and using social media data by itself to forecast elections. The abstract:
Presidential, gubernatorial, and senatorial elections all require state-level polling, but continuous real-time polling of every state during a campaign remains prohibitively expensive, and quite neglected for less competitive states. This paper employs a new dataset of over 500GB of politics-related Tweets from the final months of the 2012 presidential campaign to interpolate and predict state-level polling at the daily level. By modeling the correlations between existing state-level polls and the textual content of state-located Twitter data using a new combination of time-series cross-sectional methods plus bayesian shrinkage and model averaging, it is shown through forward-in-time out-of-sample testing that the textual content of Twitter data can predict changes in fully representative opinion polls with a precision currently unfeasible with existing polling data. This could potentially allow us to estimate polling not just in less-polled states, but in unpolled states, in sub-state regions, and even on time-scales shorter than a day, given the immense density of Twitter usage. Substantively, we can also examine the words most associated with changes in vote intention to discern the rich psychology and speech associated with a rapidly shifting national campaign.
In other words, if you do some sensible model fits and combine with content analysis, social media time series mimic the trends produced by polls. The next step is obvious: combine election results and social media data, model the error, and if the results are reasonable, you will no longer need big polls.
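The core idea of regressing a poll series on text features with shrinkage, then testing forward in time, can be illustrated with a toy example. This is not Beauchamp's actual model; it is a minimal ridge-regression sketch on simulated data, where the "word frequencies" and "poll" series are random numbers I generated for illustration.

```python
import numpy as np

# Simulate a toy problem: daily "poll" numbers driven by a few word frequencies.
rng = np.random.default_rng(0)
n_days, n_words = 60, 5
X = rng.normal(size=(n_days, n_words))                   # word-frequency features
beta_true = np.array([1.0, -0.5, 0.0, 0.0, 0.3])
y = X @ beta_true + rng.normal(scale=0.1, size=n_days)   # noisy "poll" series

train = slice(0, 50)   # fit on the past...
test = slice(50, 60)   # ...then predict forward in time, out of sample

# Ridge regression: the closed-form solution with an L2 penalty, which is the
# simplest stand-in for the paper's Bayesian shrinkage.
lam = 1.0              # shrinkage strength (tightness of the prior)
XtX = X[train].T @ X[train]
beta_hat = np.linalg.solve(XtX + lam * np.eye(n_words), X[train].T @ y[train])

pred = X[test] @ beta_hat
rmse = float(np.sqrt(np.mean((pred - y[test]) ** 2)))
print("out-of-sample RMSE:", round(rmse, 3))
```

The forward-in-time split is the important discipline here: the model never sees the days it is asked to predict, which is what separates genuine forecasting from curve-fitting.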
A couple of weeks ago, Brayden commented on an essay by David Courpasson, which lamented the “culture of productivity.” The idea was that we often put too much emphasis on the production of articles, rather than the cultivation of ideas. At one level, I completely agree. The goal is to produce quality ideas. We aren’t paid by the word.
At another level, I am not terribly moved by Professor Courpasson’s essay. The complaint falls under the category of “first world problems.” The main problem, for most graduate students and faculty, isn’t that they are sucked up by an evil “culture of productivity.” The modal problem is that they aren’t producing anything at all. The underproduction of articles is highly correlated with not getting a job and not getting promoted. It is also a problem from a policy perspective. When we invest in students and faculty, we want them to be able to produce competent science, which is usually expressed in occasional publication.
But Professor Courpasson does have some important points that merit a response. One is that publication is adversarial, instead of cooperative. It’s about beating reviewers at some game. Here, I can only agree and add that the adversarial nature of reviews stems from limited resources. If ASQ, for example, will only publish the top 10% of papers, then the reviewers just need some excuse to “knock down” some good papers. If you want the recognition and rewards of the profession, then you need to master the game. Though I have never chosen a research topic to win some “game,” I openly admit that papers are written in sub-optimal (and often lamentable) ways just to avoid what I think are reviewer cheap shots. If Professor Courpasson wishes to avoid this game, I recommend that he closely follow two new(er) journals, PLOS ONE and Sociological Science. The former journal will publish all articles that follow scientific standards. The latter gives a simple “yes/no” decision, so there are no games with endless rounds of reviewers. Both formats reduce the “game” aspect of publishing.
Courpasson also complains about the lack of scholarship on power and related topics. And, I’m like, “DUDE!!! READ MY ARTICLE ABOUT POWER!!!! C’MON, BRO, PUMP UP MY CITES!!!!!” I’d also add that the reason that these topics are in retreat is that it is easy for reviewers to knock them down. For example, there is a very standard format for articles on, say, diffusion of innovation. But there is no standard for articles on power. Thus, it is harder to knock down a paper in the first genre. The “big” ideas that Professor Courpasson likes often generate controversy, making it easy for a reviewer to write hand-wringing reviews about the paper’s many supposed problems. So little ideas become easy to publish. Big ideas are left for the elder leaders of the profession.
Finally, I’ll address a related issue – over publication and the volume of research. Personally, I don’t think this is a real problem. While there are a few great scholars who publish very little (Coase or Hirschman), most successful scholars tend to write a lot. Keith Sawyer’s book on creativity reports, for example, that in studies of novelists, famous authors wrote way more novels than authors from a random sample. Most of these novels weren’t great, but Sawyer makes the sensible observation that maybe you just need a lot of practice to write a great novel. Maybe better writers just generate more ideas. Regardless, this suggests that we should be tolerant of volume. I do have sympathy for Professor Courpasson, though, since I’ve worked on journals. Big volume means a lot of work.
Through luck, and the gritty determination of our program leadership, Indiana has become a center for the sociology of Asia and Asian America. We now have five (!) faculty who work in this area:
- Jennifer Lee – Asian American migration/adolescence/schooling
- Dina Okamoto (starts in Spring) – Asian American politics
- Weihua An – networks/youth in China
- Keera Allendorf – demography/family in India
- Ethan Michelson – law & society in China
If you’re interested in Asian or Asian American studies, you should check us out.
When you advocate Open Borders, you quickly run into a lot of opposition. Much of it is made in good faith. People do sincerely believe that society is a zero sum game. Let more people in, and someone has to suffer. I think this view is in error, but it doesn’t strike me as particularly pernicious, and one can have a logical discussion about it.
However, there’s one type of immigration restrictionist that does not argue in good faith – the white nationalist. I encountered them when I set up the Open Borders contest website. People quickly signed up for the website and started posting racially charged images and comments. The tenor of discussion quickly went downhill – fake accounts and hysterical comment threads.
My interaction with these folks reminds me of a key argument in modern race and ethnicity scholarship. For many people, citizenship isn’t a legal category. It’s a social and racial category. “Real” Americans are white, for the most part. Non-whites, by definition, aren’t real citizens and don’t deserve the right to be part of the American community.
The blow up on the Open Borders logo contest site illustrates this well. A lot of restrictionists started posting images of white people killing themselves. Another popular image showed non-whites “invading” America; migration is “surrender.” The message is clear: racial mixing, via unrestricted migration, leads to racial death.
The deeper lesson for Open Borders advocates is that migration taps into both genuine policy issues and emotionally charged racial attitudes. On policy, I feel confident. Migration will not bankrupt us, or lead us into chaos. On emotions, open borders activists should try to bring racial attitudes out in the open so people can see that opposition to migration is often rooted in less than honorable motives.
A question for my brothers and sisters in political theory: There are certain individuals who embody the role of the activist/intellectual. They are highly influential in movement politics and they write or speak about movements in a very theoretical way, offering justification for the movement’s goals and strategies. For socialist politics, this role is filled by Lenin, who provided an explanation of what the communist party is supposed to do. For non-violence, King and Gandhi fill this role.
My question: Is there an analog for mass political parties in democratic societies? In other words, who is the master politician who articulates the purpose and function of the party in modern democracies? Does this person talk about how the party should manage/exploit various constituencies, especially rowdy ones like protest movements?
The Open Borders website is hosting a logo contest. Here’s the link to the finalists. Take a look and tell us what you think. The prize, $200, goes to the person whose logo embodies the right of free movement. We favor entries that can easily be reproduced on signs, posters, websites, and other materials.
In my undergraduate social network class, I tried to explain how social network analysis could be used to identify a certain “type” of person. I often use high schools as an example. One could ask students to identify friends and then use that data to map groups, cliques, and the like. At one point in the discussion, I then said, “for example, we could use network data to discover the most popular people, the MEAN GIRLS.” I then asked, “how would we discover mean girls?”
In our discussion, I think we settled on the following:
- Mean girls would have high centrality scores.
- With asymmetrical friendship network data, mean girls would not reciprocate.
- If people rated the content of the network tie, mean girls would receive a lot of positives but send out negatives.
- Mean girls would cluster, or have structurally equivalent roles.
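Two of the criteria above – high centrality and non-reciprocated ties – can be combined into a quick sketch. This is a toy example with an invented friendship network; the names and the scoring rule are my own illustration, not a validated measure.

```python
# A toy directed friendship network: edge (a, b) means "a names b a friend."
# All names here are made up for illustration.
edges = [
    ("amy", "regina"), ("beth", "regina"), ("cady", "regina"),
    ("dana", "regina"), ("regina", "gretchen"),
    ("amy", "beth"), ("beth", "amy"),      # a reciprocated tie
]

def mean_girl_scores(edges):
    """High in-degree plus low reciprocation = 'mean girl' candidate."""
    nodes = {n for e in edges for n in e}
    edge_set = set(edges)
    scores = {}
    for n in nodes:
        incoming = [e for e in edge_set if e[1] == n]
        if not incoming:
            continue  # nobody names this person; skip
        reciprocated = sum(1 for (a, b) in incoming if (b, a) in edge_set)
        # Many incoming nominations, few returned, yields a high score.
        scores[n] = len(incoming) * (1 - reciprocated / len(incoming))
    return max(scores, key=scores.get)

print(mean_girl_scores(edges))  # the node nominated often but rarely reciprocating
```

With real classroom data one would also check the clustering criterion, e.g., whether the high scorers occupy structurally equivalent positions.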
A student asked, “Fabio, were you a mean girl in high school?”
I said, “probably not, I was very shy and I rarely taunted kids or got in fights. In some ways, though, I am a mean nerd.”
The student responded, “Fabio, you are definitely a mean nerd. I read what you wrote about the critical realists.”