Archive for the ‘mere empirics’ Category
Hector Cordero-Guzman is a sociologist at CUNY who writes extensively on immigration, ethnicity, and related topics. In relation to our post on race agnosticism, Hector reminded me that he wrote a post on measuring race for the blog Latino Rebels. In the post, he describes his reaction to, and analysis of, the claim that Latinos were increasingly self-identifying as white. From the post:
A draft presentation at the Population Association of America (PAA) chronicled by a Pew Research senior writer was then picked up by Nate Cohn, writing for The New York Times’ “Upshot” blog. In the eyes of Cohn, his editor David Leonhardt and the Times, and based on a report that the scientific community has not seen or evaluated, Latinos were becoming “whiter.”
Surrounding all the controversy and discussion about reporting on research that was not available for inspection or review by other academics, two explanations to the tentative result from the unavailable census study have emerged: that the people changed (Cohn, Leonhardt and The Times) or that the census questions changed (Manuel Pastor in the HuffPost).
He follows with an analysis that can be summarized as:
A second possibility is that the context where the question is asked matters and that asking about race in Puerto Rico is different than asking the same population about their race in New York City. The question is not changing and the people are not changing—what is changing is the context, the reference point, the broader racial classification schema and categories that are used, how they are interpreted, their subjective meaning, and their social and sociological role.
Cohn further argues that the reported change in the answers given to the race question suggest Hispanic assimilation into the U.S. and into its racial classification schema. If anything, comparing data from Puerto Rico and Puerto Ricans in New York City suggests that mainland Puerto Ricans develop a sense of “otherness” as they come into closer contact with the U.S. racial classification regime. In fact, it would be interesting to compare the data from Puerto Rico with data from Puerto Ricans throughout the U.S. (not just New York City), those residing in various regions, as well as looking at the more recent arrivals to see if the categories they pick are different from Puerto Ricans that have been living on the mainland for a longer period of time.
In other words, study context acts as an important cue shaping how people interpret race questions on surveys. The whole post is highly recommended.
Earlier this week, Ann Morning of NYU sociology gave a talk at the Center for Research on Race and Ethnicity in Society. Her talk summarized her work on the meaning of race in varying scientific and educational contexts. In other words, rather than study what people think about other races (attitudes), she studies what people think race is. This is the topic of her book, The Nature of Race.
What she finds is that educated people hold widely varying views of race. Scientists, textbook writers, and college students seem to have completely independent views of what constitutes race. That by itself is a key finding, and it raises numerous other questions. Here, I'll focus on one aspect of the talk. Morning finds that experts do not agree on what race is. And by experts, she means Ph.D.-holding faculty in the biological and social sciences that study human variation (biology, sociology, and anthropology). This finding shouldn't be too surprising given how controversial the subject is.
What is interesting is the epistemic implication. Most educated people, including sociologists, have rather rigid views. Race is *obviously* a social convention, or race is *obviously* a well-defined population of people. Morning's finding suggests a third alternative: race agnosticism. In other words, if experts in human biology, genetics, and cultural studies themselves can't agree, and the disagreements cut within disciplines as well as across them (e.g., biologists disagree among themselves quite a bit), then maybe other people should just back off and admit they don't know.
This is not a comfortable position, since fights over the nature of human diversity are usually proxies for political fights. Admitting race agnosticism is an admission that you don't know what you're talking about – that your entire side in the argument doesn't know what it's talking about. However, it should be natural for a committed sociologist. Social groups are messy and ill-defined things. Statistical measures of clustering may suggest that the differences among people are clustered and nonrandom, but jumping from that observation to clearly defined groups is very hard in many cases. And even then, clustering doesn't yield the racial categories that people use to construct their social worlds based on visual traits, social norms, and learned behaviors. In such a situation, neither "vulgar" constructionism nor essentialism is up to the task. When the world is that complicated and messy, a measure of epistemic humility is in order.
Over the weekend, I got into an exchange with UMD management student Robert Vesco over the computer science/sociology syllabus I posted last week. The issue, I think, is that he was surprised that the course narrowly focused on topic modelling – extracting meaning from text. Robert thought that maybe there should be a different focus. He proposed an alternative – teaching computer science via simulations. Two reactions:
First, topic modelling may seem esoteric to computer scientists, but it lies at the heart of sociology. We have interviews, field notes, media – all kinds of text. And we can move beyond the current practice of having humans slowly hand-code the data, which is often unreliable. Also, text is "real data." You can easily link what you extract from a topic modelling exercise to traditional statistical analysis.
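To give a sense of the machinery involved, here is a minimal topic-modelling sketch of my own – a collapsed Gibbs sampler for LDA run on a hand-made toy corpus. Everything here (the documents, the parameter values) is invented for illustration, and a real analysis would use a proper library rather than this bare-bones sampler:

```python
import random
from collections import defaultdict

def lda_gibbs(docs, K=2, iters=200, alpha=0.1, beta=0.01, seed=0):
    """Collapsed Gibbs sampler for a tiny LDA topic model.

    docs: list of tokenized documents (lists of words).
    Returns per-document topic proportions and per-topic word counts.
    """
    rng = random.Random(seed)
    V = len({w for d in docs for w in d})           # vocabulary size
    z = [[rng.randrange(K) for _ in d] for d in docs]  # random initial topics
    ndk = [[0] * K for _ in docs]                   # doc -> topic counts
    nkw = [defaultdict(int) for _ in range(K)]      # topic -> word counts
    nk = [0] * K                                    # topic totals
    for d, doc in enumerate(docs):
        for i, w in enumerate(doc):
            t = z[d][i]
            ndk[d][t] += 1; nkw[t][w] += 1; nk[t] += 1
    for _ in range(iters):
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                t = z[d][i]  # remove this token's current assignment
                ndk[d][t] -= 1; nkw[t][w] -= 1; nk[t] -= 1
                # Conditional probability of each topic for this token.
                weights = [(ndk[d][k] + alpha) * (nkw[k][w] + beta) / (nk[k] + V * beta)
                           for k in range(K)]
                r = rng.random() * sum(weights)
                for t, wt in enumerate(weights):
                    r -= wt
                    if r <= 0:
                        break
                z[d][i] = t  # record the sampled topic
                ndk[d][t] += 1; nkw[t][w] += 1; nk[t] += 1
    # Per-document topic proportions (each row sums to 1).
    theta = [[(ndk[d][k] + alpha) / (len(docs[d]) + K * alpha) for k in range(K)]
             for d in range(len(docs))]
    return theta, nkw

# Toy corpus: two hand-made themes (work vs. community).
corpus = [s.split() for s in [
    "union wages strike factory workers union",
    "wages strike union labor workers",
    "church family community faith neighborhood",
    "faith church community family neighborhood",
]]
theta, _ = lda_gibbs(corpus)
```

The point about linking to traditional analysis is visible in `theta`: each document's topic mixture is just a row of numbers, ready to serve as covariates in a regression.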
Second, simulations have historically played a limited role in sociology. I find this sad because my first publication was a simulation. I think the reason is that most sociologists work with simple linear models. If you examine nearly all quantitative work, you see that most statistical analyses use OLS and its relatives (logits, event history, Tobit, Heckman, etc.). There's always a linear model in there. Also, in the rare cases where sociologists use mathematical models for theory, they tend to use fairly simple models to express themselves.
Simulation is a form of numerical analysis – an estimate of the solutions of a system of equations that is obtained by random draws from the phase space. You would only need to do this if the models were too complicated to solve analytically, or the solution is too complex to describe in a simple fashion. In other words, if you have a lot of moving parts, it makes sense to do a simulation. Since sociological models tend to be very simple, there is little demand for simulations.
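To make the numerical-analysis point concrete, here is a toy example of my own (standard library only): estimating the expected maximum of three standard normal draws. The exact answer happens to be known, which makes it a good sanity check, but the simulation itself is just averaging over random draws:

```python
import random

def mc_expected_max(n_draws=200_000, group_size=3, seed=42):
    """Monte Carlo estimate of E[max of `group_size` iid standard normals].

    Each random draw is one simulated group; averaging over many draws
    approximates the expectation numerically instead of analytically.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_draws):
        total += max(rng.gauss(0, 1) for _ in range(group_size))
    return total / n_draws

print(mc_expected_max())  # close to the known value 3 / (2 * sqrt(pi)), about 0.846
```

Swap in a model with many interacting parts and the same recipe still works, which is exactly why simulation earns its keep when analytic solutions run out.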
Robert asked about micro-macro transitions. This proves my point. A lot of micro-macro models in sociology tend to be fairly simple and stated verbally. For example, many versions of institutionalism predict diffusion driven by elites. Thus, downward causation is described by a simple model. More complex models are possible, but people seem not to care. Overall, simulation is cool, but it just isn’t in demand. Better to teach computer science with real data.
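For what it's worth, the simplicity claim is easy to see in code. A bare-bones elite-driven diffusion model (my own toy, not any published specification) fits in a dozen lines:

```python
import random

def elite_diffusion(n=1000, n_elites=20, influence=3.0, periods=30, seed=1):
    """Toy elite-driven diffusion: elites adopt first, others imitate.

    Each period, every non-adopter adopts with probability
    min(1, influence * current adoption share). Returns the
    adoption share at each period.
    """
    rng = random.Random(seed)
    adopted = n_elites  # elites adopt at time zero
    shares = [adopted / n]
    for _ in range(periods):
        p = min(1.0, influence * adopted / n)
        adopted += sum(1 for _ in range(n - adopted) if rng.random() < p)
        shares.append(adopted / n)
    return shares

curve = elite_diffusion()
# the share rises from 2% toward full adoption in an S-shaped curve
```

That this is almost trivial to write is the point: the micro-macro models sociologists actually state rarely demand more machinery than this.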
Loyal orgtheorista and sociologist Amy Binder has forwarded me this course syllabus for a course at UC San Diego. It is called Soc 211 Computational Methods in Social Science and was taught by Edward Hunter and Akos Rona-Tas. The authors are working on a textbook, the course was made open to a wide range of students, and it was supported by the Dean at UCSD. I heard people had a nerdy good time. Click here to read the soc211_syllabus.
At Overcoming Bias, Robin Hanson observes that his fellow economists don’t always focus on the policies that have broad consensus, are easy to understand, and easy to implement. He uses the example of road pricing:
Heavy traffic is a problem every economist in the world knows how to solve: price road access, and charge high prices during rush hour. With technologies like E-ZPass and mobile apps, it’s easier than ever. That we don’t pick this low-hanging fruit is a pretty serious indictment of public policy. If we can’t address what is literally a principles-level textbook example of a negative spillover with a fairly easy fix, what hope do we have for effective public policy on other margins?
I agree. Think about status in economics – what sorts of work get you the rewards? For a while, it was really, really hard math. Also, macroeconomics, which is a notoriously hard field. More recently, insanely clever identification work. What do these have in common? They are hard. In contrast, how many Bates or Nobel prizes have been awarded for simple, high-impact work, like road pricing? Nearly zero is my guess.
The same is true in sociology. Sociologists often imagine themselves coming up with marvelous approaches to solving deeply rooted social inequalities. For example, a few months ago, we discussed research on gender inequality and how it might be explained, partially, by the relative over- or under-confidence of men and women. In other words, it might be that women are overly cautious about putting themselves forward for promotion.
One simple solution would be to require all eligible people to apply for promotions (e.g., require that all associate profs apply for full professorship after a few years). It is a simple rule and would almost certainly help. The response in the comments? The solution doesn't remedy gender prejudice. Well, of course not, but that wasn't the point. The point was to fix a specific issue – the under-representation of women in applicant pools. I have no idea how to eliminate bias against women, but I can make sure they enter the applicant pool more often – and that's easy!
Bottom line: Social scientists have their priorities reversed. They get rewarded for trying to solve insanely hard problems, while leaving a lot of simple problems alone. That’s leaving cash on the table.