Archive for the ‘mere empirics’ Category
did bill clinton accelerate black mass incarceration? yes, but he did put a bunch of white people in prison to even it out
Pam Oliver has a fascinating post that empirically investigates incarceration trends during the Clinton 1 era (1993-2001). It’s an impressive post. Professor Oliver pulls up a lot of data on overall incarceration rates and breaks it down by race. You should read it yourself, but here is my summary (the diagrams are from her article):
- Imprisonment rates, overall, kept on increasing during the entire Clinton 1 presidency.
- By race, Black imprisonment rates increased until about 2000 and then plateaued. The rate started at 75 per 100,000, peaked around 200 per 100,000, and then stabilized. There were huge increases in rates for Whites, Hispanics, and Native Americans. Asian rates seem to be stable.
- The story of racial disparity is a bit more complex. Roughly speaking, the Blackness of the prison population peaked around 1995 (see below). Then the Black/White ratio in prisons began to decline.
My interpretation. First, you have to distinguish between absolute and relative effects. To be blunt, Black mass incarceration in absolute terms unequivocally increased during the Clinton 1 years. Period. Perhaps the only qualifier is that it eventually stabilized, but the Black imprisonment rate never declined or even remotely went back to the levels of the 1980s or early 1990s. Mass incarceration was built in the 1980s and 1990s and it was here to stay.
The real question is why it stabilized. One hypothesis is that it was a policy effect. Perhaps in the late 1990s, there were policy changes that took effect circa 2000. A second hypothesis is that the prison system became saturated and there weren’t any more people to imprison from that population. Professor Oliver’s data are not enough to settle the question.
Second, the real story is in relative rates. Imprisonment became a much more equal system in the 1990s. In other words, prison shifted from being a Black institution to more of an all-American institution. My hypothesis is that the drug war machine simply reached its limit in imprisoning Black people and expanded by targeting low-income Whites.
In these data, the American prison system appears as a hungry beast, ruthlessly scooping up low-SES populations one at a time. After being built in the 1950s and 1960s by liberal reformers, the American justice system now had the power to quickly and swiftly punish people. In the 1970s and 1980s, Republican and Democratic administrations turned this machine on urban Blacks, and it ran unchecked until the early 2000s. The machine then turned to poor Whites in the 1990s, and a similar machine was built to imprison and deport Mexican and Central American migrants.
Francis Fukuyama wrote that we reached the end of history because liberal capitalism won over its socialist and fascist competitors. The sad truth is that history must continue, and the next chapter will be the struggle to liberate the world’s people from predatory prison states.
In this third installment of the Trump symposium, I want to talk about how social scientists should think about Trump. Let’s start with prediction – who foresaw Trump? We need to make a distinction.
- Some people, including myself, actually suggested years ago that we would have a populist takeover of the Republican party. The argument was that a big chunk of the GOP electorate wants leaders to entertain them, which creates an opportunity for a Trump-style candidate to emerge.
- Very few people thought that Trump would be the guy riding the populist wave. Indeed, I thought that Trump would go the way of Giuliani, Carson, Cain, Bachmann, and other flash-in-the-pan candidates. Instead, I thought we’d see someone like Rick Santorum or Ted Cruz become the populist GOP nominee.
So what should a social scientist do?
- Start with the following mantra: Social science is about trends and averages, rarely about specific cases. If an outsider becomes a major party nominee once, then you can cling to the old theory. If you get three Trumps in a row, then it’s time to dump the Party Decides model, unless, of course, you see the party openly embrace Trump and he becomes the new establishment.
- Feel confident: One crazy case doesn’t mean that you dump all results. For example, polling still worked pretty well. Polls showed a Trump rise and, lo and behold, Trump won the nomination. Also, polls are in line with basic models of presidential elections where economic trends set the baseline. The economy is ok, which means the incumbent party has an advantage. Not surprisingly, polls show the Democratic nominee doing well.
- Special cases: Given that most things in this election are “normal,” it’s ok to make a special argument about an unusual event. Here, I’d say that Trump broke the “Party Decides” model because he is an exceptionally rare candidate who doesn’t need parties. Normally, political parties wield power because politicians don’t have money or name recognition. In contrast, Trump has tons of money and a great media presence. He is a rare candidate who can just bypass the whole system, especially when other candidates are weak.
What does the future hold? Some folks have been raising alarms about a possible Trump win. So far, there is little data to back it up. In the rolling Real Clear Politics average of polls, Trump is consistently behind. In state polls, he is behind. He has no money. He has not deployed a “ground game.” In fact, the RCP average has had Clinton 2 ahead of Trump every single day since October, with the exception of the GOP convention and about two of the worst days of the Democratic campaign. Is it possible that Trump will be rescued by a sudden wave of support from White voters? Maybe, but we haven’t seen any movement in this direction. A betting person would bet against Trump.
The Virginia Historical Society has a website that brings together many documents from the antebellum period of American history so that you can search for the names of African Americans who might otherwise be lost to history. From the website:
This database is the latest step by the Virginia Historical Society to increase access to its varied collections relating to Virginians of African descent. Since its founding in 1831, the VHS has collected unpublished manuscripts, a collection that now numbers more than 8 million processed items.
Within these documents are numerous accounts that collectively help tell the stories of African Americans who have lived in the state over the centuries. Our first effort to improve access to these stories came in 1995 with publication of our Guide to African American Manuscripts. A second edition appeared in 2002, and the online version is continually updated as new sources enter our catalog (http://www.vahistorical.org/aamcvhs/guide_intro.htm).
The next step we envisioned would be to create a database of the names of all the enslaved Virginians that appear in our unpublished documents. Thanks to a generous grant from Dominion Resources and the Dominion Foundation in January 2011, we launched the project that has resulted in this online resource. Named Unknown No Longer, the database seeks to lift from the obscurity of unpublished historical records as much biographical detail as remains of the enslaved Virginians named in those documents. In some cases there may only be a name on a list; in others more details survive, including family relationships, occupations, and life dates.
Check it out.
A few days ago, we had a discussion about the different meanings of the word “computational sociology.” A commenter wrote the following:
Are agent based models/simulations a dead end? Are smart people still using that technique? Have there been any important results? I didn’t realize it peaked in the 1980s.
I’m a current doctoral student considering pursuing ABM, but if it’s a dead end then maybe not.
I think that olderwoman’s response is on target. There is nothing out of style about ABMs, but sociology is mainly a discipline of empiricists. You will find scholars who occasionally do ABMs, but someone who ONLY does ABMs is very, very rare. Examples of people who have done simulations: Damon Centola, Kathleen Carley, Carter Butts. In my department, I can think of two people who have published simulations (Clem Brooks and Steve Benard), plus myself, and those who do methods research often employ simulations. Olderwoman is also correct in that writing simulations helps you develop programming skills that are now required for “big data” work and for industry.
So don’t write an all simulation dissertation, but by all means, if you have good ideas, simulate them!
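To make this concrete, here is a minimal sketch of what an ABM looks like in practice: a Granovetter-style threshold model of collective behavior, a classic in the sociology simulation literature. Everything here (the function name, the population of 100 agents, the specific thresholds) is illustrative, not taken from any of the authors mentioned above.

```python
def threshold_cascade(thresholds, max_steps=200):
    """Granovetter threshold model: an agent becomes active once the
    fraction of already-active agents meets or exceeds its threshold.
    Returns the final fraction of active agents at equilibrium."""
    n = len(thresholds)
    active = [t == 0 for t in thresholds]  # zero-threshold agents start active
    for _ in range(max_steps):
        frac = sum(active) / n
        updated = [a or (t <= frac) for a, t in zip(active, thresholds)]
        if updated == active:  # no one changed: equilibrium reached
            break
        active = updated
    return sum(active) / n

# Uniform thresholds 0, .01, .02, ..., .99 produce a full cascade;
# deleting the single agent with threshold .01 stalls it almost immediately.
uniform = [i / 100 for i in range(100)]
gapped = [i / 100 for i in range(100) if i != 1]
print(threshold_cascade(uniform))  # 1.0 (everyone ends up active)
print(threshold_cascade(gapped))   # ~0.01 (cascade dies at the gap)
```

The payoff of even a toy model like this is the counterintuitive result: two nearly identical threshold distributions yield wildly different collective outcomes, which is exactly the kind of idea worth simulating before taking it to data.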
There is a lot of punditry about Bernie Sanders’ inability to make a dent in the Black vote. This is crucial because a lot of Hillary Clinton’s delegate lead comes from massive blowouts in the Deep South. Even a small movement in the Black vote would have turned Sanders’ near losses in Massachusetts, Illinois, and Missouri into narrow wins.
My approach to this issue – Sanders’ poor performance among Blacks is entirely predictable. Post-Civil Rights, the urban black population became heavily integrated into the mainstream of the Democratic party. The connection is so tight that some political scientists have used the African-American vote as a classic example of “voter capture” – a constituency so tightly linked to a party that there is no longer any credible threat of moving to another party and the party takes them for granted.
If you believe that, then you get a straightforward prediction – Black voters will overwhelmingly support the establishment candidate. Why? Black voters are the establishment in the Democratic party. As a major constituency, they are unlikely to vote against someone who already reflects their preferences. Here’s some evidence:
- 1976: No establishment candidate, but Williams and Wilson (1977) report that Carter solidly won the black vote in every primary state save one.
- 1980: Carter gets about 68% of Black votes overall and even squeaks out a 52% majority of the Black vote in New York, which swung hard to Kennedy.
- 1984: Mondale gets about 60% of the Black vote – against Jesse Jackson.
- 1992: Clinton I gets anywhere from 50% to 75% of the Southern Black vote and ties Jerry Brown with 40% of the Black vote in New York.
- 2000: Gore gets about 80% of the Black vote vs. Bill Bradley.
- 2004: Kerry gets over 80% of the Black primary vote.
- 2016: Clinton II gets over 80% of the Black vote in South Carolina and other states.
The pattern is exceptionally clear. Black voters overwhelmingly support establishment candidates. The only exceptions come when you have an African American candidate of extreme prominence, like Obama the wunderkind or Jesse Jackson the civil rights leader. Then there’s a tipping point where almost the entire voting bloc switches to the new candidate. So Bernie is actually hitting what a normal challenger hits in a Democratic primary, but that simply isn’t enough to win.
Over at Econ Talk, Russ Roberts interviews James Heckman about censored data and other statistical issues. At one point, Roberts asks Heckman what he thinks of the current identification fad in economics (my phrasing). Heckman has a few insightful responses. One is that a lot of the “new methods” – experiments, instrumental variables, etc. – are not new at all. Also, experiments need to be done with care and the results need to be properly contextualized. A lot of economists and identification-obsessed folks think that “the facts speak for themselves.” Not true. Supposedly clean experiments can be understood in the wrong way.
For me, the most interesting section of the interview is when Heckman makes a distinction between statistics and econometrics. Here’s his example:
- Identification – statistics, not economics. The point of identification is to ensure that your correlation is not attributable to an unobserved variable. This is either a mathematical point (IV) or a feature of research design (RCT). There is nothing economic about identification, in the sense that you do not need to understand human decision making in order to carry it out.
In contrast, he thought that “real” econometrics was about using economics to guide statistical modelling, or using statistical modelling to plausibly tell us how economic principles play out in real-world situations. This, I think, is the spirit of structural econometrics, which demands that the researcher define the economic relation between variables and use that as a constraint in statistical estimation. Heckman and Roberts discuss minimum wage studies, where the statistical point is clear (raising the minimum wage does not always reduce employment) but the economic point still needs to be teased out (moderate wage increases can be offset by firms in other ways) using theory and knowledge of labor markets.
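The purely statistical side of Heckman’s distinction can be shown in a few lines of simulation. Below, the regressor x is confounded by an unobserved variable u, so naive OLS is biased, while the IV estimator recovers the true coefficient. The structural model, the coefficient 2.0, and the variable names are all invented for illustration; note that nothing in the fix requires any knowledge of what x, y, or z economically are, which is Heckman’s point.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Illustrative structural model:
#   y = 2.0 * x + u, where u is an unobserved confounder of x,
#   and z is an instrument (shifts x, independent of u).
u = rng.normal(size=n)             # unobserved confounder
z = rng.normal(size=n)             # instrument
x = z + u + rng.normal(size=n)     # endogenous regressor
y = 2.0 * x + u                    # true causal effect of x on y is 2.0

# Naive OLS slope is biased upward because x and u are correlated.
ols = np.cov(x, y)[0, 1] / np.var(x)          # ≈ 2.33 in expectation

# IV slope: cov(z, y) / cov(z, x) isolates the variation in x driven by z.
iv = np.cov(z, y)[0, 1] / np.cov(z, x)[0, 1]  # ≈ 2.0

print(f"OLS: {ols:.2f}  IV: {iv:.2f}  truth: 2.00")
```

The mechanics here are pure statistics; the economics only enters when you have to argue that a real-world z is actually independent of the real-world u, and when you interpret what the recovered coefficient means for behavior.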
The deeper point I took away from the exchange is that long term progress in knowledge is not generated by a single method, but rather through careful data collection and knowledge of social context. The academic profession may reward clever identification strategies and they are useful, but that can lead to bizarre papers when the authors shift from economic thinking to an obsession with unobserved variables.