orgtheory.net

Archive for the ‘formal models’ Category

are lawyers dead meat?


A recent article at Futurism.com suggests that life may be grim for many lawyers:

Law firm Baker & Hostetler has announced that they are employing IBM’s AI Ross to handle their bankruptcy practice, which at the moment consists of nearly 50 lawyers. According to CEO and co-founder Andrew Arruda, other firms have also signed licenses with Ross, and they will also be making announcements shortly.

Ross, “the world’s first artificially intelligent attorney” built on IBM’s cognitive computer Watson, was designed to read and understand language, postulate hypotheses when asked questions, research, and then generate responses (along with references and citations) to back up its conclusions. Ross also learns from experience, gaining speed and knowledge the more you interact with it.

Ouch! Why is this a problem? Basically, many lawyers make their money doing document review, case review, or routine law. Document review means taking a big batch of documents obtained through discovery and searching them for key words. Case review means reading prior law and decisions to see what is relevant. Routine law is what it sounds like: writing documents or providing advice on simple legal matters, like parking tickets, wills for most people, and divorces for people with no children and few assets.

What these things have in common is that you don’t need a lot of judgment or skill to do them. In other words, a computer could easily handle a large proportion of routine law and basic legal work. That’s bad news for many lawyers, as a big part of the legal labor market is exactly this kind of work.
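To see just how automatable keyword-based document review is, here is a minimal sketch (the file names and search terms are invented for illustration) of the task: flag every document in a discovery batch that mentions a term of interest.

```python
# Minimal sketch of keyword-based document review: flag any document
# that mentions one of the search terms. The documents and keywords
# here are invented for illustration.
def flag_documents(documents, keywords):
    """Return {doc_id: matched keywords} for documents hitting any keyword."""
    hits = {}
    for doc_id, text in documents.items():
        matched = sorted({kw for kw in keywords if kw.lower() in text.lower()})
        if matched:
            hits[doc_id] = matched
    return hits

documents = {
    "memo_001.txt": "The merger agreement was signed on Friday.",
    "memo_002.txt": "Lunch options for the retreat.",
    "memo_003.txt": "Please shred the merger files before the audit.",
}
flagged = flag_documents(documents, ["merger", "audit"])
print(flagged)
```

A dozen lines of code already do the core of what a first-year associate bills hours for; systems like Ross add language understanding on top.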

My conjecture is that in the future, working lawyers will be like surgeons: practitioners in a very high-skill area. If you make a good living as a lawyer, you are probably in a very complicated area of the law, like corporate mergers, or in an area where people skills are crucial, like arbitration. You might also be serving high-income people, who have very complex legal issues. But for the many attorneys who do things like wills and DUIs for average people, your time may be limited.

50+ chapters of grad skool advice goodness: Grad Skool Rulz ($4.44 – cheap!!!!)/Theory for the Working Sociologist (discount code: ROJAS – 30% off!!)/From Black Power/Party in the Street / Read Contexts Magazine– It’s Awesome!


Written by fabiorojas

November 10, 2017 at 5:32 am

no complexity theory in economics

Roger E. Farmer has a blog post on why economists should not use complexity theory. At first, I thought he was going to argue that complexity models have been disproven or that they rest on unreasonable assumptions. Instead, he simply says we don't have enough data:

The obvious question that Buzz asked was: are economic systems like this? The answer is: we have no way of knowing given current data limitations. Physicists can generate potentially infinite amounts of data by experiment. Macroeconomists have a few hundred data points at most. In finance we have daily data and potentially very large data sets, but the evidence there is disappointing. It’s been a while since I looked at that literature, but as I recall, there is no evidence of low dimensional chaos in financial data.

Where does that leave non-linear theory and chaos theory in economics? Is the economic world chaotic? Perhaps. But there is currently not enough data to tell a low dimensional chaotic system apart from a linear model hit by random shocks. Until we have better data, Occam’s razor argues for the linear stochastic model.

If someone can write down a three equation model that describes economic data as well as the Lorenz equations describe physical systems: I’m all on board. But in the absence of experimental data, lots and lots of experimental data, how would we know if the theory was correct?

On one level, this is a fair point. Macroeconomics is notorious for having sparse data. We can’t re-run the US economy under different conditions a million times; we have quarterly unemployment rates and that’s it. On another level, this is an extremely lame criticism. One thing that we’ve learned is that we have access to all kinds of data. For example, could we have m-turkers participate in an online market a million times? Could we mine eBay sales data? In other words, Farmer’s post doesn’t undermine the case for complexity. Rather, it suggests that we should search harder and build bigger tools. And, in the end, isn’t that how science progresses?
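Farmer's identification problem is easy to reproduce with a toy example (mine, not his). Below, a fully chaotic logistic map and plain i.i.d. noise both show near-zero lag-1 autocorrelation over a macro-sized sample of 300 points, so a standard linear diagnostic cannot separate the deterministic system from random shocks:

```python
import random

def lag1_autocorr(xs):
    """Sample lag-1 autocorrelation of a series."""
    n = len(xs)
    mean = sum(xs) / n
    var = sum((x - mean) ** 2 for x in xs)
    cov = sum((xs[i] - mean) * (xs[i + 1] - mean) for i in range(n - 1))
    return cov / var

# Deterministic chaos: the logistic map at r = 4.
x, chaos = 0.3, []
for _ in range(300):
    x = 4 * x * (1 - x)
    chaos.append(x)

# Pure randomness: i.i.d. uniform draws.
random.seed(0)
noise = [random.random() for _ in range(300)]

print(lag1_autocorr(chaos), lag1_autocorr(noise))  # both near zero
```

To be fair, this one-dimensional toy is too easy: a plot of x(t+1) against x(t) would expose the parabola in the chaotic series immediately. Real candidates for economic chaos would be higher-dimensional and noisy, which is exactly why Farmer says the data we have cannot settle the question.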


Written by fabiorojas

October 7, 2016 at 12:39 am

bad reporting on bad science

This Guardian piece about bad incentives in science was getting a lot of Twitter mileage yesterday. “Cut-throat academia leads to natural selection of bad science,” the headline screams.

The article is reporting on a new paper by Paul Smaldino and Richard McElreath, and features quotes from the authors like, “As long as the incentives are in place that reward publishing novel, surprising results, often and in high-visibility journals above other, more nuanced aspects of science, shoddy practices that maximise one’s ability to do so will run rampant.”

Well. Can’t disagree with that.

But when I clicked through to read the journal article, the case didn’t seem nearly so strong. The article has two parts. The first is a review of review pieces published between 1962 and 2013 that examined the levels of statistical power reported in studies in a variety of academic fields. The second is a formal model of an evolutionary process through which incentives for publication quantity will drive the spread of low-quality methods (such as underpowered studies) that increase both productivity as well as the likelihood of false positives.

The formal model is kind of interesting, but just shows that the dynamics are plausible — something I (and everyone else in academia) was already pretty much convinced of. The headlines are really based on the first part of the paper, which purports to show that statistical power in the social and behavioral sciences hasn’t increased over the last fifty-plus years, despite repeated calls for it to do so.

Well, that part of the paper basically looks at all the papers that reviewed levels of statistical power in studies in a particular field, focusing especially on papers that reported small effect sizes. (The logic is that such small effects are not only most common in these fields, but also more likely to be false positives resulting from inadequate power.) There were 44 such reviews. The key point is that average reported statistical power has stayed stubbornly flat. The conclusion the authors draw is that bad methods are crowding out good ones, even though we know better, through some combination of poor incentives and selection that rewards researcher ignorance.
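For readers who want a feel for the numbers, the power of a two-sample comparison can be approximated from the effect size and sample size; the sketch below is a standard textbook normal approximation, not a calculation from the paper. At a small effect of Cohen's d = 0.2, 64 subjects per group buys only about 20% power, and you need roughly 394 per group to reach the conventional 80%:

```python
from math import erf, sqrt

def normal_cdf(z):
    """CDF of the standard normal distribution."""
    return 0.5 * (1 + erf(z / sqrt(2)))

def approx_power(d, n_per_group):
    """Normal approximation to the power of a two-sided two-sample
    test of means (alpha = 0.05) with standardized effect size d."""
    z_crit = 1.96  # two-sided 5% critical value
    return normal_cdf(d * sqrt(n_per_group / 2) - z_crit)

print(round(approx_power(0.2, 64), 2))   # a typical small-effect study
print(round(approx_power(0.2, 394), 2))  # n per group needed for ~80% power
```

That gap between the samples researchers collect and the samples small effects demand is what the 44 reviews keep finding.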

[Figure: average reported statistical power in the 44 reviews, by year of review; the trend is flat, with the 1974 sociology review the positive outlier at 0.55.]

The problem is that the evidence presented in the paper is hardly strong support for this claim. This is not a random sample of papers in these fields, or anything like it. Nor is there other evidence to show that the reviewed papers are representative of papers in their fields more generally.

More damningly, though, the fields that are reviewed change rather dramatically over time. Nine of the first eleven studies (those before 1975) review papers from education or communications. The last eleven (those after 1995) include four from aviation, two from neuroscience, and one each from health psychology, software engineering, behavioral ecology, international business, and social and personality psychology. Why would we think that underpowering in the latter fields at all reflects what’s going on in the former fields in the last two decades? Maybe they’ve remained underpowered, maybe they haven’t. But statistical cultures across disciplines are wildly different. You just can’t generalize like that.

The news article goes on to paraphrase one of the authors as saying that “[s]ociology, economics, climate science and ecology” (in addition to psychology and biomedical science) are “other areas likely to be vulnerable to the propagation of bad practice.” But while these fields are singled out as particularly bad news, not one of the reviews covers the latter three fields (perhaps that’s why the phrasing is “other areas likely”?). And sociology, which had a single review in 1974, looks, ironically, surprisingly good — it’s that positive outlier in the graph above at 0.55. Guess that’s one benefit of using lots of secondary data and few experiments.

The killer is, I think the authors are pointing to a real and important problem here. I absolutely buy that the incentives are there to publish more — and equally important, cheaply — and that this undermines the quality of academic work. And I think that reviewing the reviews of statistical power, as this paper does, is worth doing, even if the fields being reviewed aren’t consistent over time. It’s also hard to untangle whether the authors actually said things that oversold the research or if the Guardian just reported it that way.

But at least in the way it’s covered here, this looks like a model of bad scientific practice, all right. Just not the kind of model that was intended.

[Edited: Smaldino points on Twitter to another paper that offers additional support for the claim that power hasn’t increased in psychology and cognitive neuroscience, at least.]

Written by epopp

September 22, 2016 at 12:28 pm

democracy is tougher than you think, wealthy ones at least

We often hear that democracy is under threat. But is that true? In 2005, Adam Przeworski wrote an article in Public Choice arguing that *wealthy* democracies are stable but poor ones are not. He starts with the following observation:

No democracy ever fell in a country with a per capita income higher than that of Argentina in 1975, $6055. This is a startling fact, given that throughout history about 70 democracies collapsed in poorer countries. In contrast, 35 democracies spent about 1000 years under more developed conditions and not one died.

Developed democracies survived wars, riots, scandals, economic and governmental crises, hell or high water. The probability that democracy survives increases monotonically in per capita income. Between 1951 and 1990, the probability that a democracy would die during any particular year in countries with per capita income under $1000 was 0.1636, which implies that their expected life was about 6 years. Between $1001 and 3000, this probability was 0.0561, for an expected duration of about 18 years. Between $3001 and 6055, the probability was 0.0216, which translates into about 46 years of expected life. And what happens above $6055 we already know: democracy lasts forever.

Wow.
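The expected lifetimes in the quote follow directly from treating collapse as a constant annual hazard: if a democracy dies with probability p in any given year, its expected life is 1/p years. Reproducing Przeworski's arithmetic:

```python
# Expected life of a democracy facing a constant annual death
# probability p is 1/p years (the mean of a geometric distribution).
# Hazard rates are the ones quoted from Przeworski (1951-1990 data).
hazards = {
    "under $1000": 0.1636,
    "$1001-$3000": 0.0561,
    "$3001-$6055": 0.0216,
}
for bracket, p in hazards.items():
    print(f"{bracket}: expected life ~ {1 / p:.0f} years")
```

The printed values match the quote: about 6, 18, and 46 years as per capita income rises.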

How does one explain this pattern? Przeworski describes a model where elites offer income redistribution plans, people vote, and the elites decide to keep or ditch democracy. The model has a simple feature when you write it out: the wealthier the society, the more pro-democracy equilibria you get.

If true, this model has profound implications for political theory and public policy:

  1. Economic growth is the bulwark of democracy. Thus, if we really want democracy, we should encourage economic growth.
  2. Armed conflict probably does not help democracy. Why? Wars tend to destroy economic value and make a country poorer, which strengthens anti-democracy movements (e.g., Syria and Iraq).
  3. A lot of people tell you that we should be afraid of outsiders because they will threaten democracy. Not true, at least for wealthy democracies.

This article should be a classic!


Written by fabiorojas

January 4, 2016 at 3:34 am

book forum: ivan ermakoff’s ruling oneself out

Ruling Oneself Out by Ivan Ermakoff is a book that should have had a different title. In my view, the book should have been called “When Regimes Just Give Up and Die.” This important book speaks to a political process that direly needs more attention in both political science and sociology: turning points in history when one political order simply surrenders in the face of a challenger.

The book has two layers. One layer is a close reading of two examples of political abdication – the 1933 vote in Germany to give Hitler virtually unlimited powers and the 1940 decision by the French government to transfer authority to Petain.

The second layer is an insanely ambitious attempt to reconstruct how sociologists approach historical explanation. Ermakoff presents a theory of political abdication that combines the following elements: (a) an analysis of how political groups lose cohesion in the face of threat, (b) a game-theoretic analysis of how groups under threat re-form themselves, and (c) a criticism of other accounts of this process. So rather than chalk everything up to historical accident, Ermakoff tries to tease out how people surrender given their historically contingent self-understanding and their incentives. Think of it as historical explanation that is part phenomenology and part rational choice.

For the next two days, I will review these layers and then wrap up with a discussion of Ermakoff’s recent ASR article that presents a more extensive theory of historical contingency.


Written by fabiorojas

December 22, 2015 at 12:01 am

the risky sex game paper – impact on current research

Yesterday, I described a paper that Kirby Schroeder and I wrote on infection networks, focusing on the professional lessons I learned. Today, I want to talk about the paper’s impact on current work. For a long time, the paper got literally zero citations in peer-reviewed journals. Then, around 2010, citations picked up, with people in economics, health, and biology discussing the paper.

Economics: The main commentary among economists is that this is a model of interaction, which can then be used to assess the impact of policy. For example, a paper in the American Law and Economics Review notes that the paper models risky behavior but does not model the law. Other economists are attracted to our prediction about infection knowledge and epidemic plateaus (once the disease becomes common knowledge, people shift behavior and transmission stalls).
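The plateau prediction is easy to illustrate with a toy dynamical sketch; this is my own simplification for exposition, not the game-theoretic model in the paper. Infections grow while prevalence is low, but once prevalence crosses the point where the epidemic is common knowledge, behavior shifts, transmission stalls, and prevalence freezes at a plateau:

```python
def simulate(beta=0.5, threshold=0.3, i0=0.01, steps=100):
    """Logistic-style infection dynamics with a behavioral switch:
    transmission shuts off once prevalence becomes common knowledge
    (here, once it crosses `threshold`). All parameters illustrative."""
    prevalence = [i0]
    for _ in range(steps):
        i = prevalence[-1]
        b = beta if i < threshold else 0.0  # behavior shift at the threshold
        prevalence.append(i + b * i * (1 - i))
    return prevalence

path = simulate()
print(f"final prevalence: {path[-1]:.3f}")  # plateaus just above the threshold
```

The series rises, crosses the common-knowledge threshold, and then flatlines well short of full infection, which is the qualitative shape of the epidemic-plateau prediction.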

Health: The Archives of Sexual Behavior has an article that discusses our paper in the context of trying to expand models of disease transmission. For example, we critique the health belief model for ignoring interaction, and we criticize sexual scripting theory for ignoring risk and strategic action.

Biology: Perhaps the most interesting impact of the paper has been on mathematical biology. In The Journal of Theoretical Biology, a team of mathematicians uses the model to address group formation. In a model derived from our Risky Sex Game, they show that, under certain conditions, the population will separate into distinct groups based on HIV status.

Bottom line: People sure hated the paper when I wrote it, but its children are a joy to behold.


Written by fabiorojas

April 29, 2015 at 12:01 am