My view of the Obama administration is that immigration reform is a second-tier issue for it, and that it has little interest in pushing hard for change. For six years, the administration did little to stop, and may even have encouraged, the massive increase in deportations, including deportations of people without criminal records. Obama proposed some extremely modest reforms, which have had almost no effect on making it easier to lawfully move between nations. In some cases, he has been blocked in the courts. In others, the administration has been unable to properly implement its own very modest reforms.
For example, one reform allowed children escaping gang violence in Latin America to apply for asylum. Seems reasonable, until you learn that 0 children out of 5,400 applicants have actually moved to the United States from these crime-ridden nations. From The New York Times:
“Really, it’s pathetic that no child has come through this program,” said Lavinia Limón, the president and chief executive of the U.S. Committee for Refugees and Immigrants, a nonprofit organization. Pointing to administration officials, she added, “I wonder if it were their child living in the murder capital of the world, whether they would have more sense of urgency.”
When you read the details of the policy, you quickly realize that the policy was never intended to actually let anyone in. Like most immigration policy, the rules are designed to prevent migration, not make it legal:
State Department officials said the program was also slowed by the requirement of DNA tests for parents in the United States and their children in Central America before the children could be granted entry. The officials said some parents had taken a long time to have those tests performed, further extending the delays. The process also includes security checks, medical screenings, payments for airline flights, and other paperwork.
It should be no surprise that people in impoverished areas would have trouble paying for medical tests, paternity tests, airline tickets, and endless paperwork. Most native-born Americans would be hard-pressed to produce this much documentation.
In my book, Obama will go down as the deporter of children, many to their deaths. May his successor see the world as a place for free people.
Talking About Organizations is a podcast run by Dmitrijs Kravcenko, Pedro Monteiro, Miranda Lewis, and Ralph Soule. And it is all orgs, all the time. They have four episodes so far and they touch on good topics:
- Management Fundamentals
Recommended for orgheads everywhere.
Very few models in statistics have nice, clean closed-form solutions. Usually, coefficients must be estimated by taking an initial guess and iteratively improving it (e.g., with the Newton-Raphson method). If your estimates stabilize, then you say “Mission accomplished, the coefficient is X!” Sometimes, your statistical software instead says “I stop because the model does not converge – the estimates bounce around.”
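To make the iteration concrete, here is a minimal Newton-Raphson sketch in Python. The data, the one-parameter logistic model, and the tolerance are all invented for illustration; real software like Stata does the same thing with many more safeguards:

```python
import math

# Invented data: binary outcome y and one predictor x.
x = [0.5, 1.2, -0.3, 2.0, 1.5, -1.0, 0.8, 2.2]
y = [0, 1, 0, 1, 1, 0, 1, 1]

def newton_raphson_logit(x, y, beta=0.0, tol=1e-8, max_iter=50):
    """Fit P(y=1) = 1/(1+exp(-beta*x)) by Newton-Raphson.
    Stop when the update stabilizes; complain if it never does."""
    for i in range(max_iter):
        p = [1.0 / (1.0 + math.exp(-beta * xi)) for xi in x]
        score = sum((yi - pi) * xi for yi, pi, xi in zip(y, p, x))    # gradient
        info = sum(pi * (1.0 - pi) * xi * xi for pi, xi in zip(p, x))  # -Hessian
        step = score / info
        beta += step
        if abs(step) < tol:            # "mission accomplished"
            return beta, i + 1
    raise RuntimeError("model did not converge")  # estimates keep bouncing

beta_hat, n_iter = newton_raphson_logit(x, y)
```

The point of the sketch is the last two branches: “converged” and “did not converge” are just whether the update ever dipped below an arbitrary `tol` within an arbitrary `max_iter`.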
Normally, people throw up their hands, say “too bad, this model is inconclusive,” and move on. This is wrong. Why? Whether a model “converges” depends on completely arbitrary choices: the tolerance and iteration limits baked into your software’s defaults. Simple example:
I am estimating the effect of a new drug on the number of days that people live after treatment. Assume that I have nice data from a clean experiment. I will estimate the number of days using a negative binomial regression, since I have count data which may or may not be over-dispersed. Stata says “sorry, likelihood function is not concave, model won’t converge.” So I actually ask Stata to show me the likelihood function, and it bounces around by about 3% – more than the default tolerance allows. Furthermore, my coefficient estimates bounce around a little. The effect of treatment is about two months +/- a week, depending on the settings.
As you can see, the data clearly support the hypothesis that the treatment works (i.e., extra days alive > 0). Here, all “non-convergence” means is that there may be multiple maxima of the likelihood function, all close together in terms of practical significance, or that the likelihood surface is very “wiggly” around the likely maximum point.
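The diagnostic described in this example – inspecting the likelihood itself rather than trusting the error message – can be sketched in Python. All the numbers are invented, and a simple Poisson likelihood stands in for the negative binomial to keep the code short:

```python
import math

# Invented count data: days survived in control vs. treated groups.
control = [55, 62, 48, 70, 66, 59]
treated = [115, 130, 98, 140, 122, 110]

def loglik(mu, data):
    """Poisson log-likelihood for rate mu, dropping the constant log(y!) term."""
    return sum(y * math.log(mu) - mu for y in data)

# The Poisson MLE of a rate is just the group mean; the effect is the difference.
mu_c = sum(control) / len(control)
mu_t = sum(treated) / len(treated)
effect = mu_t - mu_c   # roughly two months of extra survival in this toy data

# Profile the treated-group likelihood near its peak: any mu within ~1.92
# log-likelihood units of the maximum (a 95% profile interval) is hard to
# distinguish statistically from the "converged" answer.
peak = loglik(mu_t, treated)
plausible = [mu for mu in range(60, 180)
             if peak - loglik(mu, treated) < 1.92]

# If even the lowest plausible treated rate beats the control rate, the
# substantive conclusion survives any wobble in the optimizer.
```

Every value in `plausible` here is far above the control mean, which is the code version of “the maxima are all close in terms of practical significance.”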
Does that mean you can ignore convergence issues in maximum likelihood estimation? No! Another example:
Same setup as above – I am trying to measure the effectiveness of a drug, and I get “non-convergence” from Stata. But in this case, I look at the ML estimates and notice they bounce around a lot. Then, I ask Stata to re-estimate with different sensitivity settings and discover that the coefficients are often near zero and sometimes far from zero.
The evidence here is consistent with the null hypothesis. Same error message, but a very different substantive conclusion.
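A toy version of this second failure mode, with everything invented: a “log-likelihood” with two well-separated peaks, so a crude hill-climbing optimizer lands near zero or far from it depending on where it starts. This is not what Stata actually does internally, just a cartoon of why refitting from different starting values is informative:

```python
# Toy "log-likelihood" with equal peaks at b = 0.1 and b = 3.0.
def loglik(b):
    return -((b - 0.1) ** 2) * ((b - 3.0) ** 2)

def grad(b, h=1e-6):
    """Numerical derivative of the log-likelihood."""
    return (loglik(b + h) - loglik(b - h)) / (2 * h)

def climb(b, lr=0.01, iters=5000):
    """Crude gradient ascent standing in for a real optimizer."""
    for _ in range(iters):
        b += lr * grad(b)
    return b

# Refit from several starting values, as you would to probe sensitivity.
estimates = sorted(climb(s) for s in [-1.0, 0.0, 1.0, 2.0, 4.0])
spread = estimates[-1] - estimates[0]   # large: the fits genuinely disagree
```

Some runs end near 0.1 (essentially zero) and others near 3.0, so the honest summary is the whole spread of estimates, not whichever single number the software happened to print before complaining.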
The lesson is simple. In applied statistics, we get lazy and rely on simple answers: p-values, R-squared, and error messages. What they all have in common is that they are arbitrary cut-offs. To really understand your model, you need to look at the full range of information rather than a single threshold. This makes publication harder (referees can’t just look for asterisks in tables), but it’s better thinking.