1) Not only are the fancier methods harder to do, they are also harder to read. I believe one of Kieran's articles showed both OLS and logit results for giving blood. Logit is a pretty basic technique, but some of the audiences for that article were only moderately savvy, so including the technically inferior but substantively similar OLS results let them follow it.

2) Extra-fancy methods can be so hard that they end up being impractical to apply, and the net result is that perfectionism among methodologists may lead to /less/ appropriate techniques being used by the median empirical practitioner. For instance, I currently have an R&R on a methods piece in which the reviewers asked that we change some assumptions. Their critique was well founded, but it doesn't change the results much in our test data, and the reviewers' version can only be done in SAS or BUGS, whereas our original version could be implemented in any package that supports multilevel analysis.

1. Most of the fancy models we use have the form:

a linear model + some sort of link function/transformation + an MLE/pseudo-MLE estimator.

Basically everything in the sociologist's toolbox fits this form: OLS, 2SLS/3SLS, SUR, SEM, WLS, GLS, Poisson/negative binomial, logit/probit/multinomial, tobit, Heckman selection, event history (both discrete-time logit and Cox), and log-linear models. They are all linear models with an additional transformation, or an alternate computation of point estimates and confidence intervals.
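
That shared skeleton can be sketched directly: the same linear predictor `X @ beta` feeds different likelihoods, and that is the only thing that changes between "OLS" and "logit." This is a minimal illustration with simulated data; the function names and the data-generating process are my own, not anyone's actual code.

```python
import numpy as np
from scipy.optimize import minimize

# The template: linear predictor + link + (pseudo-)MLE.
# Only the negative log-likelihood differs between the two fits below.

def neg_loglik_gaussian(beta, X, y):
    # identity link + Gaussian likelihood: minimizing this is OLS
    return np.sum((y - X @ beta) ** 2)

def neg_loglik_logit(beta, X, y):
    # logit link + Bernoulli likelihood
    eta = X @ beta
    return np.sum(np.logaddexp(0.0, eta) - y * eta)

def fit(neg_loglik, X, y):
    # generic MLE-style fitter shared by every model in the family
    beta0 = np.zeros(X.shape[1])
    return minimize(neg_loglik, beta0, args=(X, y), method="BFGS").x

# simulated binary-outcome data with a logistic latent error
rng = np.random.default_rng(0)
X = np.column_stack([np.ones(500), rng.normal(size=500)])
y = ((X @ np.array([-0.5, 1.0]) + rng.logistic(size=500)) > 0).astype(float)

b_ols = fit(neg_loglik_gaussian, X, y)   # linear probability model
b_logit = fit(neg_loglik_logit, X, y)    # logistic regression
```

Swapping in a Poisson negative log-likelihood and calling the same `fit` would cover the count models on the list; that is the sense in which the toolbox is one template.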

2. Much of the time, adding the stuff after the linear model does not tell you much that you didn't already know from a well-chosen linear model. At best, it adds extra information about the structure of the data that is useful to know (e.g., the alpha overdispersion parameter in a negative binomial model).

My point is essentially a pragmatist one. Theoretically, these elaborations of OLS are better justified, but as a matter of practice they often tell you much the same thing as a smart OLS model. Ultimately, if the results of a fancy model differ from OLS, my gut sense is that this indicates highly unusual data, or that you are picking up on some really minute differences.
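
The "tells you much the same thing" claim can be made concrete on simulated, well-behaved binary data: the OLS (linear probability model) slope and the logit average marginal effect typically land very close together. Everything below is simulated and purely illustrative.

```python
import numpy as np

# Simulated binary outcome with a moderate logit coefficient
rng = np.random.default_rng(1)
n = 5000
x = rng.normal(size=n)
p = 1 / (1 + np.exp(-(0.2 + 0.8 * x)))
y = rng.binomial(1, p)

# OLS slope via the closed-form least-squares solution
X = np.column_stack([np.ones(n), x])
b_ols = np.linalg.lstsq(X, y, rcond=None)[0]

# Logit fit via Newton-Raphson (iteratively reweighted least squares)
b = np.zeros(2)
for _ in range(25):
    eta = X @ b
    mu = 1 / (1 + np.exp(-eta))
    W = mu * (1 - mu)
    b = b + np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (y - mu))

# Average marginal effect of x under the fitted logit model
mu = 1 / (1 + np.exp(-(X @ b)))
ame = np.mean(mu * (1 - mu)) * b[1]
```

Here `ame` and `b_ols[1]` differ only in the second or third decimal place, which is the pragmatist point: the logit machinery mostly re-derives what the smart linear model already said.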

TF: The proof is in the pudding. I readily admit that much social science data violates the classical OLS assumptions. But very often OLS gives amazingly good answers, and the fancier method yields fairly similar ones. What's the point?

What I have always wanted from the methodology crowd is a theorem that says: "You should use method X when the data has property Y. If you don't have Y, then OLS and X will produce answers that differ by Z." In other words, why should I learn some crazy new technique if a good application of OLS will tell me the same thing?

Actually, I mostly use regression analysis too, and sometimes use good ole OLS. I have nothing against it per se, but I think we need to be open to a variety of methods.

And there is, of course, still much debate.

Long live OLS! Long live OLS!!
