orgtheory.net

where theory and method meet

Brayden

One of my take-aways from last week’s comparative organizational conference was that theory and method are much more linked than we sometimes admit. We write our papers as if the two are largely detached from one another: we have an abstract set of ideas that we want to test, and we draw on our methodological tools to test those hypotheses rigorously. In reality, though, doing good research requires a mix of inductive and deductive work, theory blending with our choice of method. What struck me in our conversations at the conference is how difficult it is to even have a conversation about theory, especially when not everyone in the group shares the same theoretical perspective, without crossing over into a discussion of methods. This was perhaps even more striking at the conference because we were asked to talk about ideas at a highly abstract level, a sort of pre-hypothesis level.

In the latest issue of the Academy of Management Review, John Van Maanen, Jesper Sørensen, and Terence Mitchell tackle the theory-method nexus in a special topic forum. In the introduction to the forum they make some provocative claims.

[I]t is our stand that theory and method are – or should be – highly interrelated in practice…[T]he relationship between theory and method remains a complicated one and a source of some befuddlement, if not controversy within and across various organizational research communities. Such difficulties are not always acknowledged. To wit, textbook treatments of the theory-method relationship continue to suggest that methods generate meaningful data used to test, in weak form, the plausibility of theories or, in strong form, the validity of theories given modest to severe boundary constraints (e.g., Blalock, 1969; Bryman, 1989; Dubin, 1978; Yin, 2002). As an ideal representation, the interplay of theory and data is not problematic but follows a prescribed – almost magical – sequence. In conventional form, problems are identified that are of interest to an identifiable research community (perhaps more than one), specific research questions or hypotheses are posed that rest on the theoretical resources those in the research community possess (or seek), appropriate research strategies based on either or both deductive and inductive logic are then spelled out, qualitative or quantitative measures are chosen and put to work, data compilation and analysis then follow, and, with pluck and luck, plausible (or verifiable) inferences and conclusions result. End of story.

Reflexivity need not go deep to question this overly simplified and idealized version of the interplay between theory and data. Practicing organizational researchers know both from experience and readily available collegial critique that any narrative suggesting an orderly, standard model of the research process is rather misleading. What seems apparent to those who have carried out organizational research projects is that method can generate and shape theory, just as theory can generate and shape method. There is a back-and-forth character in which concepts, conjectures, and data are in continuous interplay. If one thinks of concepts and conjectures as existing on a conceptual plane and of data residing on an empirical one, the more links and the more varied the links between the two planes, the more promising the research. One function of empirical studies, then, is to generate the kind of data that can be used in the theorizing process itself, thus allowing a study to progress as a cognitive or sensemaking venture that unfolds over time (1145-46).

Their essay is one of the more fascinating things I’ve read in a while. Van Maanen, Sørensen, and Mitchell touch on one of the most important, but least discussed, topics of contemporary organizational research. I’d like to hear more discussion of how our methods have fundamentally shaped our theories.

Someone made the comment at our conference that macro-institutional research is inevitably a diffusion story. Although this may be something of an exaggeration, I think they’re at least partly right that much of this research focuses on how things spread or change across a population and the effect that this spread has on other dimensions of organizational life. I don’t think our strong focus on diffusion is the result of limited imagination or a lack of ideas; rather, regression methods lend themselves easily to studying how large macro-shifts in patterns of behavior occur. Our methodological toolkit has strongly shaped our theoretical emphases. I think this reality of organizational studies calls for a diverse set of methods, so as to maximize the kinds of theoretical questions we can address.
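To make the diffusion point concrete, here is a minimal, purely illustrative sketch of that kind of analysis: a discrete-time event history model of practice adoption, estimated as a logit on simulated organization-year spells. Every variable and parameter below is a hypothetical stand-in, not taken from any actual study.

```python
# Hypothetical sketch: a discrete-time event history model of practice adoption.
# Each row is an organization-year "at risk"; adoption removes the org from the risk set.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n_orgs, n_years = 500, 10
rows = []
for org in range(n_orgs):
    size = rng.normal()                       # illustrative covariate
    for year in range(n_years):
        prior_adopters = year / n_years       # crude proxy for diffusion pressure
        logit = -3 + 1.5 * prior_adopters + 0.5 * size
        adopt = rng.random() < 1 / (1 + np.exp(-logit))
        rows.append((org, year, size, prior_adopters, int(adopt)))
        if adopt:
            break
df = pd.DataFrame(rows, columns=["org", "year", "size", "prior_adopters", "adopt"])

# A logit on the org-year spells estimates the discrete-time hazard of adoption.
fit = sm.Logit(df["adopt"], sm.add_constant(df[["prior_adopters", "size"]])).fit(disp=0)
print(fit.params)
```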

The special forum contains an interesting set of papers, including one by conference participant Peer Fiss that addresses set-theoretical approaches to the study of organizational configurations.

Written by brayden king

October 8, 2007 at 1:55 am

14 Responses

  1. I suspect that some researchers are more self-conscious of this connection than others, but overall the point is certainly well taken. I think that we should start using Lis’ phrase “theory-method packages” more often rather than talking about “institutional theory,” “ecological theory,” etc. as if they were free-standing theories disconnected from methods. It was clear that the move beyond the cross-sectional OLS regression characteristic of first-generation organizational research and toward process-oriented models (Poisson regression, discrete-time event history, etc.) was a key component of the emergence of Stanford institutionalism and ecological theory (i.e. Tuma and Hannan 1984). The Hannan and Meyer (1979) collaboration that kicked off the “world polity” approach was also explicitly methodologically driven (Hannan’s chapter is one of the best introductions to dynamic multiple-equation linear models available). And that’s why you usually get a better introduction to Poisson regression and other count models in the introductory chapter of your standard org ecology book than you get in most statistics textbooks. It is also clear that the network approach is not just a theory; it is inherently connected to the largely methodologically driven imagery derived from network analysis.

    Omar

    October 8, 2007 at 11:32 am
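For concreteness, here is a minimal sketch of the kind of count model Omar mentions: a density-dependence founding-rate regression in the org-ecology style, fit to simulated data. The variables and coefficients are hypothetical illustrations, not from any published study.

```python
# Hypothetical sketch: a density-dependence founding-rate model, org-ecology style.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
density = rng.uniform(5, 100, size=200)                 # simulated population density
log_rate = 0.5 + 0.04 * density - 0.0004 * density**2   # inverted-U in density
foundings = rng.poisson(np.exp(log_rate))               # foundings per period

X = sm.add_constant(np.column_stack([density, density**2]))
fit = sm.GLM(foundings, X, family=sm.families.Poisson()).fit()
print(fit.params)   # expect positive density and negative density-squared terms
```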

  2. My beef: At the conference, a lot of people were criticizing OLS as a lame, out-of-date method. But my attitude is that OLS is the foundation of 90% of all modern statistical work because most sophisticated models are modifications of a linear model. And in many cases, a thoughtful application of OLS gives you answers that are fairly similar to fancier techniques. Why do people trash the model that is the foundation of most of what we do? Is it just because it’s cool?

    Long live OLS! Long live OLS!!

    fabiorojas

    October 8, 2007 at 4:39 pm

  3. Fabio: Here is one reason OLS is being disparaged (though not necessarily the one people have in mind).

    And there is, of course, still much debate.

    tf

    October 8, 2007 at 4:51 pm

  4. TF: Good point, but if you are really interested in extreme events, or power laws, you could do OLS with control variables for extreme cases, or use transformations of the LHS variable. Basically, there is little that can’t be fruitfully modeled by a clever analyst using OLS or one of its variants.

    fabiorojas

    October 8, 2007 at 4:58 pm
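A minimal sketch of fabio’s suggestion, on simulated data: a heavy-tailed (log-normal) outcome that misbehaves in levels but is handled well by OLS on a log-transformed LHS variable. All numbers below are illustrative assumptions.

```python
# Hypothetical sketch: heavy-tailed outcome, OLS in levels vs. OLS on log(y).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
x = rng.normal(size=1000)
y = np.exp(1.0 + 0.8 * x + rng.normal(scale=1.5, size=1000))  # heavy right tail

X = sm.add_constant(x)
levels = sm.OLS(y, X).fit()           # estimates dominated by a few extreme cases
logged = sm.OLS(np.log(y), X).fit()   # the transformation tames the tail

print(levels.params, logged.params)   # the logged model recovers roughly 0.8 on x
```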

  5. Yes, Fabio, it is the cool thing to do. Conform now!

    Actually, I mostly use regression analysis too, and sometimes use good ole OLS. I have nothing against it per se, but I think we need to be open to a variety of methods.

    brayden

    October 8, 2007 at 4:59 pm

  6. I think everyone is open to a variety of methods, but the strong argument recently made by McKelvey and others (apparently the subject of an upcoming Organization Science conference) is that most management/org research is mis-specified and wrong given the Gaussian approach that predominates.

    tf

    October 8, 2007 at 5:01 pm

  7. Brayden: I agree we need to be open. But I am flustered when people slam OLS and then turn around and praise, say, event history models, when those are usually just juiced up linear models. At a certain level, it is simply illogical.

    TF: The proof is in the pudding. I readily admit that much social science data violates the classical OLS assumptions. But very often OLS gives amazingly good answers and the fancier method yields fairly similar ones. What’s the point?

    What I have always wanted from the methodology crowd is a theorem that says: “You should use method X when the data has property Y. If you don’t have Y, then OLS and X will produce answers that differ by Z.” In other words, why should I learn some crazy new technique if a good application of OLS will tell me the same info?

    fabiorojas

    October 8, 2007 at 5:09 pm
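No such general theorem exists, as far as I know, but the comparison fabio asks for is easy to run case by case. A hypothetical sketch: fit OLS and Poisson to the same simulated count data and compare what each says about the sign and significance of x.

```python
# Hypothetical sketch: OLS and Poisson fit to the same simulated count data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
x = rng.normal(size=500)
y = rng.poisson(np.exp(0.5 + 0.4 * x))

X = sm.add_constant(x)
ols = sm.OLS(y, X).fit()
poisson = sm.GLM(y, X, family=sm.families.Poisson()).fit()

# The two typically agree on sign and significance; the scales differ
# (the OLS slope is on the count scale, the Poisson slope on the log scale).
print("OLS:    ", ols.params[1], ols.pvalues[1])
print("Poisson:", poisson.params[1], poisson.pvalues[1])
```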

  8. Fabio: Right, I agree with you about that. In my mind, I’m grouping OLS, event-history, count models, and all other regression models into the same category. When I talk about methodological diversity I mean the inclusion of methods that operate according to a very different logic, including but not limited to ethnographies and case studies.

    brayden

    October 8, 2007 at 5:20 pm

  9. Relating to Teppo’s point, the special topic forum in AMR has an interesting piece by Arturs Kalnins about the problems of sample selection bias. The problem is a big one in organizational studies, as we tend to study big organizations for which there is readily available data. Of course, by sampling on particular kinds of firms (or other organizations) we are building in certain theoretical biases, especially if you believe that the kinds of methods we use influence our theoretical models.

    brayden

    October 8, 2007 at 5:22 pm
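A minimal simulation of the selection problem brayden describes, under hypothetical assumptions: data are “readily available” only for high performers, so the sample is selected on the outcome, which attenuates the estimated effect of size.

```python
# Hypothetical sketch: selecting the sample on the outcome biases OLS estimates.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
size = rng.normal(size=5000)
performance = 1.0 + 0.5 * size + rng.normal(size=5000)

full = sm.OLS(performance, sm.add_constant(size)).fit()

observed = performance > 1.0   # data "available" only for high performers
sel = sm.OLS(performance[observed], sm.add_constant(size[observed])).fit()

print(full.params[1], sel.params[1])   # the selected-sample slope is attenuated
```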

  10. Right, Brayden – this ties into the problem of comparison, e.g., of young and old organizations (our theories and data build heavily on the latter [Fortune 500 public companies], thus biasing our picture of the org landscape).

    tf

    October 8, 2007 at 5:45 pm

  11. The difference between OLS and various non-linear models is that in some cases OLS does not have a clear theoretical foundation. This was one of the points of Coleman’s analysis of Poisson process models in The Mathematics of Collective Action. The same goes for various duration models, in which theoretical considerations constrain the shape of the hazard function; this cannot be estimated via OLS. In organizational ecology, the founding rate has a clear theoretical interpretation. This quantity cannot be estimated via OLS. Similarly, split-population (or hurdle) models have an intuitive link to the postulated data-generating process, which is nonexistent in linear models such as OLS, in which the expected realization of the dependent variable can lie anywhere between minus infinity and plus infinity.

    Omar

    October 8, 2007 at 5:51 pm
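A small sketch of Omar’s last point, using simulated data: OLS fitted values for a count outcome can go negative, while a Poisson model’s log link keeps the implied rate strictly positive. The data-generating step is a hypothetical illustration.

```python
# Hypothetical sketch: OLS can predict negative counts; a log link cannot.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
x = rng.normal(size=500)
y = rng.poisson(np.exp(-1.0 + 1.2 * x))   # many zeros, rapid growth in x

X = sm.add_constant(x)
ols = sm.OLS(y, X).fit()
poisson = sm.GLM(y, X, family=sm.families.Poisson()).fit()

print("min OLS fitted value:    ", ols.fittedvalues.min())      # typically negative
print("min Poisson fitted value:", poisson.fittedvalues.min())  # strictly positive
```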

  12. Omar, my claim is that:

    1. Most of the fancy models we use have the form:

    linear models + some sort of link function/transformation + an MLE/pseudo-MLE estimator.

    Basically everything in the sociologist’s toolbox fits this form: OLS, 2/3-stage LS, SUR, SEM, WLS, GLS, Poisson/negative binomial, logits/probits/multinomials, tobits, Heckman, event history (both logit & Cox), log-linear. They are all linear models that have an additional transformation, or an alternate computation of confidence intervals and point estimates.

    2. Many times, adding the stuff after the linear model does not tell you much that you didn’t already know from a well-chosen linear model. At best, it adds extra information about the structure of the data that is useful to know (e.g., the alpha in a negative binomial).

    My point is essentially a pragmatist point. Theoretically, these elaborations of OLS are more justified, but as a matter of practice they often tell you much the same thing as a smart OLS model. Ultimately, if the results of a fancy model differ from OLS, my gut sense is that it indicates highly unusual data, or that you are picking up on some really minute differences.

    fabiorojas

    October 8, 2007 at 7:18 pm
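fabio’s schema maps directly onto the generalized linear model framework. A hypothetical sketch: the same design matrix and linear predictor, with only the family/link swapped, covers the Gaussian (OLS), logit, and Poisson cases, all fit by (pseudo-)maximum likelihood.

```python
# Hypothetical sketch: one linear predictor, different links, all fit by MLE.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)
x = rng.normal(size=400)
X = sm.add_constant(x)
eta = 0.3 + 0.7 * x                                   # the shared linear model

y_gauss = eta + rng.normal(size=400)                  # identity link ~ OLS
y_binary = (rng.random(400) < 1 / (1 + np.exp(-eta))).astype(int)  # logit link
y_count = rng.poisson(np.exp(eta))                    # log link ~ Poisson

for y, family in [(y_gauss, sm.families.Gaussian()),
                  (y_binary, sm.families.Binomial()),
                  (y_count, sm.families.Poisson())]:
    fit = sm.GLM(y, X, family=family).fit()
    print(type(family).__name__, fit.params)
```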

  13. i’m with fabio, but i would add two things:

    1) not only are the fancier methods harder to do but they are harder to read. i believe one of kieran’s articles showed both OLS and logit results for giving blood. now logit is a pretty basic technique, but some of the audiences for that article were only moderately savvy, and so including the technically inferior but substantively similar OLS results allowed them to read it.

    2) extra fancy methods can be so hard that they end up being impractical to apply and the net result is that perfectionism among methodologists may make for /less/ appropriate techniques by the median empirical practitioner. for instance, i currently have an R+R on a methods piece and the reviewers asked that we change some assumptions. their critique was very well-founded but it doesn’t change the results much in our test data and the reviewers’ version can only be done in SAS or BUGS whereas our original version could be implemented in any package that supports multilevel analysis.

    gabrielrossman

    October 8, 2007 at 8:44 pm
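A hypothetical sketch of the practice gabriel describes in point 1: for a simulated binary outcome, the linear probability model’s OLS slope usually lands close to the logit’s average marginal effect, which is why reporting both rarely changes the substantive story. The data and parameters are illustrative assumptions.

```python
# Hypothetical sketch: linear probability model vs. logit average marginal effect.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
x = rng.normal(size=1000)
p = 1 / (1 + np.exp(-(-0.5 + 0.8 * x)))
outcome = (rng.random(1000) < p).astype(int)   # simulated binary outcome

X = sm.add_constant(x)
lpm = sm.OLS(outcome, X).fit()                 # the "technically inferior" LPM
logit = sm.Logit(outcome, X).fit(disp=0)

print("LPM slope:              ", lpm.params[1])
print("Logit avg marginal eff.:", logit.get_margeff().margeff[0])
```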

  14. […] Brayden noted recently, one of the things sociology and org theory tend to study is how institutions spread, in part […]

