orgtheory.net

Archive for the ‘social construction’ Category

is 2020 the “drop your tools” and “do-ocracy” epoch?

In Karl Weick’s (1996) analysis of the Mann Gulch disaster and a similar fire at South Canyon, he differentiates the organizational conditions under which some smoke jumpers survived, while others died when wildfires suddenly turned.  According to Weick, the key turning point between survival and death was the moment when one firefighter ordered others in his team to “drop your tools.”  Among other organizing challenges, this order to leave expensive equipment violated smoke jumpers’ routines, even their central identities as smoke jumpers.  Indeed, some did not comply with this unusual order to abandon their tools, until others took their shovels and saws away.  Post-mortem reports revealed how smoke jumpers who perished were still wearing their heavy packs, with their equipment still at their sides.  Those who shed their tools, often at the urging of others, were able to outrun or take shelter from the wildfires in time.  Weick’s introduction states,

“Dropping one’s tools is a proxy for unlearning, for adaptation, for flexibility…It is the very unwillingness of people to drop their tools that turns some of these dramas into tragedies” (301-302).

 

Around the world, some organizations, particularly those in the tech and finance industries, were among the first to enact contingency plans such as telecommuting and spreading workers out among sites.  Such steps prompted consternation among some about the possible meaning and aims of such actions – is the situation that serious?  Is this just an opportune moment for surveilling more content and testing outsourcing and worker replaceability?  What does all this mean?

 

Meanwhile, other organizations are investing great efforts to continue regular top-down operations, sprinkled with the occasional fantasy planning directive.  (Anyone who has watched a class of undergraduates and then a class of kindergarteners try not to touch their faces will quickly realize the limits of such measures.)  Without the cooperation of organizations and individual persons, critics and health professionals fear that certain organizations – namely hospitals and the medical care system – can collapse, as their operations and practices are designed for conditions of stability rather than large, sustained crises.

(Image: the “flatten the curve” chart.)

 

For organizational researchers like myself, these weeks have been a moment of ascertaining whether organizations and people can adapt, or whether they need some nudging to acknowledge that all is not normal and to adjust.  At an individual level, we’re all facing situations with our employers, voluntary organizations, schools and universities, and health care for the most vulnerable.

 

For the everyday person, the realization that organizations such as the state can be slow to react, and perhaps have various interests and constraints that inhibit proactive rather than reactive action, may be imminent.  So, what can compensate for these organizational inabilities to act?  In my classes, I’ve turned towards amplifying more nimble and adaptive organizational forms and practices.  Earlier in the semester, I had students discuss readings such as the Combahee River Collective statement in How We Get Free (2017, AK Press), to teach about non- and less-bureaucratic options for organizing that incorporate a wider range of stakeholders’ interests, including ones that challenge conventional capitalist exchanges.

 

To help my undergraduates think through immediately applicable possibilities, I recently assigned a chapter from my Enabling Creative Chaos book on “do-ocracy” at Burning Man to show how people can initiate and carry out both simple and complex projects to meet civic needs.  Then, I tasked them with thinking through possible activities that exemplify do-ocracy.  So far, students have responded with suggestions about pooling together information, supplies, and support for the more vulnerable.  One even recommended undertaking complex projects like developing screening tests and vaccines – something that, if I’ve read between the lines correctly, well-resourced organizations have been able to do as part of their research, bypassing what appears to be a badly hampered CDC response in the US.

 

(For those looking for mutual aid-type readings that are in a similar vein, Daniel Aldrich’s Black Wave (2019, University of Chicago Press) examines how decentralized efforts enabled towns in Japan to recover more quickly from disasters.)

 

Taking a step back, this period could be one where many challenges, including climate change and growing inequality, can awaken some of us to our individual and collective potential.  Will this be the epoch where we engage in emergent, interdependent activities that promote collective survival?  Or will we instead suffer and die as individuals, with packs on our backs, laden down with expensive but ultimately useless tools?

Written by katherinechen

March 9, 2020 at 3:29 pm

ho ho ho, to santa’s place we go: the spectacle of turning snow into euros

How does a sparsely populated, snowy, and remote area in Finland become Santa’s retreat, drawing tourists eager to spot Santa and his abode?

Organization Science has an article about Enontekiö’s transformation into a tourist destination.  Here’s the intriguing abstract about how to realize a myth via marketing:

The Conversation blog features a co-author’s general audience-friendly preview of the article.

Happy holidays, everyone!  Wishing you all happiness and health.

Written by katherinechen

December 25, 2018 at 1:11 am

Posted in culture, social construction


book forum: the conversational firm by catherine turco, part 2

This month, we are reviewing Catherine Turco’s The Conversational Firm. Earlier, I summarized the contents. The book is an ethnographic account of a tech firm that uses social media for internal communication. Turco’s main goal is to advance the argument that social media has substantially altered communication and hierarchy inside firms. Now, I’ll highlight some strong points of the book and next week I will raise critiques.

First, the book correctly points out that the interactional order of firms is now quite different in the social media age than before. In a world of paper-based communication and face-to-face meetings, it was relatively easy to control who knew what. In contrast, it is now possible for modern firms to have much more wide-ranging discussions. The project manager really does have (some) direct access to the CEO. This is truly remarkable.

Second, the book discusses the possibility that authority may be redefined in this situation. If everyone at work has a wiki where they can discuss the firm’s issues, then managers may end up giving away power to others.

For me, these two lessons point to an important issue in organizational design – the importance of social media as a tool for “flattening out” the organization. This has gotten a lot of attention among business writers and management scholars. The lesson I take from Turco’s book is that the story is complex. On the one hand, yes, social media democratizes the culture of many firms. But on the other hand, this is not straightforward or even desirable in many cases. The “internal” public sphere of a firm may not be the best place to settle policy. Allowing the middle of the organization to define issues may or may not be valuable or constructive.

Next week: Why didn’t Turco talk about laziness?

50+ chapters of grad skool advice goodness: Grad Skool Rulz ($5 – cheap!!!!)/Theory for the Working Sociologist/From Black Power/Party in the Street  

Written by fabiorojas

March 24, 2017 at 3:12 pm

why do women who do more housework sometimes think it’s fair? an answer from mito akiyoshi

Former guest blogger Mito Akiyoshi has a new article in PLoS One about perceptions of fairness in the family. From the abstract:

Married women often undertake a larger share of housework in many countries and yet they do not always perceive the inequitable division of household labor to be “unfair.” Several theories have been proposed to explain the pervasive perception of fairness that is incongruent with the observed inequity in household tasks. These theories include 1) economic resource theory, 2) time constraint theory, 3) gender value theory, and 4) relative deprivation theory. This paper re-examines these theories with newly available data collected on Japanese married women in 2014 in order to achieve a new understanding of the gendered nature of housework. It finds that social comparison with others is a key mechanism that explains women’s perception of fairness. The finding is compatible with relative deprivation theory. In addition to confirming the validity of the theory of relative deprivation, it further uncovers that a woman’s reference groups tend to be people with similar life circumstances rather than non-specific others. The perceived fairness is also found to contribute to the sense of overall happiness. The significant contribution of this paper is to explicate how this seeming contradiction of inequity in the division of housework and the perception of fairness endures.

Nice application of reference group theory. Once again, more evidence that happiness and grievance don’t always reflect material conditions.


Written by fabiorojas

July 15, 2015 at 12:01 am

asian american privilege? a skeptical, but nuanced, view, and a call for more research – a guest post by raj ghoshal and diana pan

Raj Andrew Ghoshal is an assistant professor of sociology at Goucher College and Yung-yi Diana Pan is an assistant professor of sociology at Brooklyn College. This guest post is a discussion of Asian Americans and their status in American society.

As a guest post last month noted, Asian Americans enjoy higher average incomes than whites in the United States. We were critical of much in that post, but believe it raises an under-examined question: Where do Asian Americans stand in the US racial system? In this post, we argue that claims of Asian American privilege are premature, and that Asian Americans’ standing raises interesting questions about the nature of race systems.

We distinguish two dimensions of racial stratification: (1) a more formal, mainly economic hierarchy, and (2) a system of social inclusion/exclusion. This is a line of argument developed by various scholars under different names, and in some ways parallels claims that racial stereotypes concern both warmth and competence. We see Asian Americans as still behind in the more informal system of inclusion/exclusion, while close (but not equal) to whites in the formal hierarchy. Here’s why.


Written by fabiorojas

February 4, 2015 at 12:01 am

defending computational ethnography

Earlier this week, I suggested a lot is to be gained by using computational techniques to measure and analyze qualitative materials, such as ethnographic field notes. The intuition is simple. Qualitative research uses, or produces, a lot of text. Normally, we have to rely on the judgment of the researcher. But now, we have tools that can help us measure and sort the materials, so that we have a firmer basis on which to make claims about what our research does and does not say.

The comments raised a few issues. For example, Neal Caren wrote:

 This is like saying that you want your driverless cars to work for Uber while you are sleeping. While it sounds possible, as currently configured neither ethnographic practices nor quantitative text analysis are up to the task.
This is puzzling. No one made this claim. If people believe that computers will do qualitative work by collecting data or developing hypotheses and research strategies, then they are mistaken. I never said that, nor did I imply it. Instead, what I did suggest is that computer scientists are making progress on detecting meaning and content, and are doing so in ways that would help researchers map out or measure text. As with any method, the researcher is responsible for providing definitions, defining the unit of analysis, and so forth. Just as we don’t expect regression models to work “while you are sleeping,” we don’t expect automated topic models or other techniques to work without a great deal of guidance from people. It’s just a tool, not a magic box.
Another comment was meant as a criticism, but actually supports my point. For example, J wrote:
This assumes that field notes are static and once written, go unchanged. But this is not the consensus among ethnographers, as I understand the field. John Van Maanen, for example, says that field notes are meant to be written and re-written constantly, well into the writing stage. And so if this is the case, then an ethnographer can, implicitly or intentionally, stack the deck (or, in this case, the data) in their favor during rewrites. What is “typical” can be manipulated, even under the guise of computational methods.
Exactly. If we suspect that field notes and memos are changing after each version, we can actually test that hypothesis. What words appear (or co-appear) in each version? Do word combinations with different sentiments or meanings change in each version? I think it would be extremely illuminating to see what each version of an ethnographer’s notes keeps or discards. Normally, this is impossible to observe and, when reported (which is rare), hard to measure. Now, we actually have some tools.
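This kind of version audit can be done with nothing more than word counts. Here is a toy sketch (the two note “versions” are invented for illustration) that diffs the vocabulary of two drafts of the same field note:

```python
# Toy sketch: audit what a rewrite of a field note kept, added, or dropped
# by comparing bag-of-words counts across two (invented) versions.
from collections import Counter

v1 = "the teacher seemed tired and dismissed student questions quickly"
v2 = "the teacher seemed impatient and dismissed several student questions"

def word_counts(text):
    """Lowercase bag-of-words counts for one version of a note."""
    return Counter(text.lower().split())

c1, c2 = word_counts(v1), word_counts(v2)
added = set(c2) - set(c1)    # words introduced in the rewrite
dropped = set(c1) - set(c2)  # words removed in the rewrite

print(sorted(added))    # ['impatient', 'several']
print(sorted(dropped))  # ['quickly', 'tired']
```

Run over every saved draft, a diff like this would make the rewriting process observable rather than hidden.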
Will computational ethnography be easy or simple? No. But instead of pretending that qualitative research is buried in a sacred and impenetrable fog of meaning, we can actually apply the tools that are now becoming routine in other areas for studying masses of text. It’s a great frontier to be working in. More sociologists should look into it.


Written by fabiorojas

January 23, 2015 at 12:01 am

computational ethnography

An important frontier in sociology is computational ethnography – the application of textual analysis, topic modelling, and related techniques to the data generated through ethnographic observation (e.g., field notes and interview transcripts). I got this idea when I saw a really great post-doc present a paper at ASA where historical materials were analyzed using topic modelling techniques, such as LDA.

Let me motivate this with a simple example. Let’s say I am a school ethnographer and I make a claim about how pupils perceive teachers. Typically, the ethnographer would offer an example from his or her field notes that illustrates the perceptions of the teacher. Then, someone would ask, “is this a typical observation?” and then the ethnographer would say, “yes, trust me.”

We no longer have to do that. Since ethnographers produce text, one can use topic models to map out themes or words that tend to appear in field notes and interview transcripts. Then, all block quotes from field notes and transcripts can be compared to the entire corpus produced during field work. Not only would this attest to the commonality of a topic, but it would also show how the topic is embedded in a larger network of discourse and meaning.
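As a hedged sketch of what this might look like in practice (the field notes, the two-topic choice, and the block quote are all invented for illustration), scikit-learn’s LDA implementation can place both the corpus and a candidate quote in the same topic space:

```python
# Minimal sketch: fit a topic model to (hypothetical) field notes, then ask
# how typical a quoted excerpt is relative to the corpus as a whole.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

field_notes = [
    "students joked about the substitute teacher before class began",
    "the teacher praised students who finished the worksheet early",
    "students complained that the teacher graded the quiz unfairly",
    "during recess students compared notes on which teachers were strict",
]

# Convert notes to a document-term matrix, then fit LDA with 2 topics.
vectorizer = CountVectorizer(stop_words="english")
dtm = vectorizer.fit_transform(field_notes)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(dtm)  # rows: notes, cols: topic proportions

# A block quote the ethnographer wants to use: project it into topic space
# and compare it to the corpus-average topic mix.
quote = ["students complained the teacher was unfair about grades"]
quote_topics = lda.transform(vectorizer.transform(quote))
print(doc_topics.mean(axis=0), quote_topics[0])
```

If the quote’s topic mix sits close to the corpus average, that is some evidence the excerpt is typical; if it sits far away, the “trust me” answer deserves scrutiny.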

Cultural sociology, the future is here.


Written by fabiorojas

January 20, 2015 at 12:01 am

race and genomics: comments on shiao et al.

Shiao et al. in Sociological Theory, the symposium, Scatterplot’s discussion, Andrew Perrin’s comments, last week’s discussion.

Last week, I argued that many sociologists make a strong argument. Not only are social classifications of race a convention, but there is no meaningful clustering of people that can be derived from physical or biological traits. To make this claim, I suggested that one would need to have a discussion of what meaningful traits would include, get a huge sample of people, and then see if there are indeed clusters. The purpose of Shiao et al. (2012) is to claim that when someone conducts such an exercise, there is some clustering.

Before I offer my own view of the evidence that Shiao et al offer, we need to set some ground rules. What are the logical possible outcomes of such an exercise?

  1. The null hypothesis: your clustering methods yield no clusters (e.g., there are no detectable sub-groups of people).
  2. The weak hypothesis: clustering algorithms yield ambiguous results. It’s like getting a small correlation with p = .07 in a regression analysis. This is important because it should shift your prior moderately.
  3. The “conventional” strong hypothesis: unambiguous groups that correspond to social classifications of people. E.g., there really is a “White” group of people corresponding to people from Europe.
  4. The “unconventional” strong hypothesis: unambiguous groups that do not correspond to common social classifications of people. For example, there might be an extremely well defined group of people that combines Hawaiians and Albanians.

A few technical points, which are important. First, any such exercise will need to incorporate robustness checks, because clustering methods require the user to set initial parameters. Clustering algorithms do not tell you how many groups there are. Instead, they answer the question of how well the model fits the hypothesis that you have X groups. Second, sociologists tend to mix up these possible outcomes. They correctly point out that there is a social construction called “race” which is real in its effects and influence on people. But that doesn’t logically entail anything about the presence or absence of human populations that are differentiated due to random variation of inherent physical traits over time. Also, they fail to consider #4. There might be actual differences, but they might not match up to our common beliefs.
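To make the robustness point concrete, here is a small illustration (on synthetic data, not genomic data) of why the number of clusters is an input the user supplies, and why one compares fit across several candidate values rather than trusting a single run:

```python
# Hedged illustration: clustering algorithms return however many groups you
# ask for, so fit is compared across candidate k. Data here are synthetic.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)
# Two well-separated blobs standing in for "trait" measurements.
data = np.vstack([
    rng.normal(loc=0.0, scale=0.5, size=(50, 2)),
    rng.normal(loc=5.0, scale=0.5, size=(50, 2)),
])

# Fit k-means for several k; the silhouette score rates how well-defined
# the resulting clusters are (higher is better).
for k in range(2, 6):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(data)
    print(k, round(silhouette_score(data, labels), 3))
# With clearly separated blobs, k=2 should score highest; with gradual,
# clinal variation the scores would instead stay low and ambiguous.
```

The same procedure run on clinal data would produce the weak, ambiguous fits of outcome #2 rather than the sharp groups of outcome #3.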

So what does Shiao et al. offer, and where does it lie in this spectrum of possibilities? Well, the article is not a systematic review of genomic research that searches for clusters of people. Rather, it offers a few important points drawn from anthropology and genomics. First, Shiao et al. point out that there is a now-undisputed (among academics) account of human history. Humans originated in East Africa and then spread out (the “Out of Africa” thesis). Second, as people spread out, genomic variation emerges as people mate with people close by. Third, genetic drift implies that geography will predict variations in genes. As you move from X to Y, you will see measurable differences in people. Fourth, these differences are gradual in character.

Shiao et al. then switch gears and talk about clustering of people using genomic data. They tell us that there are statistically detectable and stable group differences and that these do not rigidly determine behavior. They also cite research suggesting these statistical groups correlate with self-described racial groupings. Then, the authors discuss a “bounded” approach to social theory where biology imposes some constraints on the variation in behavior, but in a non-deterministic fashion.

I’ll get to the symposium next week, but here’s my response:

  1. There is a real tension. At some points, Shiao et al. suggest a world of gradual variation, which suggests no distinct racial groups (outcome #1), but then there’s a big focus on clusters.
  2. If we do live in a world of gradual, but real, variation in human biology, then the whole clustering approach is misleading. Instead, we might live in a world that’s like a contour map. It’s all connected, there are no groups, but you see some variables increase as you move along the map.
  3. If that’s true, we need an outcome #5 – “race is not real but biology is real.”
  4. I definitely need more detail on the clustering methods and procedures. Some critics have pointed out that the clusters found in research are endogenously produced, which makes me suspect that the underlying science might be hovering around outcome #1 (it all depends on the algorithm and its parameters) or #2 (there might be some clustering, but it is very poorly defined).


Written by fabiorojas

October 20, 2014 at 12:01 am

why sociology is a bit more threatening than economics: the case of fetal alcohol syndrome

Consider the following approaches to the same issue – fetal alcohol syndrome (FAS). In 2000, Elizabeth Armstrong and Ernest Abel published an article in the journal Alcohol and Alcoholism arguing that fetal alcohol syndrome had become a moral panic.  Even though people had become obsessed with FAS, there was actually very little evidence to suggest that moderate alcohol consumption damaged fetuses. This argument is elaborated in the 2008 book Conceiving Risk, Bearing Responsibility. In 2013, the economist Emily Oster published a book called Expecting Better, which assesses pregnancy advice with a review of the pertinent clinical evidence. Like Armstrong, Oster finds that the norm against moderate alcohol consumption is not supported by the data.

The comparison between Oster and Armstrong is revealing. For example, more people know about Expecting Better because, frankly, economists are more respected than sociologists. But there is a deeper lesson. When Oster frames her work, she presents it as a morally neutral project. Her framing is roughly: “Statistics is hard, people may not have all the facts, and you might have a mistaken belief, but as an economist, I am trained in statistics. I can help you make a better choice.” Thus, the reader is morally blameless.

In contrast, Armstrong’s approach to FAS relies on standard explanations of moral panics in sociology. It goes something like this: “The facts we believe reflect our underlying biases. These biases reflect our evaluations of certain types of people, who may not deserve that stigma.” Thus, if the reader buys FAS, they are implicated in an immoral action – unfairly exercising gender prejudice. Heck, all of society is implicated.

This is an interesting observation about the public image of disciplines. Economists may advocate unpopular policies (e.g., they are often critical of minimum wage laws) but their moral framework is fairly neutral and technocratic. If you don’t buy my policy, it’s probably because you aren’t aware of all the factors involved. You haven’t calculated the social welfare function properly! In contrast, sociologists often make arguments that implicate the moral character of the audience. And that doesn’t buy you a lot of friends.


Written by fabiorojas

June 11, 2014 at 12:01 am

how corporations got rights

This week the Supreme Court considered whether corporations ought to have constitutional rights of religious freedom, as given to human individuals, in Sebelius v. Hobby Lobby Stores Inc. For many people, the idea that companies ought to be given all of the rights of humans is absurd. But in recent years, this idea has become more and more of a reality, thanks to game-changing cases such as Citizens United v. FEC. How did we get to this place?

In an article on Slate, Naomi Lamoreaux and William Novak briefly go over the history of how corporations evolved from artificial persons to real persons with human rights. They emphasize that this change was a gradual development that still seemed unthinkable to justices as late as the Rehnquist court.

The court’s move toward extending liberty rights to corporations is even more recent. In 1978, the court held in First National Bank of Boston v. Bellotti that citizens had the right to hear corporate political speech, effectively granting corporations First Amendment speech rights to spend money to influence the political process. But even then, the decision was contentious. Chief Justice William H. Rehnquist, in dissent, reminded the court of its own history: Though it had determined in Santa Clara that corporations had 14th Amendment property protections, it soon after ruled that the liberty of the due-process clause was “the liberty of natural, not artificial persons.”

If you find this piece interesting then I would encourage you to read Lamoreaux’s collaboration with Ruth Bloch, “Corporations and the Fourteenth Amendment,” a much more detailed look at this history. One interesting point that emerges from this paper is that our general understanding of how rights became ascribed to corporations is historically inaccurate. Bloch and Lamoreaux assert that although the Court in Santa Clara v. Southern Pacific Railroad  likened corporations to individuals and asserted that they might have some protected rights, they were careful to distinguish between corporate and human civil rights.

During the late nineteenth and early twentieth centuries, the Supreme Court drew careful distinctions among the various clauses of the Fourteenth Amendment. Some parts it applied to corporations, in particular the phrases involving property rights; but other parts, such as the privileges and immunities clause and the due-process protections for liberty, it emphatically did not. Although this parsing might seem strange to us today, it derived from a remarkably coherent theory of federalism in which the Court positioned itself both as the enforcer of state regulatory authority over corporations and as the guardian of individual (but not corporate) liberty against state intrusion. To the extent that the Court extended constitutional protections to corporations, it did so to protect the interests of the human persons who made them up.

Read the whole paper. It’s fascinating!

Written by brayden king

March 28, 2014 at 3:15 pm

are we a post-racial society? see Vilna Bashi Treitler’s new book The Ethnic Project: Transforming Racial Fiction into Ethnic Factions

Sociologist Vilna Bashi Treitler has a new and especially timely book coming out about race and ethnicity in the US.  Look for The Ethnic Project: Transforming Racial Fiction into Ethnic Factions at your local bookstore or at the Stanford University Press table at ASAs.

The Ethnic Project analyzes changing depictions of race relations to help readers understand the development and perpetuation of a racial hierarchy in the US.


The cover of Harper’s Weekly depicts the status of African-Americans and Irish-Americans using racial stereotypes in 1876

A contemporary cartoon and summary after the jump.

Written by katherinechen

July 23, 2013 at 5:00 pm

delegitimization as a new world order

Imagine going to the ATM and discovering you can’t withdraw your money because the ATM is out of cash. Not only that, but the bank is closed because of a national holiday so you can’t use the bank teller to withdraw money, electronic transfers of funds are frozen, and the stores refuse to accept credit cards out of concerns that electronic payments won’t be made. If you are able to get to your money, you learn that 6.75% of your funds (or 9.95% if you are a lucky ducky with over 100K euros in your account) will be converted into bank shares under a compulsory levy intended to prop up the banking system. The mortgage payment that you scheduled, the student loan check that you deposited to pay for your education, the vendors that you need to pay for your small business – all are up in the air.
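For concreteness, the arithmetic of the levy described above can be sketched as follows; this assumes the quoted rate applies to the entire balance depending on which side of the 100K-euro threshold it falls, which is one reading of the reports rather than final policy text.

```python
# Sketch of the proposed Cypriot deposit levy: 6.75% below the 100K-euro
# threshold, 9.95% above it, applied (on this reading) to the full balance.
def levy(balance_eur):
    """Euros converted to bank shares under the compulsory levy."""
    rate = 0.0995 if balance_eur > 100_000 else 0.0675
    return balance_eur * rate

print(levy(50_000))   # roughly 3,375 euros forcibly converted
print(levy(200_000))  # roughly 19,900 euros for the "lucky ducky"
```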

Even if you are told “nevermind, we’re re-evaluating that policy, back to the drawing board!”, what’s the rational thing to do? Most likely, you as a depositor will lose trust in the banking system and pull out as much as you can. If you are in an adjoining country with a shared currency, the mattress, precious metals, and alternate currencies are looking like more attractive places to keep your money. This is the scenario currently unfolding for residents in Cyprus and those who were parking their money in what seemed like a safe haven.

Less than a year ago, Greece was in a similar situation and is still dealing with the consequences. Now, it’s Cyprus’s turn. These supposedly one-off, “unique” situations involving untested interventions are becoming regularities as banking and governance systems around the world become more tightly coupled. Although Chick Perrow’s Normal Accidents: Living with High Risk Technologies discusses nuclear plants and chemical plants, his concept of how reactions, once started, are hard to stop (much less understand) in tightly coupled systems is a helpful read. Add to his concept the erosion of a shared understanding of and belief in institutions, and you have a potent mix – the delegitimization of trust in banking and governance that we may be seeing in the EU.

For those of us who have been living under the various rocks of committee work/teaching/research/other commitments, a little background reading: Dealbreaker’s take, with plenty of links to others’ analysis, Reuters, and Zero Hedge’s mordant posts.

Written by katherinechen

March 21, 2013 at 2:47 pm

emergence of organizations and markets, part I by padgett & powell

A guest post by John Padgett and Woody Powell about their new book The Emergence of Organizations and Markets:

Innovation in the sense of product design is a popular research topic today, because there is a lot of money in that. Innovation, however, in the deeper sense of new actors—new types of people, new organizational forms—is not even much on the research radar screen of contemporary social scientists, even though “speciation” (to use the biologists’ term for this) lies at the heart of historical change over the longue durée, both in biological evolution and in human history. Social science—meaning mostly economics, political science and sociology—is very good at understanding selection, both at the micro level of individual choice and at the macro level of institutional regulation and lock-in. But novelty, especially of actors but also of alternatives, has first to enter from off the stage of our collective imaginary for our existing theories to be able to go to work. Our analytical shears for trimming are sharp, but the life forces that push up novelty to be trimmed tend to escape our attention, much less our understanding. If this book accomplishes anything, we at least hope to put the research topic of speciation—the emergence of new organizational forms and people—on our collective agenda.


Written by fabiorojas

February 7, 2013 at 12:01 am

february guest bloggers: john padgett and woody powell

It is my pleasure to announce our February guest bloggers: Woody Powell and John Padgett. Professor Powell is Professor of Education and (by courtesy) Sociology, Organizational Behavior, Management Science and Engineering, Communication, and Public Policy at Stanford University. Professor Padgett is Professor of Political Science at the University of Chicago.

Woody and John are both leading figures in the study of organizations and networks. Professor Powell is co-author, with Paul DiMaggio, of the groundbreaking “iron cage” article and then went on to publish a series of highly influential papers in social network analysis. John Padgett is one of political science’s leading formal modellers, having written seminal papers on budgeting & garbage can processes, the courts, and state formation. His most well known work is likely the “Medici paper,” which used network analysis to describe the cultivation of political power in early modern Italy and introduced the idea of “robust action” into modern social theory.

They will be discussing their new book: The Emergence of Organizations and Markets. Here’s a summary:

The social sciences are rich with ideas about how choice occurs among alternatives, but have little to say about the invention of new alternatives in the first place.  These authors directly address the question of emergence, both of what we choose and who we are.  With the use of sophisticated deductive models building on the concept of autocatalysis from biochemistry and rich historical case studies spanning seven centuries, Padgett and Powell develop a theory of the co-evolution of social networks.  Novelty in new persons and new organizational forms emerges from spillovers across multiple, intertwined networks.  To be sure, actors make relations; but the mantra of this book is that in the long run relations make actors.  Through case studies of early capitalism and state formation, communist economic reforms and transition, and technologically advanced capitalism and science, the authors analyze speciation in the context of organizational novelty.  Drawing on ideas from both the physical sciences and the social sciences, and incorporating novel computational, historical, and network analyses, this book offers a genuinely new approach to the question of emergence.

This week and next week, I’ll post some thoughts that John and Woody have shared with me. This is *required* reading for sociologists, management scholars, political scientists, and economists. And yes, there will be a quiz!

Adverts: From Black Power/Grad Skool Rulz

Written by fabiorojas

February 5, 2013 at 12:01 am

counter culture and social movements

Last semester, an undergraduate student wrote an essay about the movement against the Vietnam War. She asked why the movement itself was relatively unpopular even though the public was becoming disillusioned with the war. In other words, the antiwar movement won on policy, but lost on politics. Why?

Her hypothesis was that the antiwar movement became strongly associated with the counterculture. This is an important point. In my research on movements – mainly movements of the left – I have found that activists tend to have, at best, a very tense relationship with mainstream American culture. They think that conventional politics and bourgeois culture are to be mistrusted.

This leads to an issue that I’ve been thinking about – is left politics inherently countercultural? Maybe not. The Civil Rights movement was obsessed with adherence to the social norms of the day. Participants were urged to be polite, look proper, and learn how to work within and against mainstream institutions. Nowadays, most left movements seem to have a hostile relationship to mainstream culture. Occupy Wall Street was a grungy DIY movement. The antiwar movement of the 2000s followed in the steps of the anti-globalization movement in working outside conventional channels. For anyone interested in social change, it is worth thinking about this link and whether it is a necessary development or merely an affectation of the current generation of activists.

Adverts: From Black Power/Grad Skool Rulz

Written by fabiorojas

January 17, 2013 at 12:02 am

glaeser book forum part 4: theories of revolution

Part 1, part 2, part 3.

This is the last installment of this Fall’s book forum on Andreas Glaeser’s Political Epistemics. I usually reserve the last installment of the book forum for criticisms and conjectures. This will be no exception. I’ll focus on the limits of the sociology of understanding as it pertains to explaining revolutions.

As you may remember from earlier parts of the book forum, the theoretical mission of Political Epistemics is to develop a “sociology of understanding,” which is a thick description of how people make sense of their social worlds. Glaeser used interview data and archival materials to explain how people developed their identity in East Germany and how that identity eroded in the 1980s to such an extent that the Stasi refused to repress anti-socialist movements in 1989.

What I like about the sociology of understanding is that it effectively undermines Western theories of socialist collapse. It wasn’t about folks reading Hayek. It was about East Germans using socialist ideas to formulate a critique of the whole system. The internal criticism was like tugging at a loose thread.

Now, what I take issue with is the incompleteness of this explanation. It doesn’t really tap into other elements of the socialist system and its eventual collapse. For example, you don’t really get a sense of the extreme violence involved in maintaining East European socialism. This system was imposed by political conquest. It was also supported by periodic mass repression (e.g., Hungary ’56, Prague ’68). East European states did not treat dissidents well; many were met with violence. I’m a bit surprised that Glaeser didn’t delve into the violence that permeated the entire system.

Another issue is that by itself the sociology of understanding doesn’t explain the timing of the collapse. Why in 1989? Didn’t people question socialism before then? They did and there were uprisings as well. Heck, even Emma Goldman observed in the early 1920s that people weren’t thrilled with what was happening in the Soviet Union.

The key issue is that there was a generational turnover in the elite of the Soviet state and they were willing to let social change occur. This created a chain of protests first in the Baltic region, then Russia itself and then East Europe. As usual, various factions tried to repress these movements but the key elite group – the secret police – refused to do so. Thus, Glaeser doesn’t really, in my view, replace conventional views of revolution that link elite support of protest to success. Rather, he provides an account for why the elite might defect from the state. This fits neatly within current theories of revolution.

Finally, let me add that what I’d like to see is additional work by other scholars. I’d like to see the sociology of understanding applied to other groups, not just the elites. How did, say, farmers in the Ukraine construct their experience of communism? What was it about the Baltic states or those souls in 1956 Hungary that made them come out in the street? I’d love to find out.

Buy these books if you ever want to finish graduate school: From Black Power/Grad Skool Rulz

Written by fabiorojas

November 27, 2012 at 12:04 am

The “Old” New Institutionalism versus the “New” New Institutionalism

I signed on to blog on Orgtheory a couple of months ago with the express purpose of writing about “A Theory of Fields” (Oxford University Press, 2012), my new book with Doug McAdam. So here it goes.

Today I want to explain something about the shape of research in organizational theory over the past 35 years in order to situate “A Theory of Fields” in that research. The cornerstones of the “new institutionalism” in organizational theory are three works: the Meyer and Rowan paper (1977), the DiMaggio and Powell paper (1983), and the book edited by Powell and DiMaggio (1991).

I would like to take the provocative position that since about 1990, most scholars have given up on the original formulation of the new institutionalism even though they are ritually fixated on citing these canonical works. It is worth thinking about why they found that formulation limited.

The Meyer/Rowan and DiMaggio/Powell position on organizations is that actors in organizations do not have interests and that their actions are “programmed” by scripts. Moreover, actors are unable to figure out what to do, so they either follow the leader (i.e. mimic those they perceive as successful), act according to norms often propagated by professionals, or else find themselves coerced by state authorities. The Meyer/Rowan and DiMaggio/Powell world was not only void of actors; it was also void of change. Once such an order got into place, it became taken for granted and difficult to dislodge. “People” in this world told themselves stories, used myth and ceremony, and they decoupled their stories from what they were doing. This meant that the consequences of their actions were not important.  DiMaggio recognized this problem in 1988 when he suggested that in order to explain change we needed another theory, one that involved actors, interests, power, and what he called “institutional entrepreneurs.”

The core project of organizational studies since the early 1990s has been to reintroduce interests, actors, power, and the problem of change into the center of the field. Indeed, the field of entrepreneurship in management studies is probably, at the moment, the hottest part of organizational theory. If one looks at these papers, one still sees ritual citing of DiMaggio/Powell and Meyer/Rowan. But the core ideas of these papers could not be farther from those works. The focus of entrepreneurial studies is on how new fields are like social movements. They come into existence during crises. They invoke the concept of institutional entrepreneurs who build the space and create new cultural frames, interests, and identities. In doing so, the entrepreneurs build political coalitions to dominate the new order. Indeed, the gist of the past 15 years of organizational research is entirely antithetical to the “old” new institutionalism.

I submit to you that the time is now right to reject the “old” new institutionalism entirely, free our minds, and produce a “new” new institutionalism.

Read the rest of this entry »

Written by fligstein

August 23, 2012 at 9:19 pm

forget the environment, everything is endogenous

Teppo is too humble to let us know that he’s the guest editor of a new special issue of Managerial and Decision Economics.  The issue’s theme is the “emergent nature of organization, market, and wisdom of crowds.” The special issue has an impressive lineup of authors, including Nicolai Foss, Robb Willer, Bruno Frey, Peter Leeson, and Scott Page.  Teppo’s introduction, as you might expect, is provocative, challenging learning theory and behavioral theories of the firm. Here’s a little teaser:

My basic thesis is that capabilities develop from within—they are endogenous and internal. In order to develop a capability, it must logically be there in latent or dormant form. Capabilities grow endogenously from latent possibility. In some respects, capabilities should be thought about as organs rather than as behavioral and environmental inputs. Experience, external inputs and environments are, in important respects, internal to organisms, individuals and organizations. Although environmental inputs play a triggering and enabling role in the development of capability, the environment is not the cause of capability. Furthermore, the latency of capabilities places a constraint on the set of possible capabilities that are realizable. But these constraints are scarcely deterministic; rather, they also provide the means and foundation for generating novelty and heterogeneity (285).

Teppo offers a real challenge to the typical “blank slate” approaches that dominate organizational theory and sociology. Social construction has limits if you assume that some capabilities are simply latent and waiting to be triggered into action. This reminds me of what my graduate school contemporary theory instructor, Al Bergesen, used to say about the deficiency of most sociological theory. (In fact, he repeated the whole bit to me again when I ran into him in Denver’s airport Monday evening.) Sociology, he’d say, has never fully come to grips with the cognitive revolution of psychology or linguistics. We still assume that individuals are completely shaped by their social world and ignore cognitive structure and the limits this imposes on how we communicate and who we can become.  Teppo and Al would have a lot to talk about.

Written by brayden king

August 23, 2012 at 1:46 am

the ncaa and penn state’s history

Ever since the NCAA announced they would sanction Penn State for its cover-up of the Sandusky sex abuse scandal, I’ve been thinking about writing a post related to institutional jurisdictions, authority, and reputation.  I completely understand the NCAA’s response to the scandal, especially in light of the findings of the Freeh report, and I think it was very predictable. Was the punishment harsh? Yes. Was it excessively harsh as a condemnation of the crimes of Sandusky? No.  Was the NCAA operating within its jurisdiction and exercising proper use of authority by making these sanctions? That’s debatable (and I’m sure it will be in the months to come).

My colleague Gary Alan Fine, who has thought a lot about scandal and collective memory (e.g., Fine 1997), has offered his thoughts on the sanctions in a New York Times op-ed. Gary questions “history clerks” who attempt to rewrite history as a response to a contemporary event/scandal.

The more significant question is whether rewriting history is the proper answer. And while this is not the first time that game outcomes have been vacated, changing 14 seasons of football history is a unique and disquieting response. We learn bad things about people all the time, but should we change our history? Should we, like Orwell’s totalitarian Oceania, have a Ministry of Truth that has the authority to scrub the past?  Should our newspapers have to change their back files? And how far should we go?

This is a tricky issue. Everyone can agree that what happened at Penn State was deplorable. However, I think it’s perfectly reasonable to question whether the NCAA made these moves more as an effort to protect its own reputation and to safeguard the purity of college football, rather than as a reasoned response to the institutional crimes committed by Penn State’s decision-making authorities.  This scandal isn’t disappearing anytime soon, and so I expect we’ll hear a lot more about this in the months and years to come.

Written by brayden king

July 25, 2012 at 3:37 pm

Why strong social constructionism does not work I: Arguments from Reference

In this and a series of forthcoming posts, I will attempt to outline an argument showing that most of the time, claims to have derived a substantively important conclusion from constructionist premises are incoherent.  By a substantively important conclusion I refer to strong arguments for the “social construction of X,” where X is some sort of category or natural kind that is usually thought to have general ontological validity in the larger culture (e.g. gender, race, mental illness, etc.).

In a nutshell, I will argue that the reason these sorts of arguments do not really work is that they require us to draw on a theory of meaning, language and reference that is itself inconsistent with constructionism.  To put it simply: substantively important conclusions derived from constructionist premises require a theory of reference that implies at least the potential for realism about natural kinds and a strong coupling between linguistic descriptions and the real properties of the entities to which those descriptions apply, but constructionism is premised on the a priori denial of realism about natural kinds and of such a strong coupling between language and the world.  Thus, most strong claims about something being “socially constructed” cannot be strong claims at all.  This argument applies to all forms of social constructionism, whether of the phenomenological, semiotic, or interactionist varieties.

Here I will first do two things:  1) give a more “technical” definition of what I mean by a “substantively important conclusion” within a constructionist mode of argumentation (noting that my argument does not apply to “softer” versions of constructionism) and 2) nail down the point that constructionism (and any other set of premises designed to draw substantively important conclusions about the natural and social worlds) depends on an “argument from reference” in order to work.  Finally, I will lay out the argument that 3) because of this dependence, strong constructionist conclusions are usually not warranted (they follow from an incoherent argument).

The shock value in constructionism.-  In a constructionist argument, a substantively important conclusion is one that has “shock value.”  By shock value, I mean that the argument results in the conclusion that something we thought was “real” in an unproblematic sense is shown to be either a) a fictitious entity that has never been or could never be real or b) a historically contingent entity endowed with a weaker form of existence (e.g. a collectively sustained fiction or even delusion).  This is “shocking” in the sense that the constructionist thesis upsets the “folk ontology” heretofore taken for granted by lay and professional audiences alike.

A useful analogue (because it makes the technical argumentative steps clear) comes from the Philosophy of Mind. There, the most “shocking” argument ever put forth is known as “eliminativism” in relation to the so-called “propositional attitudes” (Stich 1983; Churchland 1981).  Note that this argument is actually espoused by people who consider themselves to be radical materialists almost blindly committed to a traditional scientific epistemology and an anti-dualist ontology.  Thus, I am not claiming a substantive commonality between constructionists and eliminativists.  All that I want to do here is to point to some formal commonalities in their mode of argumentation in order to set up the subsequent point of common reliance on an argument from reference.

According to the eliminativist thesis, the denizens of the mental zoo that play a role in our ability to account for our own and other people’s behavior (such as beliefs, desires, wants, etc.) do not actually exist. The reason is that the theoretical system in which they play a role (so-called “folk” or “belief-desire psychology”) is actually an empirically false theory, one that relies on the postulation of theoretical entities (mental entities) that have no scientifically defensible ontological status.

According to belief-desire psychology, persons engage in action in order to satisfy desires.  Beliefs play a causal role in behavior by providing the person with subjective descriptions of how means connect to desirable ends.  Using belief-desire psychology, we can explain why person A engages in behavior B, by postulating that “Person A believes that by doing B, she will get C, and she desires/wants C.” A belief is a proposition about the world endowed with a truth value and a desire is a proposition that describes the sorts of states of affairs that the person would like to bring about.  Both are conceived to be mental entities endowed with “intentional” content (they are about something). Their intentional content dictates how they can relate to other entities in a systematic way (e.g. because some propositions logically imply others). We can then “predict” (or retrodict) the behavior of persons by linking desires to beliefs in a way that preserves the rationality of persons.

Accordingly, if I see somebody rummaging through the contents of a refrigerator, I can surmise that this person is engaging in this sort of behavior because she believes that she will find something to eat in there, and she wants something to eat.  Relatedly, when persons are questioned as to why they did something, they usually give a “reason” for why they did what they did.  This reason takes the form of a “motive report.”  If I question somebody about why they are rummaging through a refrigerator, they are likely to say “because I’m hungry.”

According to eliminativists, the main causal factors in belief-desire psychology have no ontological status.  Thus, neither propositional beliefs of the sort “I think that p,” where p is a proposition such as “there is food in the refrigerator,” nor desires of the sort “I want q” have any ontological status.  As such, belief-desire psychology stands to be replaced by a mature neuropsychology, one in which “folk solids” such as desires and beliefs (to use Andy Clark‘s terms) will play no role in explanations and accounts of human behavior.  These notions, previously thought to be natural kinds endowed with unquestionable reality, are eliminated from our ontological storehouse and consigned to the dustbin of fictional entities discarded by modern science (such as Phlogiston, Caloric, The Ether, The Four Humors, etc.).

Constructionism and eliminativism.- I argue that most substantively important conclusions within the constructionist paradigm are actually modeled after “eliminativist” arguments in the Philosophy of Mind.

All of the pieces are there.  First, a constructionist argument usually takes some (folk or professional) system of “theory” as its target, regardless of whether that system of theory is currently in existence or comes from a previous historical era.  This is usually a folk (or sometimes professional) “theory of X” (e.g. the “folk theory of race” or the “folk theory of gender”).  Second, within this system the constructionist picks one or more central theoretical categories or concepts (X) which, within the system, are endowed with a non-problematic ontological status as real (e.g. gender or racial “essence”).  Third, the constructionist shows the folk theory of X to be false from the point of view of a more sophisticated theory (modern population genetics in the case of the old anthropological concept of “race”).  Thus X (e.g. race), as conceptualized in the folk theory, does not really exist, even though it forms a key part of certain contemporary folk theories. The title of the famous PBS documentary, “Race: The Power of an Illusion,” conveys that point well.

The constructionist may also argue for the indirect falsity of the current theory of X by simply using the historical or anthropological record to show that there are cultures or historical periods in which X either was not presumed to exist in the way that it exists today or was part of a different theoretical system which radically changed its status (the properties that define membership in the concept were radically different).  Here the constructionist will agree that X “exists” in the current setting, but that it does not have the sort of existence attributed to it in the folk discourse (transhistorical and transcultural); instead it has a weaker form of existence: social, as in “sustained by a historically and culturally contingent social arrangement which could theoretically be subject to radical change.”  Foucault’s famous argument for the radically different status of the category of “man” within the so-called “classical episteme” is an example of that sort of claim.  The category of man in the modern era has a meaning that is radically incommensurate with the one that it had in the classical episteme.  The implication follows that the category of “man” does not refer, and that we can thus conceive of a possible future in which it plays no actual role.

The common element here is that a category that we take for granted (within the descriptions afforded by some lay or professional theoretical system) to be ontologically “real” (race, gender, the category of “man”, etc.) is shown instead to “actually” have a fictitious status because there is nothing in the world that meets that description. More implicitly, insofar as a concept has undergone radical changes in overall meaning (with meaning determined by its place within a network of other concepts in the form of a folk or professional theory), then there cannot be a preservation of reference across the incommensurate meanings. Hence the concept cannot really be picking out an ontologically coherent entity in the world. I refer to this as the “strong constructionist effect.”  The basic idea, as I have already implied, is that in order for the effect to be successful, we must already be working from within some theory of reference, otherwise the claim that “there is nothing in the world that meets that description” is either vacuous or incoherent.

Constructionism and arguments from reference.- What are “arguments from reference”? Arguments from reference are those that implicitly or explicitly require a theory of reference for their conclusions to follow (or even make sense), as has been recently pointed out by Ron Mallon (2007).  When this is the case, it can be said that the substantively important conclusion is dependent on the (logically autonomous) theory of reference. It is striking how little time most social scientists spend thinking about reference. They should, because even though it is seldom explicit, we all require some theory about how conceptualizations link up (or fail to!) to events in the world in order to make substantive statements about the nature of that world. I argue that in order to produce the strong constructionist effect, and thus derive substantively important conclusions, the argument from social construction requires a particular theory of reference.

One would think that when it comes to theorizing about how conceptual, theoretical or folk terms “refer” to the world there would be various competing theories.  Instead, twentieth-century analytic philosophy was long dominated by a single account of how concepts refer.  This was Frege’s suggestion that “intension” (the meaning of a term) determines “extension” (the object in the world that the term picks out).  Lewis (1971, 1972) formalized this formulation for the case of so-called theoretical entities in scientific theories.  According to Lewis, terms in scientific theories purport to describe objects in the world bearing certain properties or standing in certain relations with other objects. This is the description of that term.  According to Lewis, the terms of Folk Psychology are theoretical entities that gain their meaning from their relations to other entities and observational statements within a system of theory.  Eliminativists built their argument on this suggestion, by suggesting that there is nothing in the (scientifically acceptable) world that meets the description for a propositional attitude (a mental entity endowed with “intentional” content); ergo, belief-desire psychology is false, its terms do not refer, and we need a better theory of the mental.

In short, from the viewpoint of a descriptivist theory of reference, a given term or concept defined within a given theoretical system refers if and only if there is an object in the world that bears the properties or stands in the relations specified in the description.  According to this theory, terms refer to real-world entities when there exists an object that satisfies the necessary and sufficient conditions of membership in the category defined by the term (which in the limiting case may be an individual).  Descriptions that have no counterpart in the real world are descriptions of fictional entities and thus fail to refer (and the validity of the theoretical systems of which they are a part is therefore impugned).  When competent speakers use the terms of any theory (scientific or folk) they have a description in mind, which specifies the set of properties that an object would have to have for that term to be said to successfully refer to it.
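The descriptivist criterion can be stated schematically (this notation is my own gloss on the view, not Lewis’s or Frege’s formalism):

```latex
% Descriptivist reference, schematically (a gloss, not Lewis's own notation).
% Let D(t) be the set of properties specified by the description associated
% with term t. Then:
\mathrm{Refers}(t) \iff \exists x \;\, \forall P \in D(t) : P(x)
% Eliminativist move: no object x satisfies every property in
% D(\text{``belief''}), so ``belief'' fails to refer.
```

On this schema, the strong constructionist effect is the claim that, for some folk category X, no object satisfies D(X).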

The basic argument that I want to propose here is that “shock value” constructionism depends on a descriptivist theory of reference. This should already be obvious.  The standard constructionist argument begins with a painstaking reconstruction of a given set of folk or professional descriptions.  The analyst then moves on to ask the rhetorical question: is there anything in the world that actually satisfies this description?  If the answer is no, then the conclusion that the term fails to refer (and is a fictional and not a real entity) readily follows.  The standard criteria for satisfaction of these conditions usually boil down to some sort of semantic analysis. For instance, in Orientalism, Edward Said painstakingly reconstructed a Western “image” (read description) of the Middle East as a kind of place and of the Arab “Other” as a (natural?) kind of person. Said pointed out that this description of Arab peoples (menacing, untrustworthy, exotic, emotional, eroticized, etc.) was not only logically incoherent; it was simply false: there had never been a group of people who met this description; it had been a fabrication espoused by a misleading theoretical system: Orientalism. Thus, Orientalism as a culturally influential theory of the nature of the Arab “Orient” needed to be transcended. The main theoretical entity implied by such a theory, the Oriental “other” endowed with a bizarre set of attributes and properties, was thereby eliminated from our ontological storehouse.

Houston, we have a problem.- It would be easy to show that essentially all arguments that produce the “strong constructionist effect” follow a similar intellectual procedure.  There are at least two problems with this (largely unacknowledged) dependence of social constructionism on a descriptivist theory of reference. First, constructionism denies the conditions that make a descriptivist strategy an adequate theory of reference, which are at a minimum the validity of a truth-conditional semantics and the capacity of words to unambiguously (e.g. literally) refer to objects and events in the world.  This is not a problem for Gottlob Frege and David Lewis, or most descriptivist theorists in analytic philosophy, most of whom subscribe to some version of propositional realism (propositions have truth values that can be unproblematically redeemed by just checking to see if they “correspond” to the world).  However, this is a problem for constructionists because they cannot accept such a strong version of realism.

Thus, if the very theory of the relationship between language and the world that is espoused by social constructionism (skepticism as to the applicability of a truth conditional semantics and unambiguous reference) is true then descriptivism has to be false. This means that social constructionism is an inherently contradictory strategy; to produce substantively meaningful conclusions (the strong constructionist effect) it has to rely on a theory of the relationship between meanings and the world that is denied by that very approach. Second, even if this logical argument could be sidestepped, constructionism would still be in trouble.  The reason for this is that there is a competing (and equally appealing on purely argumentative grounds) theory of reference in modern philosophy: this is the causal-historical theory of reference most influentially outlined by Saul Kripke and Hilary Putnam.  The basic issue is not that this is a competing account of reference; the problem is that this account of reference actually denies a key link in the constructionist argument: that in order to refer, there has to be match between the description of the term and the properties of the object that the term putatively refers to.

Instead, causal-historical theories of reference allow for two possibilities that are seldom taken into account by constructionists:  1) that persons can refer to things in the world even though the mental description of the term that they are using to refer to those things does not at all match the properties of those things, and 2) that the description of a term can undergo radical historical change while the term continues to refer to the same entities or cluster of entities.  The first possibility undercuts the capacity of the constructionist to “correct the folk,” because reference is decoupled from the descriptive validity of the terms that are used to refer.  The second possibility undercuts the argument for social construction based on the historical and cultural variability of descriptions. It opens up the possibility that there is “rigid designation” of the same set of social or natural realities across cultures in spite of radical differences in the cultural frameworks from within which these referential relations are established.

A reasonable objection is simply to point out that we do not have sufficiently strong grounds for picking descriptivism over causal-historical theories of reference, as equally respectable arguments have been put forth in defense of both. This is in fact the position taken by most philosophers, who instead go on to worry about whether people are cherry-picking one of the two theories of reference to support their preferred argumentative strategy. However, I believe that most constructionists in social science cannot be content with this non-committal solution. Instead, as in other areas of philosophy (e.g. epistemology, ethics, mind), there is a way to "break the tie" between competing philosophical theories: naturalize the inquiry by asking which theory is most consistent with the relevant sciences. Here there is both good news and bad news for constructionists.

Research in cognitive science, cognitive semantics, and cognitive linguistics points to the inadequacy of descriptivist theories of reference from a purely naturalistic standpoint. This should be good news for constructionists, because the upshot is that truth-conditional semantics roundly fails as an account of how persons generate meaning (Lakoff 1987). The irony is that these theories redeem the original skepticism of constructionism vis-à-vis any form of truth-conditional semantics and propositional realism, but in so doing they also undercut the ability of constructionists to engage in the sort of argument that results in "shocking" or substantively strong claims for the social construction of X, because the rhetorical force of these arguments depends on descriptivism, and descriptivism implies propositional realism and "objectivism" (the view that truth is the literal correspondence of statements and reality). The resulting counter-intuitive conclusion is that it is precisely because linguistic meaning and natural categories meet the constructionist specifications that strong constructionist arguments are actually impossible. In fact, it is precisely because language and semantics work the way that constructionists (implicitly) presuppose they do that the norm in historical and cultural change may not be the radical transformation of reference relations (as implied by Foucauldian analysts), but rigid designation of the same (social or natural) "essences" and relations, even in the wake of superficial shifts in the accepted cultural description of those entities.

Written by Omar

March 7, 2012 at 6:57 pm