systemic risks: too big, too complicated or too central?
Duncan Watts had an article in the Boston Globe last week (hattip: Karl Bakeman) looking at how the network structure of the banking industry might have amplified the financial crisis. Watts comments:
Traditionally, banks and other financial institutions have succeeded by managing risk, not avoiding it. But as the world has become increasingly connected, their task has become exponentially more difficult. To see why, it’s helpful to think about power grids again: engineers can reliably assess the risk that any single power line or generator will fail under some given set of conditions; but once a cascade starts, it’s difficult to know what those conditions will be – because they can change suddenly and dramatically depending on what else happens in the system. Correspondingly, in financial systems, risk managers are able to assess their own institutions’ exposure, but only on the assumption that the rest of the world obeys certain conditions. In a crisis, it is precisely these conditions that change in unpredictable ways.
He suggests that regulators assess a company’s network position and take action to ensure systemic viability:
On a routine basis, regulators could review the largest and most connected firms in each industry, and ask themselves essentially the same question that crisis situations already force them to answer: “Would the sudden failure of this company generate intolerable knock-on effects for the wider economy?” If the answer is “yes,” the firm could be required to downsize, or shed business lines in an orderly manner until regulators are satisfied that it no longer poses a serious systemic risk. Correspondingly, proposed mergers and acquisitions could be reviewed for their potential to create an entity that could not then be permitted to fail.
This is a very interesting idea. But it also raises a number of intriguing questions worth fleshing out in a little more detail:
1. First, measurement: How would one measure the network connectedness of companies in a way that could adequately inform policy? Many a network analyst would give up a limb in return for a government mandate requiring companies to provide network data of this kind. But which network ties are the most relevant when it comes to robustness? Cross-holdings? Joint ventures? Insurance instruments? (A toy version of the kind of calculation involved appears after this list.)
2. Would the act of publicly measuring these network ties change the network structure? Could simply collecting the data and educating people on what they mean be enough to influence tie formation? Could that achieve robustness more effectively than regulating against being “too central to fail”? In what ways might it pervert the “organic” process of network formation? What unintended consequences might result?
3. Lastly, a general question concerning the relationship between networks and institutions: Is the problem a particular network structure that introduces too much systemic vulnerability? Or is it a complex of rules too obscure to allow actors to take rational action?
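On the measurement question in point 1, here is a minimal sketch of the kind of calculation a regulator armed with such data might run, written in Python with the networkx library. The firm names, tie types, and exposure weights are invented purely for illustration, and none of the centrality measures shown is obviously the “right” one; choosing among them is exactly the open question.

```python
# Toy sketch: rank firms in a hypothetical inter-firm exposure network
# by a few standard centrality measures. All names and weights are invented.
import networkx as nx

G = nx.Graph()
# Each edge: (firm, firm, weight = size of mutual exposure, arbitrary units)
exposures = [
    ("BankA", "BankB", 5.0),
    ("BankA", "InsurerC", 3.0),
    ("BankB", "InsurerC", 2.0),
    ("BankB", "FundD", 4.0),
    ("InsurerC", "FundD", 1.0),
    ("FundD", "BankE", 6.0),
]
G.add_weighted_edges_from(exposures)

# Three rough proxies for how "central to fail" a firm is:
degree = dict(G.degree(weight="weight"))    # total exposure a firm carries
betweenness = nx.betweenness_centrality(G)  # bridging position (ties treated as unweighted)
eigenvector = nx.eigenvector_centrality(G, weight="weight")  # exposed to the heavily exposed

for firm in G.nodes:
    print(f"{firm:10s} degree={degree[firm]:5.1f} "
          f"betweenness={betweenness[firm]:.2f} "
          f"eigenvector={eigenvector[firm]:.2f}")
```

The three measures capture different notions of centrality (total exposure carried, bridging position, exposure to other heavily exposed firms), and they need not agree on which firm would be the most dangerous to lose.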
On this last one, contrast Watts’s discussion of the Aisin-Toyota crisis in Chapter 9 of his book with an earlier paper on the topic of robustness (coauthored with Peter Sheridan Dodds and Chuck Sabel). In the book, Watts discusses the case of a fire at Aisin, a supplier to Toyota that occupied a bottleneck node in the production system. The fire threatened to bring production throughout the system to a halt, but the system quickly adapted because “this… is where all the training kicked in. After years of experience with the Toyota Production System, all the companies involved possessed a common understanding of how problems should be approached and solved….”
To me, that speaks to the transparency and complexity of the rules by which the system operates more than it does to the particular network structure. The rules were complex, but so well understood that others could adapt efficiently. But the paper (and the Globe piece) focuses on the network structure. The theory is (1) that distributed network relationships generate more information at the moment a crisis hits, allowing actors to adjust more quickly, and (2) that a more distributed network can keep a cascade from spreading by diluting the risk associated with any one node.
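To make the second claim a bit more concrete, here is a minimal sketch of a threshold-cascade simulation, again in Python with networkx. The failure rule (a firm fails once more than a fixed fraction of its counterparties have failed) and the two toy networks are my own illustrative choices, not the model in the Dodds-Watts-Sabel paper: seed one failure and compare how far it spreads when exposure is concentrated on a hub versus spread evenly across counterparties.

```python
# Illustrative threshold cascade: a node fails once the share of its
# neighbors that have already failed exceeds a threshold.
import networkx as nx

def cascade_size(G, seed, threshold=0.4):
    failed = {seed}
    changed = True
    while changed:
        changed = False
        for node in G.nodes:
            if node in failed:
                continue
            neighbors = list(G.neighbors(node))
            if not neighbors:
                continue
            frac_failed = sum(n in failed for n in neighbors) / len(neighbors)
            if frac_failed > threshold:
                failed.add(node)
                changed = True
    return len(failed)

n = 20
hub = nx.star_graph(n - 1)               # every firm tied to one central node
ring = nx.watts_strogatz_graph(n, 4, 0)  # each firm tied to four nearby counterparties

print("hub network, central node fails:        ", cascade_size(hub, 0))
print("distributed network, any one node fails:", cascade_size(ring, 0))
```

In the star network every firm depends entirely on the hub, so the hub’s failure takes the whole system down; in the ring each firm’s exposure is split across several counterparties, so the same shock stops at a single node. That is the dilution intuition in (2), though real financial networks are obviously far messier and the outcome depends heavily on the threshold and topology chosen.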
These interpretations lead in two different directions when it comes to policy (complementary, perhaps, but still different). If the interpretation is that rules and associated risks were obscure to actors two, three, four (or more) links away in the chain of dependencies, then it seems to me that the policy prescription should be to make the rules governing the system more transparent. If the interpretation is that too much risk came to be associated with too few nodes, then the policy prescription he offers in the article (influencing the network structure to eliminate risk congestion) makes some sense, in that it might slow down the cascade and theoretically give other actors in the system time to adjust their strategies. But that leads to one last question: is it sufficient?