James G. Rickards has an interesting op-ed in today’s Washington Post on the role of financial risk modeling in the banking crisis. This is an important topic on its own, but also has sweeping implications for environmental policy, disaster preparation, and other issues. As he explains it, Wall Street’s financial risk models (known as “value at risk”) aggregate the day-to-day risks of various securities:
What’s left is “net” risk that is then considered in light of historical patterns. The model predicts with 99 percent probability that institutions cannot lose more than a certain amount of money. Institutions compare this “worst case” with their actual capital and, if the amount of capital is greater, sleep soundly at night. Regulators, knowing that the institutions used these models, also slept soundly. As long as capital was greater than the value at risk, institutions were considered sound — and there was no need for hands-on regulation.
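The mechanics Rickards describes can be sketched in a few lines of Python. This is a hedged toy version, not any firm's actual model: the daily profit-and-loss history is simulated, the capital figure is made up, and the VaR estimate is a simple historical percentile.

```python
import random

random.seed(42)

# Hypothetical daily profit-and-loss history for a portfolio, in $M.
# A real model would use the firm's actual books; here it's simulated.
pnl_history = [random.gauss(0.0, 10.0) for _ in range(1000)]

def historical_var(pnl, confidence=0.99):
    """One-day value at risk: the loss exceeded on only (1 - confidence)
    of days in the historical sample, reported as a positive number."""
    sorted_pnl = sorted(pnl)
    index = int((1.0 - confidence) * len(sorted_pnl))
    return -sorted_pnl[index]

var_99 = historical_var(pnl_history)
capital = 40.0  # hypothetical capital cushion, in $M

print(f"99% one-day VaR: ${var_99:.1f}M")
print("Capital exceeds VaR:", capital > var_99)
```

When the last line prints `True`, everyone "sleeps soundly" in Rickards' sense: the model says losses bigger than the cushion happen less than 1% of the time, based entirely on the sample it was shown.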
But there’s a forest-for-the-trees problem here. Aggregating individual risks is fine in a relatively stable system. But what if the system itself becomes unstable? The risk model will not anticipate it. It’s a familiar problem from complexity science:
Think of a mountainside full of snow. A snowflake falls, an avalanche begins and a village is buried. What caused the catastrophe? The value-at-risk crowd focuses on each snowflake and resulting cause and effect. The complexity theorist studies the mountain. The arrangement of snow is a good example of a highly complex set of interdependent relationships; so complex it is impossible to model. If one snowflake did not set off the avalanche, the next one could, or the one after that. But it’s not about the snowflakes; it’s about the instability of the system. This is why ski patrols throw dynamite down the slopes each day before skiers arrive. They are “regulating” the system so that it does not become unstable.
The more enlightened among the value-at-risk practitioners understand that extreme events occur more frequently than their models predict. So they embellish their models with “fat tails” (upward bends on the wings of the bell curve) and model these tails on historical extremes such as the post-Sept. 11 market reaction. But complex systems are not confined to historical experience. Events of any size are possible, and limited only by the scale of the system itself. Since we have scaled the system to unprecedented size, we should expect catastrophes of unprecedented size as well. We’re in the middle of one such catastrophe, and complexity theory says it will get much worse.
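The fat-tails point can be illustrated with a small Monte Carlo sketch. The assumptions are mine: a Student-t distribution with 3 degrees of freedom stands in for a fat-tailed market, rescaled to the same variance as the standard normal so the comparison is fair.

```python
import math
import random

random.seed(0)
N = 200_000
DF = 3  # low degrees of freedom => fat tails

# A t distribution with df degrees of freedom has variance df/(df-2);
# divide samples by this scale so both distributions have unit variance.
SCALE = math.sqrt(DF / (DF - 2))

def student_t():
    """Student-t sample built as normal / sqrt(chi-square / df)."""
    z = random.gauss(0.0, 1.0)
    chi2 = sum(random.gauss(0.0, 1.0) ** 2 for _ in range(DF))
    return z / math.sqrt(chi2 / DF)

threshold = 4.0  # a "four-sigma" move

normal_hits = sum(abs(random.gauss(0.0, 1.0)) > threshold for _ in range(N))
t_hits = sum(abs(student_t() / SCALE) > threshold for _ in range(N))

print(f"P(|move| > {threshold} sigma), normal:     {normal_hits / N:.5f}")
print(f"P(|move| > {threshold} sigma), fat-tailed: {t_hits / N:.5f}")
```

Even with identical variance, the fat-tailed distribution produces four-sigma moves orders of magnitude more often than the bell curve predicts, which is why grafting "fat tails" onto a normal model is a patch rather than a fix.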
This problem – a reliance on computer models that cannot accurately anticipate catastrophe – isn’t restricted to finance. Our society bases a great many policy and financial decisions on such “hard” forecasting numbers. Insurance companies employ risk models to gauge likely hurricane losses. Less sophisticated, but still highly trusted, models were used to design the pre-Katrina New Orleans levees; they put the likelihood of a Katrina-sized storm surge at a relatively small value. Models are often built on datasets with short histories, assembled in times of relative stability. But the possibility of a larger, systemic event is always there – the fact that it is very hard to quantify doesn’t mean it isn’t real.
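The short-history problem can be made concrete with a toy simulation (all numbers hypothetical): calibrate a 99% "worst case" on a short, mostly calm window, then compare it with the largest loss in a longer history that includes rare systemic shocks.

```python
import random

random.seed(1)

# Hypothetical daily losses: ordinary days, plus a rare "systemic"
# regime (about 1 day in 1000) producing far larger losses.
def daily_loss():
    if random.random() < 0.001:
        return abs(random.gauss(0.0, 50.0))  # rare systemic shock
    return abs(random.gauss(0.0, 5.0))       # ordinary day

full_history = [daily_loss() for _ in range(5000)]
calm_window = full_history[:500]  # the short sample the model was built on

def percentile(data, q):
    s = sorted(data)
    return s[int(q * len(s))]

var_short = percentile(calm_window, 0.99)  # the model's "worst case"
worst_ever = max(full_history)

print(f"99% loss estimate from short window: {var_short:.1f}")
print(f"Largest loss in the full history:    {worst_ever:.1f}")
```

The short-window estimate looks precise and reassuring, yet the full history contains losses several times larger – the model simply never saw the regime that produces them.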
The problem now is that the world – its financial, energy, and food systems, and the physical environment itself – is changing rapidly. That means more unexpected events will occur: floods, droughts, shortages, gluts, crashes. Government and private institutions need to recognize that the future won’t be like the past, acknowledge the limits of their current numbers, and prepare accordingly.
Also: Here’s Nassim Taleb’s take on the same topic.