I’ll be the first to admit I don’t know much about economics. Then again, judging by the last few weeks, there are a lot of people on Wall Street who don’t know much about economics either. (Why would an investment house acquire assets it doesn’t know the value of?)
Anyway, it’s clear that there’s going to be a lot of Monday morning quarterbacking of the whole financial system when things finally calm down. The following two articles – one in the New York Times and one in the Washington Post – offer intriguing views on some of the flaws in the current system, drawing on ideas from modern science.
In his Times article This Economy Does Not Compute, theoretical physicist Mark Buchanan takes issue with traditional economics’ methodology. He begins by discussing the shortcomings of equilibrium theory, which “views markets as reflecting a balance of forces.”
Really understanding what’s going on means going beyond equilibrium thinking and getting some insight into the underlying ecology of beliefs and expectations, perceptions and misperceptions, that drive market swings.
Buchanan says a number of people are now using computer models to get this insight. After discussing three different models and their initial findings, he notes:
Sadly, the academic economics profession remains reluctant to embrace this new computational approach (and stubbornly wedded to the traditional equilibrium picture). This seems decidedly peculiar given that every other branch of science from physics to molecular biology has embraced computational modeling as an invaluable tool for gaining insight into complex systems of many interacting parts, where the links between cause and effect can be tortuously convoluted.
If we’re really going to avoid crises, we’re going to need something more imaginative, starting with a more open-minded attitude to how science can help us understand how markets really work. Done properly, computer simulation represents a kind of “telescope for the mind,” multiplying human powers of analysis and insight just as a telescope does our powers of vision. With simulations, we can discover relationships that the unaided human mind, or even the human mind aided with the best mathematical analysis, would never grasp.
Better market models alone will not prevent crises, but they may give regulators better ways for assessing market dynamics, and more important, techniques for detecting early signs of trouble. Economic tradition, of all things, shouldn’t be allowed to inhibit economic progress.
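Models of the kind Buchanan describes are usually agent-based. Here’s a deliberately tiny sketch of my own (not one of the three models from his article, and every number in it is invented): traders split between “fundamentalists,” who bet on reversion to value, and “chartists,” who chase the trend, and they drift between the two camps depending on which looks like it’s working.

```python
import random

random.seed(1)

N = 100                  # total traders
FUNDAMENTAL = 100.0      # the "true" value fundamentalists trade toward
prices = [100.0, 100.0]  # price history
chartists = 30           # traders who extrapolate the recent trend

for _ in range(500):
    trend = prices[-1] - prices[-2]
    mispricing = FUNDAMENTAL - prices[-1]
    # Fundamentalists push the price toward value; chartists chase the trend.
    demand = ((N - chartists) * 0.01 * mispricing
              + chartists * 0.05 * trend
              + random.gauss(0, 1))
    prices.append(prices[-1] + 0.1 * demand)
    # The "ecology of beliefs": traders drift toward whichever style
    # currently looks like it is working, so the mix itself keeps shifting.
    if abs(trend) > 0.05 * abs(mispricing):
        chartists = min(N, chartists + 2)
    else:
        chartists = max(0, chartists - 2)

swing = max(prices) - min(prices)
print(f"price ranged over {swing:.2f} around a fixed fundamental value of 100")
```

Even this toy has no equilibrium to solve for: the price path depends on the shifting mix of strategies, which is exactly the kind of behavior Buchanan says simulation can expose and pencil-and-paper equilibrium analysis cannot.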
In a similar vein, James G. Rickards takes issue with the prevailing wisdom on risk management in his Post article A Mountain, Overlooked; How Risk Models Failed Wall St. and Washington. He begins by describing the way risk has been evaluated by financial institutions up to now, using complex mathematical models:
Since the 1990s, risk management on Wall Street has been dominated by a model called “value at risk” (VaR). VaR attributes risk factors to every security and aggregates these factors across an entire portfolio, identifying those risks that cancel out. What’s left is “net” risk that is then considered in light of historical patterns. The model predicts with 99 percent probability that institutions cannot lose more than a certain amount of money. Institutions compare this “worst case” with their actual capital and, if the amount of capital is greater, sleep soundly at night. Regulators, knowing that the institutions used these models, also slept soundly. As long as capital was greater than the value at risk, institutions were considered sound — and there was no need for hands-on regulation.
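The mechanics behind that 99 percent “worst case” are simple to sketch. Here’s a minimal historical-simulation VaR calculation with invented returns; real desks layer parametric and Monte Carlo machinery on top of this, but the underlying logic is the same:

```python
import random

random.seed(42)

# Two years of hypothetical daily portfolio returns (made-up numbers).
returns = [random.gauss(0.0005, 0.01) for _ in range(500)]

def value_at_risk(returns, confidence=0.99):
    """Historical-simulation VaR: the loss exceeded on only
    (1 - confidence) of days in the sample, as a positive fraction."""
    losses = sorted(-r for r in returns)           # losses, worst last
    index = min(int(confidence * len(losses)), len(losses) - 1)
    return losses[index]

portfolio = 1_000_000
var = value_at_risk(returns)
print(f"1-day 99% VaR: {var:.2%} of the portfolio (${portfolio * var:,.0f})")
```

The conceptual error Rickards goes on to describe lives in the first line: the calculation treats tomorrow’s return as an independent draw from the same distribution that produced the historical sample.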
However, he notes “Lurking behind the models…was a colossal conceptual error: the belief that risk is randomly distributed and that each event has no bearing on the next event in a sequence.” This is basically the same assumption you make about tossing a coin: no matter how many times heads comes up, the odds of heads or tails on the next toss are still 50-50. But are markets really like coin tosses? Rickards says:
Both natural and man-made systems are full of the kind of complexity in which minute changes at the start result in divergent and unpredictable outcomes. These systems are sometimes referred to as “chaotic,” but that’s a misnomer; chaos theory permits an understanding of dynamic processes. Chaotic systems can be steered toward more regular behavior by affecting a small number of variables. But beyond chaos lies complexity that truly is unpredictable and cannot be modeled with even the most powerful computers. Capital markets are an example of such complex dynamic systems.
Think of a mountainside full of snow. A snowflake falls, an avalanche begins and a village is buried. What caused the catastrophe? The value-at-risk crowd focuses on each snowflake and resulting cause and effect. The complexity theorist studies the mountain. The arrangement of snow is a good example of a highly complex set of interdependent relationships; so complex it is impossible to model. If one snowflake did not set off the avalanche, the next one could, or the one after that. But it’s not about the snowflakes; it’s about the instability of the system. This is why ski patrols throw dynamite down the slopes each day before skiers arrive. They are “regulating” the system so that it does not become unstable.
Financial systems overall have emergent properties that are not conspicuous in their individual components and that traditional risk management does not account for. When it comes to the markets, the aggregate risk is far greater than the sum of the individual risks; this is something that Long-Term Capital Management did not understand in the 1990s and that Wall Street seems not to comprehend now. As long as Wall Street and regulators keep using the wrong paradigm, there’s no hope they will appreciate just how bad things can become. And the new paradigm of risk must be understood if we are to avoid lurching from one bank failure to the next.
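Rickards’ mountainside has a famous computational counterpart: the Bak-Tang-Wiesenfeld sandpile model (my addition here; it isn’t mentioned in his article). Grains dropped one at a time onto a grid produce avalanches of wildly different sizes, even though every perturbation is identical:

```python
import random

random.seed(7)
SIZE = 20                                 # grid side length
grid = [[0] * SIZE for _ in range(SIZE)]  # grains stacked at each site

def drop_grain(grid):
    """Drop one grain at a random site, topple until stable,
    and return the avalanche size (number of topplings)."""
    r, c = random.randrange(SIZE), random.randrange(SIZE)
    grid[r][c] += 1
    topplings = 0
    unstable = [(r, c)]
    while unstable:
        r, c = unstable.pop()
        if grid[r][c] < 4:                # toppling threshold not reached
            continue
        grid[r][c] -= 4                   # the site topples...
        topplings += 1
        if grid[r][c] >= 4:               # ...and may still be unstable
            unstable.append((r, c))
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < SIZE and 0 <= nc < SIZE:
                grid[nr][nc] += 1         # neighbors take the grains;
                if grid[nr][nc] >= 4:     # grains off the edge are lost
                    unstable.append((nr, nc))
    return topplings

sizes = [drop_grain(grid) for _ in range(10_000)]
print(f"identical drops, avalanches from {min(sizes)} to {max(sizes)} topplings")
```

Which particular drop triggers the big avalanche is unpredictable; what matters is how close the whole pile is to instability. That’s Rickards’ point about the snowflakes versus the mountain, in about thirty lines of code.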
I, for one, vote a Big No on all this lurching. Hopefully, Buchanan and Rickards can get their messages through to the Powers That Be.