When the credit crisis began in 2007, it carried with it a wave of criticism of quantitative models, especially risk models. Books like Nassim Taleb’s The Black Swan and Scott Patterson’s The Quants: How a New Breed of Math Whizzes Conquered Wall Street and Nearly Destroyed It took apart fifty years of risk modeling predicated on risk being defined as volatility and volatility being measured by the standard deviation of return, all under the “Gaussian approximation.” While some of what they wrote is true, their main complaint centers on using historical data, and models built from it, to diagnose the current environment and to predict the future. Ben Graham, Warren Buffett’s teacher, on the other hand, said:
“It is true that one of the elements that distinguishes economics, finance and security analysis from other practical disciplines is the uncertain validity of past phenomena as a guide to the present and future. Yet we have no right to reflect on lessons of the past until we have at least studied and understood them.”
In addition, Graham said that if experience cannot help today’s investor, then we must logically conclude that there is no such thing as investment in common stocks and that everyone interested in them should confess himself a speculator. The reason we like exploring the past has to do with the comfort we draw from our ability to use existing constructs and memorable ideas to explain historical developments, make current decisions, and estimate our future direction. That, of course, is why we may be late in recognizing significant shifts in the market, in accordance with Taleb’s findings.
To that end, optimism and over-confidence have always accompanied bull markets, and pessimism bear markets. In practice, most of us find three-day weather forecasts useful. We buy flood insurance in areas where floods have historically occurred, and in fact we make decisions every day based on past experience. So the logic of taking the outcomes of the past and counting them up to form a distribution with which to make future decisions does have some precedent. The normal (Gaussian) distribution, however, is not a distribution of the future; it describes a population of past observations and contains no future values. Since the future hasn’t happened yet, a normal distribution built from past experience cannot contain the full set of outcomes. When we use statistics like this to predict the future, we are assuming the future will look like today, or close to it, all things being equal, and also that no extreme event will come along to alter the normal pattern. This is what quants do. They are clearly aware that extreme events happen, but one doesn’t throw away a useful model just because some random event can occur. I wear a seatbelt because it will offer protection in the majority of car accidents, but if an extreme event happens, like a 40-ton tractor-trailer hitting me head-on at 60 MPH, the seatbelt won’t offer much safety. Do we refuse to wear seatbelts because of the odd chance of the tractor-trailer collision? Obviously we wear them.
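The point about the Gaussian approximation can be made concrete. The sketch below (ours, not the author’s; it works in abstract units of standard deviations rather than any actual return series) shows how rarely the normal curve says large daily moves “should” occur, using only Python’s standard library:

```python
# Tail probabilities implied by the Gaussian approximation of returns.
# Illustrative only: sigma levels are abstract, not fitted to market data.
from statistics import NormalDist

std_normal = NormalDist(mu=0.0, sigma=1.0)

for k in (2, 3, 4, 5):
    # Two-sided probability of a daily move larger than k standard deviations
    p = 2 * (1.0 - std_normal.cdf(k))
    # Expected waiting time, assuming 252 trading days per year
    years = 1.0 / (252 * p)
    print(f"|move| > {k} sigma: p = {p:.2e}, about once every {years:,.1f} years")
```

On this arithmetic a 5-sigma day should arrive roughly once in several thousand trading years, yet markets have produced such days far more often. That gap is precisely the distinction drawn here: the model is useful for the everyday and silent on the extreme.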
To add color, however, and offer an apologetic for quants, consider that more than one cause is in effect for almost every observed phenomenon in the universe. Scientists usually attempt to understand the strongest influencers of an outcome or event, not necessarily all of them. So in reality, multiple causes are in effect for every event, even those which are extreme or “Black Swan.” Extreme events have different mechanisms (one or more) that trigger cascades and feedbacks, while everyday normal events have separate causes. One only enters the conundrum of explanation when one tries to link all observations, from the world of extreme random events and from normal events alike, when in reality these usually spring from separate causes. In the behavioral finance literature this falls under the subject of multiple equilibria; it has been demonstrated in markets that deviate from no-arbitrage conditions for considerable lengths of time, and it is noticed, of course, where structural mispricings persist longer than a single-equilibrium mechanism would allow. In this context, highly non-linear and chaotic market behavior occurs, where small triggers induce cascades and contagion much as very small changes in initial conditions bring about turbulence in fluid flow. A simple analogy is the game of “pickup sticks,” in which a player must remove a stick from a randomly connected pile without disturbing the others. Eventually, the interconnectedness of the sticks results in an avalanche. So behaves the market.
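The pickup-sticks analogy can be sketched as a toy simulation. Everything below is our illustrative assumption, not a calibrated market model: sticks lean on a random set of others, each tolerates losing a few supports, one stick is removed, and failures propagate. Most removals stay small; occasionally the interconnections carry one removal into an avalanche.

```python
# A toy "pickup sticks" cascade on a random network.
# All parameters (n sticks, link probability p, capacities) are
# hypothetical choices for illustration, not market data.
import random

def cascade_size(n=200, p=0.02, rng=None):
    rng = rng or random.Random()
    # Each stick leans on a random set of supporting sticks.
    supports = {i: {j for j in range(n) if j != i and rng.random() < p}
                for i in range(n)}
    # Each stick tolerates losing 1-3 of its supports before it falls.
    capacity = {i: rng.randint(1, 3) for i in range(n)}
    fallen = {rng.randrange(n)}  # the one stick pulled from the pile
    changed = True
    while changed:
        changed = False
        for i in range(n):
            if i not in fallen and len(supports[i] & fallen) >= capacity[i]:
                fallen.add(i)  # lost too many supports: it falls too
                changed = True
    return len(fallen)

rng = random.Random(7)
sizes = sorted(cascade_size(rng=rng) for _ in range(40))
print("smallest:", sizes[0], "median:", sizes[20], "largest:", sizes[-1])
```

The interesting feature is the spread between the typical outcome and the worst one: the trigger is identical every time, and only the hidden web of interconnections decides whether the removal is routine or contagious.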
To those who would criticize the use of a normal curve to examine past data, or who fault the modeling of regularly observed events: understanding the statistical interpretation of a time-series of past data is not the same as summoning history to explain a market meltdown after the fact. It is not very helpful to explain chapter and verse why what happened should have happened because it happened before, for while we judge life in reverse, we live it forward. Graham himself reasoned this way, and even had in his lecture notes the example of a stock reaching new highs and then, after a while, falling below those highs; he took this as a warrant for purchasing securities, on the basis of their history, at prices below the current high. Hence he did not dismiss the lessons of history; he used models such as this one and even endorsed empirically determined methodologies when the data were of a regular variety -- that is, of events that happen frequently enough to preclude their classification as extreme events or Black Swans. So Graham was indeed an empiricist to some extent. However, it is also clear that anomalous investing strategies uncovered by empirical analysis of the past market do not constitute a proof that enhanced returns are available through those methods, nor does risk prediction based on normal approximations. Still, the normal approximation is valid most of the time and predicts normal events quite well. Quants have the tools to apply to extreme-event risk prediction, should the causes of extreme events someday be codified. But one cannot blame the quants for failing to include the unpredictable in their everyday analysis. Modern risk models at FactSet are predicated on sound economic principles, but if an asteroid hits the New York metropolis, even the famed Metropolis Monte Carlo algorithm wouldn’t have predicted it.
Extreme, earth-shattering market events happen, but through mechanisms other than those standard risk statistics should be expected to predict. This is no error on the models’ part, but an error of expectation on the user’s. I’ll wear the seatbelt, and in the majority of accidents I have with other cars under 40 MPH, I’ll be safe. But if I expect that seatbelt to save me from a semi speeding head-on at 65 MPH, the failure is mine for not having prepared for that event in some other way, rather than relying totally on my seatbelt. So when the market runs headlong downward at 60 MPH and the VIX shoots to 90, you had better have devised some strategy other than relying on, and then later blaming, your risk model or risk modelers.