Friday, June 26, 2009

VaR vs. Tracking Error: The flawed debate

Between industry conferences and the end of the comment period on GIPS 2010, there has been regular, and recently increasing, discussion and debate about predicted VaR vs. tracking error. Frankly, we believe those comparisons consistently miss two core questions that are essential to the choice.

First, are portfolio returns normally distributed or not?


Tracking error is a standard deviation, and standard deviation elegantly summarizes a normal distribution in a single number. The VaR of a normally distributed series can be easily inferred from the standard deviation. But if the portfolio returns aren’t normally distributed, tracking error doesn’t describe the distribution. That’s just Statistics 101.
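
To make that relationship concrete, here is a minimal sketch of the normal-case arithmetic (the function name and the 4% tracking-error figure are ours, purely for illustration):

```python
from scipy.stats import norm

def normal_var(sigma, confidence=0.95, mean=0.0):
    """VaR implied by a normal distribution with the given standard deviation.

    Under normality the alpha-quantile of returns is mean + z_alpha * sigma,
    so VaR is just a fixed multiple of the standard deviation (tracking error).
    """
    z = norm.ppf(1.0 - confidence)   # about -1.645 at 95% confidence
    return -(mean + z * sigma)       # loss reported as a positive number

# A 4% tracking error implies roughly a 6.6% 95% relative VaR under normality.
print(normal_var(sigma=0.04, confidence=0.95))
```

That shortcut is exactly what breaks down once the distribution is no longer normal.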

When trying to answer whether an equity portfolio’s returns are normally distributed, it is often easiest to first ask whether the portfolio contains equity options. While it is possible that a portfolio containing a diversified set of equity options could still be normal over a long risk horizon, it is extremely unlikely. Realistically, the equity options are in the portfolio to shape the return profile according to the manager’s expectation of what will happen. In our experience, if the portfolio contains equity options, then the returns aren’t normally distributed; VaR is therefore required and tracking error is invalid.

We are aware that one could delta-adjust the option into its underlying equity exposure and then calculate tracking error. Respectfully, this is just wrong. There are few issues in risk analysis as clear-cut as this one.
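
For readers unfamiliar with the shorthand, “delta-adjusting” replaces each option with its delta-equivalent position in the underlying. A minimal sketch of what that substitution looks like (the Black-Scholes call delta is standard; the specific numbers are ours and purely illustrative):

```python
import math
from scipy.stats import norm

def call_delta(spot, strike, rate, vol, t):
    """Black-Scholes delta of a European call on a non-dividend-paying stock."""
    d1 = (math.log(spot / strike) + (rate + 0.5 * vol ** 2) * t) / (vol * math.sqrt(t))
    return norm.cdf(d1)

# Replace 100 call contracts (100 shares each) with a linear equity position.
delta = call_delta(spot=50.0, strike=55.0, rate=0.03, vol=0.30, t=0.5)
equivalent_shares = delta * 100 * 100
print(f"delta = {delta:.3f}, delta-adjusted exposure = {equivalent_shares:.0f} shares")
```

The substitution is linear by construction, which is precisely why the resulting tracking error cannot capture the asymmetry the option introduces.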

So, if the portfolio includes no equity options, are its returns normally distributed? This is a reasonable debate, but an intelligent discussion should center on the second core question that we highlight when working with our clients:

What is the time horizon of the predicted risk analysis?

If you are looking at one-day risk, there is general consensus that one-day equity returns aren’t normally distributed. Most agree they are fat-tailed and negatively skewed. In practice, you wouldn’t look at the one-day tracking error of a portfolio.

Conversely, if your horizon is one year, there is reasonable consensus that annual equity returns are normally distributed. It is reasonable to focus on the one-year tracking error of an equity portfolio; frankly, it would be odd to focus on the one-year VaR of a portfolio that held only equities. Tracking error is the more common and appropriate measure of risk at that horizon.

Our risk model providers (APT, Barra, Northfield, and R-Squared) seem to largely agree that the minimum horizon for assuming that equity returns are normally distributed is somewhere between two weeks and a month. You may or may not agree, but this question should be at the heart of any discussion of VaR vs. tracking error.
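
If you want to form your own view, a simple check is to test normality at several aggregation horizons. Here is a minimal sketch (the synthetic fat-tailed series stands in for daily equity returns, and the two-week and one-month cut-offs mirror the range quoted above; all names and figures are illustrative):

```python
import numpy as np
from scipy import stats

# Synthetic fat-tailed series standing in for ten years of daily equity returns.
rng = np.random.default_rng(42)
daily = rng.standard_t(df=4, size=252 * 10) * 0.012

def normality_pvalue(returns, days):
    """Aggregate daily returns into non-overlapping `days`-long blocks and
    return the Jarque-Bera p-value for the aggregated series."""
    n = len(returns) // days
    blocks = returns[: n * days].reshape(n, days).sum(axis=1)
    _, p_value = stats.jarque_bera(blocks)
    return p_value

for label, days in [("daily", 1), ("two-week", 10), ("monthly", 21)]:
    print(f"{label:>8}: Jarque-Bera p-value = {normality_pvalue(daily, days):.3f}")
```

Aggregation pushes the series toward normality, which is the effect the consensus above relies on.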

But, to come back to you and your needs, what is your risk horizon? If you are a defined-benefit plan sponsor, your horizon is clearly longer than a month and your decision on VaR vs. tracking error should focus on whether the portfolio in question contains non-linear assets. If you are a hedge fund, your horizon is likely less than two weeks, so you should be VaR-centric and focused on how the risk analytics account for fat tails.

Our sense is that these two critical points are ignored because they don’t lead to a nice, clean, definitive, and simple answer. Essentially, the choice between VaR and tracking error depends on what is in the portfolio and on your risk horizon.


8 comments:

  1. What do you consider to be the most significant dangers of using a short-term, downside measure (e.g. VaR) for a fund with a long-term investment horizon? Is there much literature on this? Presumably much of it is behavioural?

  2. VaR will still be wrong when returns are non-normal, as will the standard deviation of tracking error or any other risk measure conditioned on Normality. Two wrongs do NOT make a right!

  3. Hello Sinclair,
    You are certainly correct that the non-normality of short-term returns has to be accounted for in any short-term risk estimation (on a long horizon this problem is far less pronounced, so the use of tracking error is fine there). To be convinced of the fatness of the tails in short-term series, it is enough to consider that the three-year daily kurtosis of S&P 500 returns was around 8 even prior to the crash of 2008!
    However, I cannot agree with you that VaR is conditional on normality. It is common to confuse the concept of a measure with that of a metric. VaR is a metric that can be measured in many ways, parametric VaR (which is conditional on normality) being only one of them. VaR has a statistical definition: it is a quantile of a distribution. Nothing in this definition in any way implies that VaR has to be based on a normal distribution. In fact, we have a whole webcast (http://www.factset.com/files/webinars/market_crises/lib/playback.html) and a whitepaper (http://www.factset.com/websitefiles/PDFs/whitepapers/tailrisk) almost wholly dedicated to this very subject. A client who uses a short-term model on our system can use a power-law distribution, the multivariate Student's t (the paper has the details). So, our agreement with you goes so far that we actually developed a module to account for the problem that you raise. (A generic numerical sketch of this quantile point appears after the comment thread below.)

  4. I agree completely.

    The problem was that the post did not define the VaR method being used.

    Hence, I assumed they were referring to the conventional approach based on Normality. Once we reject the Normality hypothesis, it is NOT obvious what the alternative distribution is.

    The literature contains a vast array of parametric alternatives which may or may not account for the non-normality in a given sample. Simply because they are non-normal is not sufficient; incorrect inferences are still possible. Yes, VaR is a quantile of a distribution ... but which one ... Multivariate Student's T may or may not be appropriate in some circumstances?

    Even historical simulation methods will deliver different answers depending on sample size and sampling time frame ... In short, there is no correct answer when our theories require Normality and we do NOT observe it!

  5. Hi Sinclair,
    You wrote:
    "Yes, VaR is a quantile of a distribution ... but which one ... Multivariate Student's T may or may not be appropriate in some circumstances?"
    Empirical testing and the inference from it are the only ways for us to act on anything. Strictly speaking, as David Hume once observed, we don't even know if the sun will rise tomorrow, but empirical testing is the only thing we have to go on. There are a variety of tests that we ran (some of them are in that paper) showing that the t-distribution is a very good statistical fit to high-frequency returns. Of course, picking a correct distribution is far from a complete answer for risk estimation, but it is a necessary start. (The sketch after this thread shows one generic way to run such a fit.)

  6. G'Day Daniel,
    You wrote:
    "Empirical testing and the inference from it are the only ways for us to act on anything."

    I think you overstate your case. In many situations our theories guide us as to the appropriate methods, measurements and valid inferences. This is a feature of the paradigm at the core of Kuhn's philosophy of science. In that framework your assertion is flawed because it lacks a theoretical framework.

    Nevertheless, what is the alternative hypothesis when the data confound the mean-variance paradigm? And, more importantly, what is the cost of using the wrong statistic at the wrong time?

    Inevitably, risk measurement will always be more art than science ... and we shouldn't try to convince ourselves otherwise. Pure empiricism is not a substitute for good theory.

  7. Dear Sinclair,
    To bring some more clarity into our discussion, we should define our terms more precisely. Let’s start with the Risk Process itself. For example, you write: “Inevitably, risk measurement will always be more art than science…”
    The first and fairly obvious problem with this statement is that it is self-contradictory. A dictionary tells us that measuring is the act of assigning a number to a phenomenon according to some pre-specified rule. How this applies to art is not at all clear to me. Measurement is a scientific tool. But this is a minor point. Let me assume that you meant risk management instead of risk measurement. In that case, in order to clarify our disagreements, let me introduce the concept of the Risk Process. Roughly, it can be divided into two segments, Risk Management and Risk Estimation, which take place in the following order:
    1. Risk Management decides which risk metrics it will need to form an opinion. In a well-structured process, Risk Management will require the estimation of at least a few metrics, such as Monte Carlo VaR, stress-testing metrics, ETL, tracking error, and kurtosis. These metrics form a sort of dashboard of risk indicators.
    2. Risk Estimation is concerned with estimating the risk metrics required by Risk Management in step 1.
    3. Risk Management takes the estimates from step 2 and uses them to make decisions based on the expertise of the manager.
    From this definition it becomes apparent that step 2 uses scientific methods, because in it hypotheses are formulated and empirical testing allows for their falsification. To be sure, this is a social science and as such does not allow for the formulation of permanent laws the way natural sciences do (some disagree that natural sciences formulate or discover permanent laws, but we will not discuss that here). On the other hand, steps 1 and 3 are not scientific; they are an art or a craft, if you will. So, we are in partial agreement here. Note that FactSet is only concerned with step 2, allowing our clients to handle steps 1 and 3. If management’s craft were not necessary, the whole financial industry could make do with a few rent-a-geeks like me (only smarter).
    You also ask: “…what is the alternative hypothesis when the data confound the mean-variance paradigm?”
    The white paper linked above shows how we operate in our research. We hypothesise a distribution and a method for estimating a particular risk metric from it. We then attempt to falsify the hypothesis using empirical data that includes periods of market instability.
    The answer to your next question, “…what is the cost of using the wrong statistic at the wrong time?”, should be plain to see now that we have defined the Risk Process. This is step 3 and falls in the domain of a risk manager’s expertise. We can provide guidance on what we consider to be best practices, but ultimately this is part of a manager’s craft (I try to reserve the word “art” for the work of people like J. S. Bach and F. M. Dostoyevsky).
    As for your claim that “This is a feature of the paradigm at the core of Kuhn's philosophy of science. In that framework your assertion is flawed because it lacks a theoretical framework”, the definition above should clarify this as well. We do have a theoretical framework; it consists of formulating hypotheses about prospective distributions of returns under various conditions. If by theoretical framework you mean those grand economic theories “of everything” that gained acceptance in the 19th century, then I must disappoint you. I simply do not believe that they reflect reality, and I think of them as little more than elegant axiomatic systems that allow for mental gymnastics. In addition, they are not really falsifiable, but that is a separate discussion.
    In fact, my next post is going to deal with major misconceptions that resulted from such excessive theorizing.

  8. I have answered all of your questions (some of them twice) about our philosophy of research. If you disagree, that is fine, but in order for continued discussion to have meaning, I think you should provide an alternative framework and let the readers judge.
    I do thank you for the opportunity to discuss these issues, as I find them very interesting. I hope at some point we will also get to have a friendly debate about what you see as the center of Thomas Kuhn’s philosophy (especially since his dissertation is one of my all-time favorites). Sincerely, Daniel.

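As a purely illustrative footnote to the quantile and Student's t points raised in the thread above, the sketch below contrasts three ways of measuring the same VaR metric on one synthetic daily series. It is a generic illustration only, not FactSet's methodology or the model described in the linked paper; the data and parameters are made up.

```python
import numpy as np
from scipy import stats

# Synthetic fat-tailed daily return series standing in for high-frequency data.
rng = np.random.default_rng(7)
returns = rng.standard_t(df=4, size=2500) * 0.01

confidence = 0.99
alpha = 1 - confidence

# 1. Parametric VaR conditioned on normality: a multiple of the standard deviation.
mu, sigma = returns.mean(), returns.std()
var_normal = -(mu + stats.norm.ppf(alpha) * sigma)

# 2. VaR as a plain quantile of the empirical distribution (historical simulation).
var_empirical = -np.quantile(returns, alpha)

# 3. VaR from a fitted Student's t: same quantile definition, different distribution.
df_, loc, scale = stats.t.fit(returns)
var_student_t = -stats.t.ppf(alpha, df_, loc=loc, scale=scale)

print(f"99% VaR, normal assumption:  {var_normal:.4f}")
print(f"99% VaR, empirical quantile: {var_empirical:.4f}")
print(f"99% VaR, fitted Student's t: {var_student_t:.4f}")
```

The point of the contrast is the one made in comment 3: the quantile definition of VaR is agnostic about the distribution, and only the first of the three calculations is conditioned on normality.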