Tuesday, December 21, 2010
Taking Risk has moved!
If you subscribe to the blog by e-mail, your subscription will automatically be updated. If you subscribe by RSS, you will need to update the feed address to www.factset.com/blogs/takingrisk/RSS.
Thanks for reading!
Wednesday, February 17, 2010
Don't miss FactSet's upcoming Risk events
March 10 Seminar
Risk: The challenges of multiple asset classes
Join FactSet and the CFA Toronto for a luncheon on March 10, where Senior Product Manager Bill McCoy will address the multiple challenges of building a risk model that truly meets the needs of multiple asset classes.
March 11 Seminar
Fixed Income Management Essentials: From single bonds to risk analysis
We'll be in Boston on March 11 from 9:00 a.m. to 1:30 p.m. to present a comprehensive and educational event on the challenges of managing a fixed income workflow.
March 11 Luncheon
Risk for Super Funds
In Melbourne, we'll discuss why Super Funds should not only be thinking about risk, but analyzing it in a sophisticated way. FactSet and our risk partners APT, Barra, and Northfield will describe how Super Funds can measure, manage, and understand the risks they are taking or external managers are taking on their behalf.
March 17 Live webcast
Accurately Measuring Risk Across Asset Classes
Most risk models are either equity focused with some fixed income flavor or fixed income models with some rudimentary equity addition. An accurate total risk model must unite, in one framework, descriptors of various equity, currency, and fixed income risks to provide a very granular view of both equity and fixed income markets. Only this type of model can truly report risk across asset classes.
Daniel Satchkov will demonstrate how to accurately measure risk across all asset classes through a collection of risk statistics such as Value at Risk (VaR), Expected Tail Loss, Kurtosis, Skewness, Tracking Error, Stress Testing, and others.
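For readers who want a concrete sense of these statistics, all of them can be estimated from a return history in a few lines. This is a generic illustrative sketch (the function name, the 95% confidence level, and the synthetic data are my own assumptions, not FactSet's methodology):

```python
import numpy as np

def risk_statistics(returns, benchmark, alpha=0.95):
    """Estimate common risk statistics from a return history.
    Illustrative only; conventions (sign, confidence level) vary by vendor."""
    r = np.asarray(returns, dtype=float)
    b = np.asarray(benchmark, dtype=float)
    # Historical VaR: the loss exceeded with probability 1 - alpha
    var = -np.percentile(r, 100 * (1 - alpha))
    # Expected Tail Loss (CVaR): average loss beyond the VaR threshold
    tail = r[r <= -var]
    etl = -tail.mean() if tail.size else var
    mu, sigma = r.mean(), r.std(ddof=1)
    skew = np.mean(((r - mu) / sigma) ** 3)
    kurt = np.mean(((r - mu) / sigma) ** 4) - 3.0  # excess kurtosis
    # Tracking error: volatility of active returns vs. the benchmark
    te = np.std(r - b, ddof=1)
    return {"VaR": var, "ETL": etl, "skew": skew,
            "excess_kurtosis": kurt, "tracking_error": te}

# Synthetic daily returns for illustration
rng = np.random.default_rng(0)
rets = rng.normal(0.0005, 0.01, 1000)
bench = rets + rng.normal(0.0, 0.002, 1000)
stats = risk_statistics(rets, bench)
```

Note that ETL is always at least as large as VaR at the same confidence level, since it averages the losses beyond the VaR point.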
For the latest in FactSet events, follow us on Twitter @FactSet.
Wednesday, February 10, 2010
A quick VIX?
In this space, we have discussed the suitability of a risk model in terms of matching its forward-looking horizon to a client's investment process. I'm the first to agree that this is an important dimension and should be considered carefully. What use is a risk prediction if the information arrives too late to act on? That said, I have noticed a trend among risk model providers toward offering shorter-term horizons in addition to their standard models. Both Barra and Axioma provide shorter- and longer-term versions of their risk models, and Northfield has produced near-term versions of most of its models. The major focus of such models is to pick up increased levels of risk faster. Now, one could philosophise about whether these additional horizons arrived in time and whether they could have provided adequate warning of the multi-sigma events we encountered in late 2008, but the fact remains that these models are here now and can provide useful insight if deployed in the right manner.
To highlight this responsiveness, consider the observed jump in the VIX over the last couple of weeks (while not all-encompassing, the VIX is considered a valid proxy for the broad level of perceived risk in the markets). As a broad indicator, the recent movement gives us an opportunity to see if and when a short-term model picks up on such an event. To do so, I decided to look at the absolute risk levels of some broad indices, in this case the FTSE All World, S&P 500, and MSCI Europe. I did this using the R-Squared Global Risk model, a daily updated model designed to predict risk on the ultra-short horizon and to be very responsive.
Considering the above charts, we clearly see increases in the levels of risk after spikes in the VIX, especially during the last week. The magnitude of the impact seems related to each index's exposure to the U.S. market (the VIX measures the implied volatility of S&P 500 Index options).
Now consider the same event monitored through a monthly model. The impact will be both less timely and more drastic: because factor variances are updated only from month end to month end, a fund may see a shock in predicted risk from one day to the next (when the model updates) even though the fund's composition didn't change.
We're not suggesting a quick fix here by saying one should look only at short-term risk. But we do advocate measurement over multiple horizons: longer-term risk management may reflect the strategy of a fund, but when market volatility rises, the information and analysis a shorter-term model permits carry real value.
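The contrast between daily and monthly updating can be sketched with exponentially weighted volatility, where the half-life plays the role of the model horizon. The half-lives and synthetic data below are illustrative assumptions of mine, not the actual parameters of the R-Squared model or any other vendor's:

```python
import numpy as np

def ewma_vol(returns, halflife):
    """Exponentially weighted volatility estimate; a shorter half-life
    reacts faster to new observations, mimicking a short-horizon model."""
    lam = 0.5 ** (1.0 / halflife)
    var = returns[0] ** 2
    out = []
    for r in returns:
        var = lam * var + (1 - lam) * r ** 2
        out.append(np.sqrt(var))
    return np.array(out)

# A quiet market followed by a volatility shock (synthetic data, not the VIX)
rng = np.random.default_rng(1)
rets = np.concatenate([rng.normal(0, 0.005, 250), rng.normal(0, 0.03, 20)])
fast = ewma_vol(rets, halflife=10)   # stand-in for a short-horizon model
slow = ewma_vol(rets, halflife=120)  # stand-in for a long-horizon model
```

After the shock, the short half-life estimate jumps within days, while the long half-life estimate is still dominated by the quiet regime; that gap is exactly the responsiveness argument made above.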
Subscribe by e-mail to receive new Taking Risk posts as they are published.
Monday, February 8, 2010
Notes from the IPARM conference in Hong Kong, part 3
The second day of the Third Annual Investment Performance Analysis and Risk Management Asia 2010 (IPARM) conference at the Kowloon Shangri-La in Hong Kong, China began with a panel discussion on the lessons learned from the global financial crisis and the roadmap for 2010 and beyond. Jean-Marc Sabatier, Head of Risk Management Asia for Amundi, started off the conversation talking about how 2010 is the year of opportunity for risk management, particularly in Asia. He cited two reasons:
- Human resources. Management finally sees the need, nay, the requirement, for a real risk management team and is finally willing to pay for and support a highly respected team to keep it in place.
- The balance of power has switched (a little bit). In the past, if the risk manager said “no,” it was acknowledged, but then everyone moved on. Now, risk managers are more easily and readily able to say “no” to portfolio managers and marketing groups and actually carry some weight.
Jean-Marc also added that the job of a risk manager is no longer only about reporting; the job of a risk manager begins with the risk report. Oliver Bolitho, Managing Director from Goldman Sachs Asset Management, then discussed the concept of regret risk, which in Asia is related “to a ‘face’ thing that leads to taking logic off the table." Oliver believes this is one of the bigger issues facing the industry, as he has seen countless examples of portfolio managers turning off their thinking when faced with an investment decision.
From the audience, the panel was asked who should have the final say on the risk of a portfolio? The portfolio manager? Risk manager? Combination of the two? Someone else?
Oliver jumped in first by describing risk as a culture. He continued by saying that if we tried to codify risk, it would get boxed in and would not be there when we need it most. By way of example, if we codified risk (e.g., you could only buy securities with a certain rating), just think what people would have bought in the past few years. Oliver concluded that he thought the risk manager should have the final say, but that it should not be up to a single person.
Also in response to the question, Dr. Lincoln Rathnam, CFA, Global Head of Investment Management for EM Capital Management, mentioned the positive experience he had working for an investment management firm that was a partnership, where anyone at the firm could say no. Anecdotally, Lincoln thought that the partnership structure was a prime reason why Brown Brothers Harriman escaped relatively unscathed from the financial crisis; everyone at the firm had a veto, and they avoided the toxic assets that others so readily accumulated. But ultimately, Lincoln's answer was that there needs to be a balance of power between the portfolio manager and the risk manager: one party always wants to say yes, the other always wants to say no, and a middle ground must be found.
There was also a brief discussion about the “age factor” of risk managers (also known as the “value of experience” to the older demographic). In Asia, and perhaps globally, many risk management teams are junior, i.e., it is often a junior member of the investment management team and all too often someone who has not gone through many of the historical ups and downs. There were no firm answers on how to address this issue, although Jean-Marc mentioned that Amundi recently announced a new policy in which all Portfolio Managers must spend at least three years serving in a risk management capacity. Lincoln made the analogy to General Electric back in the Jack Welch days when he mandated as part of their executive management program that everyone had to spend some time in internal audit.
Next up was Dr. Stan Uryasev, Editor-in-Chief of The Journal of Risk, who gave a talk on deviation CVaR (Conditional Value at Risk). While this risk measure has been around for a while, as a co-inventor of the methodology, Stan was able to expand on it both in theory and in practice. Of particular note, Stan emphasized that CVaR is most useful for risk management, not risk measurement. I strongly encourage those of you interested in learning more to check out the slides he has made available on his website here.
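For readers who want to experiment, sample CVaR can be computed with the minimization formula that Rockafellar and Uryasev introduced. This is a generic textbook sketch of plain CVaR, not the deviation CVaR variant from Stan's talk:

```python
import numpy as np

def cvar(losses, alpha=0.95):
    """Sample CVaR via the Rockafellar-Uryasev formulation:
    CVaR_a = min over c of  c + E[max(L - c, 0)] / (1 - a).
    The objective is piecewise linear and convex in c, so for historical
    data it is minimized at one of the sample losses themselves."""
    L = np.sort(np.asarray(losses, dtype=float))
    n = len(L)
    objective = lambda c: c + np.maximum(L - c, 0).sum() / ((1 - alpha) * n)
    c_star = min(L, key=objective)  # minimizing c equals the sample VaR
    return objective(c_star)

# Hypothetical loss sample for illustration
rng = np.random.default_rng(0)
losses = rng.normal(0, 1, 2000)
cv = cvar(losses)
```

One appeal of this formulation, and a reason it suits risk *management* rather than just measurement, is that the same minimization can be embedded directly in a portfolio optimization as a linear program.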
After serving on the panel, Lincoln pulled double duty with a presentation on stress testing, although to me, the most interesting part of his talk was when he touched on the topic of crisis. Contrary to most, Lincoln believes that “crises are not rare events. We go from crisis to crisis to crisis. It is our nature.” Not a comment you can ignore coming from a man with over 30 years of investment management experience. To solidify his point, he made reference to a list of financial panics, scandals, and failures. Far from comprehensive, I am sure, but it did cement his point that the next crisis is never far away: the list runs through crises starting in the 17th century and ominously ends with nothing next to number 210.
The final panel of the day focused on finding a risk model or performance system that is appropriate for your investment process. Dr. Laurence Wormold, Head of Research at SunGard APT, started things off with his three pillars of risk analysis:
- Risk measures: The simple stuff, tracking error, VaR, etc.
- Attribution: In Laurence’s words, “turning 1 number into 100.”
- Stress testing and scenario analysis: In his mind, this is the most often ignored aspect of risk analysis as he firmly believes in building shocked market risk models.
A member of the audience immediately jumped in to question Laurence's assertion, as every firm he knew of did some form of stress testing. Laurence acknowledged this, but added that for most firms, stress testing is a box-ticking exercise that is largely ignored throughout the company. The stress testing that most firms do lacks imagination and is too simplified (e.g., S&P 500 goes down 20%). For the most part, the stress testing in the marketplace today suffers from a herd approach; everyone is testing the exact same thing. Laurence further suggested that stress tests should be anchored in economic plausibility: a firm should start from a historical event and then invite colleagues to take that information and think about other ways to create realistic scenarios.
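Laurence's "too simplified" example is easy to make concrete. A single-factor stress test propagates an index shock through estimated betas in one line of arithmetic, which is exactly the kind of box-ticking exercise he argues is insufficient (the weights and betas below are hypothetical):

```python
import numpy as np

def factor_stress(holdings, betas, shock):
    """Propagate a single index shock to a portfolio through betas:
    predicted P&L = sum(w_i * beta_i * shock). Deliberately simplistic;
    realistic scenarios would shock many factors coherently."""
    w = np.asarray(holdings, dtype=float)
    b = np.asarray(betas, dtype=float)
    return float(w @ b * shock)

# Hypothetical three-position portfolio
weights = [0.5, 0.3, 0.2]
betas = [1.1, 0.8, 1.4]
pnl = factor_stress(weights, betas, -0.20)  # "S&P 500 goes down 20%"
```

The whole scenario collapses to one number from one shocked variable; Laurence's point is that a plausible historical event moves rates, spreads, currencies, and correlations together, none of which this captures.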
Overall, IPARM Asia was a well organized conference with a very solid slate of speakers and I was quite happy to hear that the organizers have already announced that the fourth annual conference will take place in Hong Kong again next February.
Subscribe by e-mail to receive new Taking Risk posts as they are published.
Thursday, February 4, 2010
The illusion of stability, part 2: What fuels bubbles?
"It wasn't me. It was the one-armed man. Alright, alright, I confess. I did it, you hear? And I'm glad, glad, I tell you! What are they going to do to me, Sarge? What are they going to do?"
As if to respond to my 2009 year in review post, Ben Bernanke came out with a huge speech (please do not smile; I wrote “as if”) outlining why zero real interest rates have had no effect on housing prices. With this groundbreaking discovery, he opened 2010 by declaring war on common sense and possibly laying claim to the Nobel Prize. Yes, you heard it right: zero interest rates do not affect housing prices.
"Sorry, son, that's not my department."
- Jim Carrey in The Mask
Let's consider some of the Chairman’s arguments:
The Taylor Rule
Bernanke invoked a formula called the Taylor rule, which purports to prescribe monetary policy that strikes a balance between economic growth and inflation. Bernanke suggested that if the forecasted inflation rate is used as an input to the Taylor rule, then the Fed's policy of zero real interest rates during the years of explosive housing price growth was appropriate. Taylor himself responded that Bernanke had misinterpreted and misused the rule. Without getting into any mathematical detail, the first question to ask is, "How valuable can a rule be if different inputs can be used to produce any result you want?" Imagine a space engineer, whose creation crashed and injured the public, proclaiming that he used a different gravity constant and was therefore correct. The point is that economics is not physics, and a heavy dose of common sense is needed. Even Taylor himself said that mathematical discussions should not obscure the plain fact that interest rates were ZERO.
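For reference, the Taylor rule in its original 1993 form fits in a few lines, and the dispute above comes down entirely to which inflation series gets plugged into it. The numbers below are illustrative inputs of mine, not actual Fed data:

```python
def taylor_rule(inflation, target_inflation, output_gap, r_star=2.0):
    """Taylor's (1993) original specification, in percentage points:
    nominal rate = r* + pi + 0.5*(pi - pi*) + 0.5*output_gap,
    with equal weights of 0.5 and an equilibrium real rate r* of 2%."""
    return (r_star + inflation
            + 0.5 * (inflation - target_inflation)
            + 0.5 * output_gap)

# At 2% inflation on target with a closed output gap, the rule gives 4%
rate_on_target = taylor_rule(inflation=2.0, target_inflation=2.0, output_gap=0.0)

# Swap in a lower inflation forecast and a negative output gap,
# and the same rule prescribes a much lower rate
rate_forecast = taylor_rule(inflation=1.0, target_inflation=2.0, output_gap=-2.0)
```

The two calls differ only in their inputs, yet prescribe very different policy, which is precisely the "any result you want" objection.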
Smoke and Mirrors
"The fact that our econometric models at the Fed, the best in the world, have been wrong for 14 straight quarters does not mean they will not be right in the 15th quarter." - Alan Greenspan in testimony before Congress
In the second argument, Chairman Bernanke made a similar attempt to cloud the economic discussion with equations carrying a boatload of assumptions that will give most any answer desired. He referred to a set of econometric models developed at the Fed based on the technique called vector autoregression. Using past values of housing prices, interest rates, and other economic variables, the model can be coaxed into showing what values are reasonable for any one of the variables given the actual values observed for the rest (the conditional distribution). Using that technique, it can be shown that only about half of the explosive housing growth can be attributed to the Fed's policy. In addition to eschewing common sense, there are fairly obvious and fatal statistical problems with Bernanke's analysis:
- The model was calibrated during a period of non-zero interest rates and then linearly extrapolated into the zero interest rate period. It assumes that the relationships between variables are linear and constant, that there is nothing inherently different about a zero interest rate environment, and that only changes in variables matter. Needless to say, all of these assumptions are flawed, and this is a perfect example of the fallacy of linearity, of which I have written here.
- The range of housing prices conditional on the interest rates was limited to two standard deviations. Even under Bernanke's own heroic assumptions, 32% of the distribution still remained outside the band. When looking at the graphs at the end of the speech, it becomes fairly obvious why two standard deviations were used. Simply put, three standard deviations would not give the desired answer; the line would come too close to the actual housing line, suggesting that even with all its flaws the model rather contradicted the speaker.
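The first objection can be illustrated with a toy regression: calibrate a linear model in one rate regime, then extrapolate to zero rates, silently assuming the same slope holds there. All data below are synthetic; nothing is estimated from actual Fed series:

```python
import numpy as np

# Synthetic "calibration" data: rates between 3% and 7%, with housing
# growth linearly (and noisily) related to rates in that regime only
rng = np.random.default_rng(2)
rates = rng.uniform(3.0, 7.0, 200)
growth = 2.0 - 0.3 * rates + rng.normal(0, 0.2, 200)

# Ordinary least squares fit on the calibration sample
A = np.vstack([np.ones_like(rates), rates]).T
coef, *_ = np.linalg.lstsq(A, growth, rcond=None)

# Extrapolating to a zero-rate environment applies the in-sample slope
# far outside the data -- exactly the assumption criticized above
predicted_at_zero = coef[0] + coef[1] * 0.0
```

The regression is perfectly well behaved inside its 3-7% sample, yet the prediction at zero rests entirely on the untestable claim that nothing changes when rates leave the range the model has ever seen.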
Adjustable Rate Mortgages and Rates
The third argument was that the problem stemmed from lax regulation, not from zero interest rates. Here is one quote from the speech:
“Moreover, less accommodative monetary policy would not have had a substantial effect on ARM payments…Clearly, for lenders and borrowers focused on minimizing the initial payment, the choice of mortgage type was far more important than the level of short-term interest rates.”
Here, the Chairman is partially right, and his call to create a systemic risk agency can only be applauded. However, to argue that low interest rates had nothing to do with the proliferation of creative mortgage lending is to ignore the reason people took on ARMs in the first place. They did so because housing prices had been rising at a tremendous pace for a number of years. As Bernanke himself shows, ARMs did not become popular until 2006, closer to the end of the bubble. Without zero interest rates, we would not be talking about them.
More Smoke and Mirrors
“'You might just as well say that "I see what I eat" is the same thing as "I eat what I see"!'
'You might just as well say,' added the March Hare, 'that "I like what I get" is the same thing as "I get what I like"!'
'You might just as well say,' added the Dormouse, who seemed to be talking in his sleep, 'that "I breathe when I sleep" is the same thing as "I sleep when I breathe"!'
'It is the same thing with you,' said the Hatter …” - Lewis Carroll, Alice in Wonderland
Bernanke's last point confuses the whole matter even more:
“In particular, we need to understand better why some countries drew stronger capital inflows than others. I will only note here that, as more accommodative monetary policies generally reduce capital inflows, this relationship appears to be inconsistent with the existence of a strong link between monetary policy and house price appreciation.”
Essentially, this passage is saying the following: countries with real estate bubbles exhibited higher capital inflows. Since higher capital inflows usually occur when interest rates are high, not when they are low, this is all very confusing; the interest rates must not have been low after all.
This kind of reasoning implicitly assumes that economics can be described by the same functional relationships as Newtonian physics, i.e., that if high interest rates cause capital inflows, then high capital inflows will not occur when interest rates are set too low. It assumes a kind of two-way relationship that in mathematics is called a one-to-one correspondence.
Why did the Chairman have to avoid considering the possibility that capital inflows can be caused by reasons other than high interest rates? Is it not the obvious conclusion that the capital inflows were caused by the very presence of the bubbles that Alan Greenspan and Ben Bernanke then denied? Who cares about a few percentage points of interest when the values of assets are doubling?
What does all this mean for a risk manager? It means that Ben Bernanke sees no inherent problem in zero interest rates as long as Consumer Price Inflation stays low. This in turn means that we are in for another decade of Black Swans. Buckle up.
Subscribe by e-mail to receive new Taking Risk posts as they are published.
Monday, February 1, 2010
Notes from the IPARM conference in Hong Kong, part 2
A day full of fascinating discussions at the Third Annual Investment Performance Analysis and Risk Management Asia 2010 (IPARM) conference at the Kowloon Shangri-La in Hong Kong, China. Hope you've been enjoying my live tweets from the conference.
Kicking the event off was Trevor Persaud, the Head of Investment Risk Oversight and Performance at Prudential Asset Management in Singapore. Trevor's topic was redefining the roles and responsibilities of performance analysts and risk managers to better support the investment management teams. Trevor started off his presentation by asking the audience whether the portfolio manager at their respective firms was the only person who could say exactly what is going on in a particular portfolio. While Trevor acknowledged that this is to be expected, since the PM should have the expertise, he questioned whether this lack of challenge and oversight is healthy for the fund. From his experience, he has found that independent expertise applied to a fund helps moderate the actions a PM might otherwise undertake.
Trevor having worked in both Europe and Asia, the first question he took from the audience was what the big differences were between the risk management function in the two regions. Trevor qualified his response by saying this was not a sweeping statement, but he thought that in Asia, up until recently, the role was purely operational, with very limited independence and oversight. However, of late, he has noticed that in Asia there is a thicker layer of senior management (compared to Europe, where the Portfolio Manager is king), and management has been more receptive to risk managers looking to take a more active role and become more than a purely operational function. In Europe, Trevor thought that risk managers have taken on more responsibility than their Asian counterparts, but that a risk manager's ascension is capped; there comes a point (and I hope he was speaking from personal experience) when the risk manager is at peril of overstepping his or her bounds in the hierarchy of a European investment management firm.
Daniel Wallick, Principal in the Investment Strategy Group at Vanguard, gave an equally interesting talk and got everyone's attention when he equated risk management with the Allegory of the Cave from Plato's The Republic.
For those who haven't read the book since high school, Daniel's analogy was that risk is similar to man looking at the shadows on the wall in the cave; we are not really sure what we are looking at. Daniel's talk expanded into the two reasons why we need risk management (people are imperfect and crises happen) and ended with four interesting fundamentals that are followed at Vanguard:
- Risk management is an integral part of the investment decision making process.
- An enduring value-added investment program in good times and (crucially) in bad.
- Top-down qualitative judgment/rigor + discipline in quantitative measures.
- No substitute for the judgment and experience of the risk management and investment teams.
Daniel's session ended with a question from the audience asking for his assessment of the current risk environment in the U.S., given his experience and focus there. Daniel replied that he was not specifically concerned near term with inflation (in the U.S.); what concerned him was how exactly the Fed should back out of what it has been doing, and how and when it would do that.
There were a couple of additional speakers today who covered topics that I will take up in a future blog post. As for tomorrow, we have an equally interesting set of speakers and topics coming up in day two of the conference, some that have caught my attention include:
- Dr. Stan Uryasev, Editor-in-Chief of The Journal of Risk, who will be exploring alternatives to VaR as a risk measure.
- A panel discussion on finding the right risk model and measurement system that is appropriate for your investment decision process, particularly interesting since my colleague covered this topic in a blog post last month.
- Peter Urbani, CIO at Infiniti Capital, who will be revisiting risk management and performance measurement for hedge funds.
Please check back early next week for a recap on these interesting topics.
Don't miss the next post in this series. Receive new blogs by e-mail.
Notes from the IPARM conference in Hong Kong, part 1
In particular, I am looking forward to hearing from:
- Trevor Persaud, the Head of Investment Risk Oversight and Performance at Prudential Asset Management in Singapore (and Chair of the conference) who will discuss redefining the traditional role of risk managers.
- Dr. Lincoln Rathnam, CFA, Global Head of Investment Management at EM Capital Management, who will be giving a talk on stress testing.
- Jean-Marc Sabatier, Head of Risk Management Asia for Credit Agricole Asset Management who will be speaking on data management best practices.
- Daniel Wallick, Principal in the Investment Strategy Group at Vanguard who will be examining some of the key challenges of risk adjusted performance measurement in the current market.
Watch this blog later in the coming week as I post on some of the interesting topics covered at the two-day IPARM conference.
Tuesday, January 26, 2010
The illusion of stability, part 1
“The real trouble with this world of ours is not that it is an unreasonable world, nor even that it is a reasonable one. The commonest kind of trouble is that it is nearly reasonable, but not quite. Life is not an illogicality; yet it is a trap for logicians. It looks just a little more mathematical and regular than it is; its exactitude is obvious, but its inexactitude is hidden; its wildness lies in wait.” - G.K. Chesterton
We have seen this happen many times. A financial crisis erupts and everybody, including the New York Times, remembers risk management. There appear lengthy expositions of the falsity of assuming normally distributed returns, and everyone loudly wonders why the industry was not warned by risk models. However, as soon as the situation stabilizes – or rather appears to stabilize – risk management is again relegated to the specialized conferences and, incredible as it may seem after the last twenty years, tracking error is again used to completely describe the risk profile of the portfolio.
This is not a conspiracy; rather, it is the effect of what John Cassidy calls the illusion of stability. This illusion is supported by a few pillars, each of which I intend to discuss in this and the next few posts. The first pillar is the economic theorizing that comes from looking at the economy the way a physicist looks at elementary particles or an astronomer at a galaxy. This is the logician's trap taught in every institution of higher learning.
In January 2009, the Basel Committee on Banking Supervision finally attempted to disconnect the feeding tubes and get out of the matrix when it proclaimed that:
“most risk management models, including stress tests, use historical statistical relationships to assess risk. They assume that risk is driven by a known and constant statistical process. Given a long period of stability, backward-looking historical information indicated benign conditions so that these models did not pick up the possibility of severe shocks nor the build up of vulnerabilities within the system.”
I know I have used the above quote before. Nevertheless, I keep using it because I believe that it not only provides in capsule form many of the key misconceptions about risk management, but gives a glimpse of possible ways to deal with them. I have written before about the problems with the present paradigm and the ways of correcting for them. Now that I have put my logical cart before my imaginary horse, let me go back and find the horse. In other words, I would like to briefly discuss the sources of the misconception as I see them, mainly in the economic theories of equilibrium. Why did much of the financial industry believe that “risk is driven by a known and constant statistical process”? Surely, this is not an obvious observation; it requires a certain mindset, a view of the financial markets as a kind of galaxy that we can observe with a telescope, counting on the resulting calculations not needing to change from day to day.
To understand the source of this view we need look no further than Léon Walras, a brilliant French economist who was the first to attempt to create a mathematical equilibrium model of the economy. His friends later remembered that he was greatly inspired by a book on physics that he had recently read. It fascinated him so much that he vociferously proclaimed his intention to create a new science of political economy, one that would be governed by calculus equations just like physics. “Equilibrium” is a concept derived from physics, and given his premise it was only natural to look for equilibrium in the economic system. One key feature of his model had far-reaching implications, and it affects virtually every area of economic and financial thinking, including risk management. This is the idea the French call “tâtonnement,” and it describes the process of gradual adjustment by which participants in the economy slowly move it toward its equilibrium. The process is roughly as follows:
- Sellers and buyers announce the prices at which they are willing to transact.
- If the prices match, then equilibrium is reached according to a set of equations written down by Walras.
- If demand and supply are mismatched, prices are altered in increments until balance is reached, a gradual process called “tâtonnement.”
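The steps above can be sketched as a simple price-adjustment loop. Note that convergence in this toy example depends entirely on the well-behaved demand and supply curves whose stability is questioned below:

```python
def tatonnement(demand, supply, price, step=0.1, tol=1e-6, max_iter=10_000):
    """Walrasian tatonnement sketch: raise the price when demand exceeds
    supply, lower it when supply exceeds demand, in small increments,
    until the market clears (excess demand near zero)."""
    for _ in range(max_iter):
        excess = demand(price) - supply(price)
        if abs(excess) < tol:
            break
        price += step * excess
    return price

# A well-behaved market: demand falls with price, supply rises with it,
# so the loop converges to the clearing price where 10 - p = 2p
p_eq = tatonnement(demand=lambda p: 10 - p, supply=lambda p: 2 * p, price=1.0)
```

Replace the demand function with one that rises with price, as the next paragraphs argue happens in financial markets, and the same loop no longer converges; the gradualness is an assumption, not a theorem.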
As we can see, this model appears to simply follow common sense, something we observe in our daily lives. However, we need to ask a question that is extremely relevant today for any finance practitioner: what makes this change gradual? Why would we assume that the process is constant and stable? The basic answer to these questions given by quantitative economics and quantitative finance is the assumption that supply and demand are relatively stable.
It is interesting to note that Walras explicitly applied this model to financial markets, despite the fact that Europe saw a number of financial bubbles in the 18th and 19th centuries. If we are talking about wheat or corn, it might be reasonable to suppose that neither the consumers' desire to consume them nor the producers' ability to produce them will change very quickly. The first is limited by the physiology of human beings, and the second by the physiology of planet Earth. But the situation is quite different for financial assets. There is no obvious limitation on the amount of financial assets buyers are willing to purchase other than the supply of liquidity and credit in the economy. And as we recently saw, the supply of financial assets may grow very quickly if the demand is there (the MBS and CDS markets are only two recent examples).
What's more, as the German economist Gustav Schmoller observed more than a hundred years ago, demand can actually increase with price. By now it should be obvious that situations in which demand rises with rising prices or falls with falling prices are so common in finance that they almost constitute the rule. This problem lays waste to any attempt to view financial markets as stable or constant. As risk practitioners, we should be aware that there have been and will be periods when supply and demand change drastically, making models calibrated in normal times useless. When demand for financial assets falls with prices, it creates a wave of demand for, and subsequent shortage of, liquidity, which is really what is hiding behind the frequently mentioned “rise in correlations.”
The fact that we do not have a stable process that easily lends itself to modeling should not deter us from quantifying the risk of our portfolios and our exposure to such extreme situations. Our primary answer to this problem has been the development of the Event Weighted method of stress testing. This method assumes that situations in which demand falls with price share many similarities across time, and therefore we can look to similar periods of liquidation to estimate how our portfolio will respond to future instances. This assumption has shown its validity in empirical tests and certainly does not require the leap of faith involved in assuming that financial markets are as orderly in their motion as the Solar System. The financial system may not be stable and constant, but the reaction of participants in times of instability can be modeled to supply a risk manager with valuable input for decision making.
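The Event Weighted method itself is FactSet's own. As a generic illustration of the underlying idea, a historical-analog stress test restricts the factor-return history to previously identified liquidation episodes and revalues the current portfolio over those days only (all names, exposures, and data below are hypothetical):

```python
import numpy as np

def historical_analog_loss(exposures, factor_history, crisis_mask):
    """Generic historical-analog stress test, not FactSet's actual
    Event Weighted methodology: apply today's factor exposures to the
    factor returns observed during past liquidation episodes."""
    F = np.asarray(factor_history, dtype=float)   # days x factors
    w = np.asarray(exposures, dtype=float)        # current exposures
    crisis_returns = F[np.asarray(crisis_mask, dtype=bool)]
    pnl = crisis_returns @ w                      # P&L on crisis days only
    return pnl.mean(), pnl.min()                  # average and worst outcome

# Synthetic 500-day, 3-factor history with one stylized liquidation episode
rng = np.random.default_rng(3)
history = rng.normal(0, 0.01, size=(500, 3))
history[400:420] -= 0.02                          # crisis days: broad losses
mask = np.zeros(500, dtype=bool)
mask[400:420] = True

avg, worst = historical_analog_loss([0.5, 0.3, 0.2], history, mask)
```

The design choice here mirrors the argument above: rather than assuming one constant process, the test conditions only on the periods when the demand-falls-with-price dynamic was actually in force.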
Thursday, January 21, 2010
Considerations when implementing a risk management system, part 3
Here again are the questions I address:
- Who are the Stakeholders?
- Why do you need risk?
- What are your options?
- Where do you need to see risk?
Part Three:
- How should you implement your solution?
- When should this take place?
How should you implement your solution?
Let's first address this point: Goodness of Fit does not apply to models only (Risk model selection). We have taken the time to identify the stakeholders, our analytical requirements, and what we need from a risk model. Now all we need to do is pick a model. Here are some things to keep in mind during the evaluation process.
- Do you really understand how the model is constructed and what the output tells you?
As a risk practitioner, it is imperative to have a firm grasp of how a particular model is constructed and how that model’s results are to be interpreted. It’s not enough to understand that a model uses pre-specified factors or principal component analysis to estimate risk. The reality is that any risk vendor worth its salt should be able to provide ample detail on model construction and interpretation. In the end, if you don’t understand the model, how can you effectively communicate the results to your clients or apply them to your investment decision-making process?
- Is the provider open about its methodologies?
In this day and age, if a model provider operates like a black box, I would look elsewhere. Certainly there may be elements of risk model creation that a third-party vendor considers proprietary, but there is really no excuse for a vendor limiting a client’s access to construction details. In my mind, the better you understand a risk model at a fundamental level, the better you can use it to understand your portfolio's risk.
- If you have questions, do they have answers?
Documentation and transparency are important, but there will always be questions unique to your firm, and your risk provider needs to be able to help you answer these questions. Ultimately, behind every risk model there are people, which means that during the model selection process you need to evaluate the relationship with the people behind the risk model as much as anything else. If you cannot have an open dialog with your risk provider, you are not going to get the most out of the system.
Next up: data (unfortunately there is more to risk than risk models). Risk analysis requires an underlying set of data, which is to say, risk measures are only as good as the data they are based on. So we need to think about what data is actually needed to perform any risk analysis. This is one of the most overlooked points in risk analysis, and it boils down to a simple question: do I want to manage data or manage money? So what are the data sets we should be thinking about?
- Risk Model: Why we need this should be self-evident.
- Portfolio data, benchmark data, and pricing: As we all know, in order to generate portfolio risk we need portfolio and benchmark weights. That means we need portfolio and benchmark holdings and quantities and pricing to calculate accurate market values and weights. It is important that you are comfortable with the accuracy of the portfolio and benchmark data.
- Security-descriptive data: If there are securities (e.g., derivatives, unlisted assets, futures, trusts, real estate) in your portfolios that are not covered by the risk model(s) you use, you need a way to increase this coverage. One of the most common means is to supply security terms and conditions. Different firms have different levels of access to terms and conditions data. For example, if you are a Plan Sponsor, you may not have a good source for this data and may have to rely on your managers to supply it. Where will you get this data if you need it, and how will you store it?
- Fundamental and economic data: There are three good reasons to think about this type of data:
1) To give a clear picture of a portfolio’s current situation in a way that makes sense to people unfamiliar with risk, it is often helpful to include other data in the analysis to illustrate a point. For example, if you are underexposed to something like “size” or “value,” it may help your cause to include market cap or valuation measures along with your risk analysis to aid interpretation.
2) If you plan on optimizing, you will likely want the ability to incorporate market data into your models to tilt your portfolio(s) towards real-world factors that are important to your investment process.
3) If you plan on applying any stress tests, you will undoubtedly need market data to create the scenarios you wish to test (e.g., rising oil prices, decreasing interest rates, changes in trading volume).
- History and timeliness: Make sure that you have a good handle on what kind of history you need and will have access to, as well as how often the data is updated. If you are concerned with historical ex-ante risk analysis or optimization, you will need historical data for the portfolio, benchmark, and risk model before you can move forward.
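The market-value and weight calculation mentioned under the portfolio and benchmark data point above is simple enough to sketch. The function name and dict-based interface here are my own illustration; a real system must also handle FX conversion, accrued interest, and derivative exposures.

```python
def market_values_and_weights(holdings, prices):
    """Given holdings {security: quantity} and prices {security: price},
    return per-security market values and portfolio weights.
    Illustrative helper only; assumes a single currency and no accruals."""
    # Market value = quantity held times current price
    mv = {sec: qty * prices[sec] for sec, qty in holdings.items()}
    total = sum(mv.values())
    # Weight = share of total portfolio market value
    weights = {sec: v / total for sec, v in mv.items()}
    return mv, weights
```

Garbage in, garbage out applies directly here: a stale price or a missed holding flows straight through to every weight, and from there into every risk number.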
Choosing a risk provider is a big decision, but it should not be made in isolation, so consider the following. Risk models are no longer linked exclusively to the model providers; they are now available through a variety of platforms, integrated to varying degrees. Because of this you may have an opportunity to not only solve your risk needs, but to potentially also meet other needs or solve other problems at your firm unrelated to risk. This could mean consolidating services, saving money (or at least spreading the cost), and minimizing redundant processes.
Scalability and flexibility are particularly important because they may mean you can use one system for multiple purposes within a risk framework and potentially beyond. If you belong to a Risk Team, you may only care about risk itself, but many financial professionals wear multiple hats these days and are interested in several things (e.g., portfolio management, risk, performance, marketing). In the past, you may have needed more than one platform to meet all of these needs. Now, if you can find a platform that is both scalable and flexible and still meets your core risk needs, there is a good chance that you can consolidate services, which in turn can lead to cost savings and broader distribution. In this environment every cost is being scrutinized, so any service that delivers good value for cost is in demand.
Of course related to all of this is the data behind the scenes. Most investment firms would rather stay away from the business of managing data and stick to their core competencies. As such, make sure you understand how the platforms you are considering integrate data, from your own portfolio holdings to benchmark data to third-party data and beyond. You might find a system that does much of what you need but still requires you to plug in lots of different data sources to get the job done.
Kick the tires! You wouldn’t buy a car without looking under the hood and taking it for a test drive. Implementing a risk system can be difficult, so take advantage of trials, set some goals, and at the very least make sure you have satisfactory answers to the following:
- Is it easy to test out a simple situation?
If you can’t get results for a domestic equity portfolio easily, don’t hold your breath when it comes to large multi-asset class portfolios.
- Is the support responsive?
If you don’t get the help you need during a trial, forget about it when you are a client.
- How is the software?
If you can’t use the software, you can’t analyze risk.
When should all of this take place?
There is no perfect timeline for selecting and implementing a risk system, but here is a rough guideline of how it often works:
- Investigate the needs and requirements internally before anything else. Do as much leg work internally as you can before casting your net and looking at providers. Meetings and demos will be much more effective if you have a good grasp of what you think you need.
- Look at the options available in the marketplace. Do some research about the model types you might be interested in. If possible, attend relevant conferences. Contact vendors and ask for information.
- Meet with the providers and have them explain their solutions in the context of your needs. Risk providers have lots of experience; they should be able to do this, and it may force you to re-evaluate your questions.
- Narrow the field. Based on meetings, demos, and conversations, you should have some comfort at this point about who you think are legitimate options.
- Request a trial of the top candidate(s). Keep your goals and objectives in mind. Start thinking about implementation. Perhaps part of the trial can be dedicated to moving forward in this regard.
- Don’t lose focus. Circle back to your original requirements. Are you still on target or have you drifted away from your core goals?
- Purchase approval. Depending on a firm’s purchase approval process this step can sometimes significantly delay or impede implementation. Costs should be discussed early so that there is no confusion or surprise at this stage.
- Full implementation. Fully implementing a risk system may take a while, so create a reasonable timeline and start simple or with your key portfolios.
Finally, anything worthwhile tends to be difficult, and I believe that implementing a risk system falls into the category of something that is worthwhile. If you have specific questions, please contact me.
Sunday, January 17, 2010
Considerations when implementing a risk management system, part 2
Continuing from my previous post, I will address the next in our series of questions to consider when implementing a new risk management system.
- Who are the Stakeholders?
- Why do you need risk?
Part Two:
- What are your options?
- Where do you need to see risk?
Coming in Part Three:
- How should you implement your solution?
- When should this take place?
What are your options?
Now that we are clear about who we are trying to please and the reasons we need risk in the first place, we can tackle finding the right risk model for the job. To do this we need to be able to answer three primary questions:
- What market(s) do you invest in?
Firms often try to use a single broad model to analyze the many smaller markets they care about. While this may be a more cost-effective option than buying many market-specific models, and the large model may, in fact, “cover” all of the securities they care about in the smaller markets, these firms are not taking advantage of the research and development the risk vendors have performed to design their models for specific purposes. For example, it seems to be more and more common to use a Global Equity model to analyze equity portfolios that invest only in single countries. While you will certainly be able to calculate some risk numbers, I would argue the value of these numbers is reduced. Most global models are designed to capture factors that apply across many diverse markets and are therefore ideal for global investors, while single-country models typically pick up factors that are unique to a single market. Imagine using a Global model to analyze an Australian equities portfolio. Most global models will likely be dominated by factors that primarily affect a few large countries; is the predicted risk of such a model ideal for this situation?
- What asset classes do you care about (equity, fixed income, derivatives, unlisted assets, etc.)?
You can easily extend the rationale from point 1 to multi-asset class models. If you could, would you use an equity model to analyze a REIT portfolio?
- What are the right time horizons (Investment Horizon vs. Model Horizon)?
What about the often overlooked time horizon? If some of your portfolios are short term by nature, looking at an estimation of risk based on a 12-month time horizon would not be particularly useful. What if you have a long-term investment horizon, but you want to understand your short-term risk exposures? There are certainly legitimate and very good reasons to use models calibrated for different time horizons; just make sure that you understand the limitations and applications of such models before making a decision about which model is right for you.
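One reason the horizon question gets overlooked is that volatility is often mechanically rescaled between horizons with the square-root-of-time rule. It is worth seeing how simple, and how assumption-laden, that rule is: it holds only under independent, identically distributed returns, which is exactly the kind of simplifying assumption you should be aware of when choosing a model horizon.

```python
import math

def scale_volatility(vol, from_horizon_days, to_horizon_days):
    """Square-root-of-time rescaling of a volatility estimate.
    Valid only if returns are i.i.d. across periods; with serial
    correlation or regime changes the rule can badly mis-state risk."""
    return vol * math.sqrt(to_horizon_days / from_horizon_days)
```

For example, a 1% daily volatility annualizes to roughly 16% under this rule; a model calibrated directly to a 12-month horizon may disagree, and the difference is itself informative.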
In the end, I think we need to keep sight of something we already know, but often push to the background: all risk models are estimates based on some simplifying assumptions. One model is not going to be ideal for all purposes and situations. We need to do our best to align the model assumptions with our view of the world and our reasons for analyzing risk in the first place. If I care about measuring sensitivity to short-term market volatility, then I should not expect a model with a long horizon to provide meaningful insights.
The ultimate goal should be to fit models to our purposes. In many cases this may mean using multiple risk models.
Where do you need to see risk?
At this point, we have identified (from our list of stakeholders) the people/groups that care about risk, but as I have mentioned, the degree to which these stakeholders care can certainly vary significantly across individuals and groups. My goal here is to help you think about communicating what you know in a way that is meaningful to the end consumer of the information.
I have seen numerous investment managers who already have a risk system in place follow a business model where a small team of risk professionals analyzes, generates, and communicates risk results for everyone else in the firm. While there is no denying that having a risk team is still a great idea (who better to set standards, objectively monitor risk, and communicate results than people who specialize in exactly this type of analysis?), there are definitely some good reasons to make this information more accessible throughout an investment firm. For example, technology and software have improved dramatically since risk systems first came into use, which should allow for much broader and more efficient distribution of risk analytics across a firm. It may still fall to a special risk team to monitor and manage risk across portfolios, teams, etc., but there is certainly no reason for others who might be interested in portfolio risk (e.g., PMs, CIOs, Analysts) to have little or no access to the same information on a regular or ad hoc basis. In other words, there is no technological reason for limiting access to risk data to a single person or team.
At the end of the day, there will likely always be a need to have some risk information “pushed” throughout many firms, but the ability for entirely separate, independent groups to dynamically “pull” data throughout an organization should become standard practice as time goes on. If nothing else, it should allow:
- Better integration of risk within the investment process (e.g., if fund managers can monitor their own risk, they should be able to manage their portfolios in such a way that they can better justify investment decisions from a risk/return perspective)
- Improved dialog amongst teams (e.g., if a risk team, board, or CIO is able to monitor risk across portfolios, they can ask meaningful questions to PMs before risk becomes a problem)
- More frequent and timely dissemination of risk results internally and externally
The main takeaway from this section is the need to understand, when you are working with your risk model vendor, the means and options by which you can communicate analytics across the firm. Make sure that you have a scalable solution that can grow as your needs grow and change.
I will be wrapping this subject up in the next installment.
Wednesday, January 13, 2010
Considerations when implementing a risk management system, part 1
Disclaimer: As a FactSet employee I am obviously biased in my views of risk providers, but my goal here is to present as objective a set of guidelines as possible. I have gone through this process enough times to know that the better prepared an investment management firm is (as it relates to risk system decisions), the better off everyone involved in the process is as well.
Over the next two or three posts, I will address these questions:
- Who are the Stakeholders?
- Why do you need risk?
- What are your options?
- Where do you need to see risk?
- How should you implement your solution?
- When should this take place?
Who are the Stakeholders?
This is arguably one of the most important questions to answer early since in many ways it will dictate the answers to remaining questions. Common stakeholders include:
- CIO
- Board
- Risk Manager/Risk Team
- Portfolio Managers
- Performance/Reporting Team
- Marketing Team
Obviously the ways stakeholders use risk data can vary dramatically across users and teams, and a good model and system should be able to provide relevant results to each type of end consumer in a meaningful way. For example, a Risk Team may care about the aggregated risk across a collection of portfolios and how each portfolio contributes to the overall equity risk exposure, whereas a Portfolio Manager within the same firm may want to determine which securities are contributing to his/her portfolio's active risk.
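The Portfolio Manager's question above, which securities drive active risk, is commonly answered with the Euler decomposition of ex-ante tracking error. Here is a minimal sketch; the function name is my own, and the covariance matrix of security returns is assumed to come from whatever risk model you have chosen.

```python
import numpy as np

def tracking_error_contributions(active_weights, cov):
    """Euler decomposition of ex-ante tracking error.

    active_weights: portfolio weight minus benchmark weight per security.
    cov: covariance matrix of security returns (from a risk model).
    Returns total tracking error and per-security contributions,
    which sum exactly to the total."""
    w = np.asarray(active_weights, dtype=float)
    cov = np.asarray(cov, dtype=float)
    te = float(np.sqrt(w @ cov @ w))   # total ex-ante tracking error
    marginal = (cov @ w) / te          # d(TE) / d(w_i)
    return te, w * marginal            # Euler: contributions sum to TE
```

The same arithmetic serves the Risk Team at the portfolio level: treat each portfolio as a "security" with its own active weight in the aggregate, and the contributions roll up the same way.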
I often think about the question of stakeholders from the perspective of how a particular stakeholder may actually think about or look at risk: at a summary level, detail level, or both.
In addition to these three vantage points, there is another dimension to risk that is often overlooked, and that is comprehension. By this I mean that the above stakeholders can usually be grouped into an additional two categories:
- Those who understand and care about the interpretation of risk results
- Those who do not (but this does not necessarily mean they are not a stakeholder)
Why is it important to understand these distinctions? If you care about details and comprehension, you probably need a different set of tools compared to someone who only cares about summary-level information and is not bothered about the meaning of the results. Examples of risk stakeholders who need results but don’t necessarily need to truly understand what those results mean are reporting and marketing groups, many of which need to report these numbers but are not required to understand their meaning in depth.
Why do you need risk?
Choosing a risk model/system can be hard; why are you doing this to yourself?
Less than ideal stand alone reasons:
- Need to be able to tick this box in RFPs
- Board reporting
- Seems like everyone else has one
Some good reasons:
- Determine if risks are aligned with expectations
- Measure/manage risk across portfolios, products, asset classes in a meaningful way
- Understand unintended exposures to factors in the market
This is probably obvious and in many ways the "why" should be an easy question to answer, but if you are part of the group looking at implementing a risk system, you need to have a good grasp as to why you are doing this in the first place. If it is simply to tick the box in an RFP or to include tracking error in some large amorphous report with little thought or emphasis on the meaning, then you are doing it for the wrong reasons and it is only a matter of time before you realize you are spending a lot and getting a little.
So my advice is to take the time to really understand why risk is important to your firm and how risk measurement and management can enhance your firm’s core competencies. There is nothing wrong with including the stand alone reasons I listed above as part of the overall answer to "why," so long as they are not the only reasons.
I will pick this up again soon with the next question in my list. Happy New Year!
To be continued . . .
Don't miss the next post in this series. Receive new blogs by e-mail.
Monday, January 4, 2010
My predictions for the risk landscape of 2010
With the New Year I have no doubt you are all looking to welcome in some changes, perhaps in the form of resolutions such as quitting smoking, losing a bit of weight, drinking less, spending more time with the family, more exercise, etc. Now we all know that past performance is no indicator of future performance, but on a personal level (and with the weight and exercise resolutions firmly in mind) I can easily see myself slipping back to the old ways all too soon, and therefore I am not going to go down the line of making these sorts of pledges.
I decided instead that I might look forward to the year ahead and throw a few thoughts out there about how we may see the risk landscape change. There has been no new regulation as yet from the SEC, FSA, or others, but I believe it is safe to say that the nature of any coming regulation will not be of the "report less often and in less detail" variety.
Rationale: We have moved far from the old mantra "all risk is bad," but will regulation come in demanding justification of any and all risks taken?
There is a level of risk inherent in any financial area, whether it comes from being exposed to a "split-strike conversion" strategy or just investing in a high-yielding Icelandic bank account. So it is not the fact that there is risk per se that needs to be justified, but that the risks being taken are reflected in the expected returns, balanced with a complete picture of what the risks are. I hope that any new regulation in this area is focused on the explanatory angle rather than the justification one.
Responsibility: Will regulators look to firms to appoint individuals to take signatory control over portfolio risk levels in some kind of extension of the compliance departments, or will there be further encouragement towards a group responsibility?
Personally, I believe that we will see a move by firms themselves towards increased education, looking to ensure that everyone who has any interaction with a fund is aware of the risk characteristics of that fund. E.g., analysts providing recommendations for conservative, blue chip strategies should not be encouraging short term, alternative, emerging market exposures; portfolio construction tolerances should be set considering historic and comparative peer limitations; factsheets will report deeper variance contribution than just the top 10 holdings, etc.
Reporting: Will there be a push towards a "new" measure now that the "flaws of VaR" have been exposed?
Portfolio risk cannot be distilled down to a single number. Indeed, some people would describe risk as more of a landscape than a point, but will providing a whole multitude of numbers really increase people’s understanding of which risks exist and in what magnitude? I have already seen in a few clients an acceptance that it will be necessary to embrace multiple methodologies that include different horizons (historical periods as well as both short-term and long-term ex-ante forecasts), different assumptions (normal vs. fat-tail, stress testing, Monte Carlo vs. parametric, etc.), as well as different risk measures (Tracking Error, VaR, CVaR, Expected Tail Loss). Making all of this available in an accessible, timely, and potentially interactive manner is a challenge they are already looking to surmount.
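For concreteness, two of those measures, historical-simulation VaR and CVaR (Expected Tail Loss), can be computed from a return series in a few lines. This is my own sketch of the textbook historical-simulation approach, not any vendor's methodology; real systems layer observation weighting, fat-tail adjustments, and Monte Carlo alternatives on top of it.

```python
import numpy as np

def historical_var_cvar(returns, alpha=0.95):
    """Historical-simulation VaR and CVaR at confidence level alpha,
    both reported as positive loss numbers.
    VaR is the alpha-quantile of losses; CVaR is the average loss
    beyond that threshold, which is why CVaR >= VaR by construction."""
    losses = -np.asarray(returns, dtype=float)
    var = float(np.quantile(losses, alpha))       # loss threshold
    cvar = float(losses[losses >= var].mean())    # mean loss in the tail
    return var, cvar
```

Even this tiny example shows why a single number misleads: two return series can share a VaR while having very different tails, which only CVaR or the full loss distribution will reveal.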
These are just three areas that I wanted to comment on, and must (for legal reasons no doubt), stress that all comments made here are the views of me alone and do not in any way reflect those of FactSet Research Systems or FactSet Europe Ltd. But I encourage you all to reply directly or through the comments section below laying out your own thoughts to what possible changes to the risk landscape 2010 might bring.
To receive new posts by e-mail, subscribe to this blog.