Thursday, March 26, 2009

Four organizations that help you attend continuing education on a budget

With conference and T&E budgets down considerably in the industry this year, many may not have the opportunities they did in previous years to attend presentations by leaders in the field and to network and share ideas with colleagues from firms across the globe. Attendance at the traditional multi-day industry conferences is certainly down this year.

FactSet, along with many of the finance and risk associations, has responded by bringing more content right into your office. Webcasts have become much more popular this year and present a great way to “attend” a conference at no cost. In addition, several organizations are now offering low-cost, half-day live events. As conference sessions are viewed by so many as an extremely important part of their continuing education, I have listed below some organizations whose sessions on current topics regarding the economy, quantitative finance, and risk won’t break the bank.

  1. FactSet’s weekly live webcast series, “Seeing Clearly in Turbulent Times,” will provide insight on such issues as the global economic outlook, secular bear markets, the U.S. business cycle, and the future of the bond market. This five-part webcast series runs each Wednesday at 2:00 p.m. Eastern. Sign up here.

  2. PRMIA (the Professional Risk Managers’ International Association) offers low-cost conferences and webcasts. For example, they are holding a half-day Credit Risk Forum in New York on April 2 that costs only $10 for members and $40 for non-members. Check out their website for information on their upcoming events.

  3. GARP (Global Association of Risk Professionals) also offers low or no cost webcasts and conferences throughout the world in locations such as Tokyo, London, New York, Chicago, and Zurich. Most are free for GARP members. Click here for a full list of their upcoming events.

  4. QWAFAFEW is an organization that holds regular monthly meetings in several cities across the US including New York, Boston, and Chicago. There are also less frequent meetings held in other cities throughout the world. The meetings are generally held at pubs, are very informal, and consist of one or two presentations from a quant professional along with food and drink. You can sign up on their website to receive emails about upcoming events in your city.

Of course, many organizations such as the CFA Institute are running their usual multi-day annual conferences this year. FactSet is a sponsor, exhibitor, and speaker, so please come see us at our booth next month in Orlando.

Please share any other organizations you know of that offer similar low- or no-cost events in the comments section.

To receive future posts by e-mail, subscribe to this blog.

Friday, March 20, 2009

How different are the risk model providers, part three: predicted tracking error

In my third and final post on the question of “How different are APT, Barra, and Northfield?” I’ll shift focus to the change in predicted tracking error over time. It is critical to understand whether a portfolio is becoming more risky or reducing risk to become more like its benchmark (risk direction), so we want to understand whether APT, Barra, and Northfield suggest comparable changes.

Our framework is mostly similar to the previous blog posts (part one, part two). First, we use the same 300 U.S. equity mutual funds, focusing on the same three U.S. long-term equity risk models. Our change over time will consider the six-month and twelve-month periods ending on 12/31/2008. So, our starting point is 2,700 predicted tracking errors (300 portfolios x 3 risk models x 3 points in time).

Comparing the change across models, we are most interested in both correlation and covariance. The correlation coefficient reveals the strength and direction of the linear relationship between the tracking error changes for two models.

We see consistently positive correlations across models, periods, and styles; for the most part, the tracking error changes between the risk models are strongly correlated. The Small Cap Growth and Small Cap Value correlations, however, are noticeably smaller, and Model X is less correlated with the other two models.

Hand in hand with correlation, we should consider the covariance. This metric tells us how much one tracking error change tends to be unexpectedly large when another model’s change is unexpectedly large. Similarly, a negative covariance would tell us that an unexpectedly large change in one tracking error suggests an unexpectedly small change in the other. Finally, a covariance of zero indicates that an unexpectedly large change in one tracking error tells us nothing about the expected change in the other. Covariance is a good complement to correlation because it adds a sense of the magnitude with which two tracking error changes move together.
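To make the mechanics concrete, here is a minimal sketch in Python (not the actual study code) of how these correlation and covariance comparisons could be computed. The data layout, model labels, and dates are hypothetical stand-ins; only the calculation mirrors the comparison described above.

import pandas as pd

# Hypothetical layout: "te" holds predicted tracking errors with one row per
# fund (300 funds) and MultiIndex columns of (model, as-of date), e.g.
# ("Model X", "2008-06-30"). Three models x three dates = nine columns.

def te_change(te: pd.DataFrame, model: str, start: str, end: str) -> pd.Series:
    # Change in predicted tracking error for one model between two dates.
    return te[(model, end)] - te[(model, start)]

def compare_models(te: pd.DataFrame, model_a: str, model_b: str,
                   start: str, end: str = "2008-12-31"):
    # Correlation and covariance of the two models' tracking error changes.
    chg_a = te_change(te, model_a, start, end)
    chg_b = te_change(te, model_b, start, end)
    return chg_a.corr(chg_b), chg_a.cov(chg_b)

# Example: six-month vs. twelve-month changes for a hypothetical model pair.
# corr_6m, cov_6m = compare_models(te, "Model X", "Model Y", "2008-06-30")
# corr_12m, cov_12m = compare_models(te, "Model X", "Model Y", "2007-12-31")

Repeating this for each model pair within each style group would produce the correlations and covariances discussed here.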

In our comparison, the key observation is that the covariance of the twelve-month change is consistently higher than that of the six-month change. We didn’t observe this type of difference when looking at the correlation. As with correlation, though, the Small Cap Growth and Small Cap Value styles are the most independent.

Overall, coming back to our initial question of “How different are the various risk model providers?” I would say that there are differences in how tracking error has changed between the three models, but the differences in the change are not tremendous. The models are least similar in Small Cap Growth and Small Cap Value. Lastly, Model X differs more from Models Y and Z, which is an interesting conclusion considering what we observed when we compared the actual tracking errors in earlier blog posts.

To receive future posts by e-mail, subscribe to this blog.

Wednesday, March 18, 2009

Dan DiBartolomeo tells me how he recognized Madoff's fraud in 1999

Listen to the most recent episode of FactSet’s podcast to get a closer look at the Bernie Madoff scandal from the perspective of someone who recognized the fraud years before it was uncovered.

I interviewed Dan DiBartolomeo, President of Northfield Information Services, who performed initial analysis on Madoff fund data that pointed to suspicious activity more than eight years before the fund manager turned himself in. Dan explains:

We did about three hours of analysis on the return history and tried to match it up with the strategy as described in Manager B’s marketing material. And we concluded after a couple of hours that something was seriously wrong, that essentially there were three possibilities.

One, the strategy was something other than was being described and that the Manager B, whoever that was, was incredibly skillful. The second possibility was that the strategy was being misrepresented and was somehow being illegally enhanced through insider trading or front running or some other type of activity that would have improved it illegally. The third possibility is that the entity in question, Manager B, was a fraud.

Dan goes on to explain how he eliminated the first two possibilities, when he found out Manager B was Madoff, how the SEC handled this information, his ideas for regulation, and the responsibilities hedge fund managers have to their investors. You may have seen Dan on Fox News, or read his quotes in The Wall Street Journal.

Listen to our podcast in iTunes or on the web. Or, read the transcript in its entirety.

I invite you to continue the conversation about Bernie Madoff in the comments section below.

Get new Taking Risk entries delivered to your e-mail as they are posted.

Tuesday, March 10, 2009

Is execution risk the latest burden for small cap managers?

Last month I wrote about the structure of risk analysis and the manner in which optimisers and the risk models themselves coexist. I theorized that the rise in volatility was pushing portfolios closer to market cap biased benchmarks irrespective of manager conviction. This month I’ll examine the changes in a simpler risk measure, show that this cap drift exists, and ask what it means for small cap managers in particular.

To demonstrate this effect, let’s consider the three U.S. indices most reflective of size -- the S&P 500 Large Cap Index, the S&P 400 Mid Cap Index, and the S&P 600 Small Cap Index -- and show how the trading profile of their underlying constituents has changed over the last year. For illustration purposes, I’ll examine the ability to trade $1 million in the constituents of these indices and the amount of time, on average, that such a trade would take. We’ll compare today to one year ago.

Days to Trade = Value of Trade / (Scalar x 20 Day Average Value Traded)

where Scalar is a nominal value selected to model zero market impact (e.g., 10%) and Value Traded is Total Daily Volume x Closing Price.
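As a simple illustration of the arithmetic, here is a minimal sketch in Python; the dollar amounts and the 10% participation scalar in the example are hypothetical and not taken from the charts.

def days_to_trade(trade_value, avg_daily_value_traded, scalar=0.10):
    # Days to Trade = Value of Trade / (Scalar x 20-Day Average Value Traded)
    return trade_value / (scalar * avg_daily_value_traded)

# Example: a $1 million trade in a stock with $2 million of average daily
# value traded, assuming a 10% zero-market-impact participation rate.
print(days_to_trade(1_000_000, 2_000_000))  # 5.0 days

With this formula, halving the average value traded doubles the days to trade for the same position size.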

Now, as we would expect, we see the average number of days to trade large caps sitting far below that of mid caps, and mid caps in turn below small caps. Don’t pay too much attention to the scale, as this is directly related to the scalar selected in the formula above. But do pay attention to the one-year comparative levels, as they reflect what we might not expect to see, i.e., a marked change in the time to trade within the small cap S&P 600.

I have rebased the $1 million investment of February 2008 to a $550,000 investment today to reflect the general market movement over the last year. Whereas for the large- and mid-cap indices the average days to trade are essentially unchanged, we can see a 50% increase in the average number of days to trade for the smaller capitalisation stocks. Looking further into the market cap quintiles of the S&P 600, we see a still further bias towards larger companies.


This data further supports my belief that there is a general move towards closer replication of the market cap biased benchmark indices. Furthermore, the increase in time to trade in any sizable volume, combined with the increased volatility of the markets (see last month’s blog entry), means that execution risk is probably the largest risk any small cap manager has to account for right now. All of this points towards current small cap holders maintaining their positions where possible (a margin or redemption call may overrule this), while nothing but the strongest conviction would tempt anyone into anything new.

Tell us: Are market characteristics dominating the way that you are managing your fund?

Get new Taking Risk entries delivered to your e-mail as they are posted.

Wednesday, March 4, 2009

What makes some risk models better than others, and is there a "best" model?

Recently I received a comment on my February 4 entry "Really, how different are the risk model providers?" that I think is worth addressing.

The anonymous comment read:

You refer here to the concept of a "best" risk model. Can you expand a bit on this? What does this mean to you? Surely the determination of "best" is in the interpretation and use of the model, rather than the model's construction?

In my post, I am careful to say, “My task isn’t to suggest which model is 'best.' Frankly, my analysis doesn’t offer genuine insight on that question.” I specifically picked these words because I don’t wish to imply that I do address the concept of “best.”

I believe that model construction is essential to the concept of a “best” risk model. Even though the analysis is ex ante, I strongly disagree with the notion that it is therefore wrong or impossible to judge a model. Investment managers should gravitate towards models that are transparently documented and supported by ongoing research efforts (e.g., industry conferences, white papers), because that support demonstrates a provider’s confidence in and commitment to the model’s construction. Maybe there is or isn’t a clear “best” model, but there are undoubtedly “better” and “lesser” risk models available to the investment management community.

I do agree that interpretation and use of the model is extremely important too.

Thank you for your comments. Please share your feedback below.

To receive future posts by e-mail, subscribe to this blog.

Monday, March 2, 2009

Taking Risk Bonus: Part two of our interview with "My Life as a Quant" author Emanuel Derman

The latest in our podcast series features Emanuel Derman, author of My Life as a Quant and formerly the head of the Quantitative Risk Strategies group at Goldman Sachs. In part two, Derman discusses common modeling misconceptions that he encountered during his time in industry, as well as lessons learned about the human element in financial modeling.

Derman addresses the difficulty of creating models from the standpoint that you cannot model the complexity of the real world. He discusses how, due to that complexity, it can be best to use a variety of models rather than depend upon a single one:

The world is much more complicated than the model…to shoehorn the world into the model you have to get rid of a lot of things. I think the thing is actually to use a whole bunch of models, even to value the same product....There are at least three or four different ways you could model the same effect…all of the models are different and there isn’t really one true one.

Derman also talks about his time at Goldman Sachs, focusing on what he believes were the greatest lessons learned and the best practices he followed in his work there.

To hear more from Emanuel Derman, listen to our entire interview on iTunes. You can also listen to the full audio online, or read a transcript of the interview.

Did you miss part one?