
Common Pitfalls in Risk Management, Part 1: Confusing Pseudo Monte Carlo with the Real Thing

08/25/2009 12:15 PM

One of the few virtues of being an aged (or, more politely, “experienced”) risk manager is that one has made a lot of analytical mistakes that are easy to remember. Another virtue is that one has seen others make mistakes that proved fatal or near fatal, so the consequences of those errors are easy to remember too. In risk management, it would be nice if we never repeated the mistakes of the past, but, alas, we take two steps forward and one step backward. In this series, we discuss risk management errors made by some of the world’s largest financial institutions and point out their consequences, to help all of us avoid both the errors of the past and obvious errors going forward. In this post, we point out the consequences of basing risk assessment on a pseudo Monte Carlo approach instead of the real thing.

More than 30 years ago, one of the largest banks in the world set a very high standard by using a supercomputer to simulate its interest rate risk position. Even at that time, the simulation was a true Monte Carlo simulation with a very large number of scenarios, not a simple approximation or an analysis of a small number of scenarios. For that reason, it’s a surprise to find very large institutions today that fall far short of that 1970s standard. In this post we discuss a pseudo Monte Carlo, market value-oriented approach. The example is an altered but representative version of analysis we’ve seen at many institutions.

“We use principal component analysis to generate N time zero yield curves that are consistent with an arbitrage free evolution of the term structure.  N is a number less than 100.  We then calculate the NPV of cash flows using the forward rates for each scenario.  This gives us a table of N changes in market value.  We then draw 2 million random samples from this table and identify the 99.95th percentile to define our risk level.”
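For concreteness, here is a minimal sketch of the quoted procedure. The function names (generate_pca_curves, npv_under_curve) and all parameters are hypothetical stand-ins for the bank’s actual models, and the PCA and NPV steps are reduced to one-line placeholders:

```python
import numpy as np

# Hypothetical stand-ins for the bank's actual models; all names and numbers
# below are illustrative, not taken from any real institution.
def generate_pca_curves(n_curves, rng):
    """Placeholder for PCA-based generation of time-zero yield curves:
    here reduced to random parallel shocks to the curve."""
    return rng.normal(0.0, 0.01, size=n_curves)

def npv_under_curve(curve_shock):
    """Placeholder NPV of the balance sheet's cash flows under one time-zero curve."""
    duration = 5.0        # assumed net duration of the balance sheet
    base_value = 1_000.0  # assumed base market value
    return base_value * (1.0 - duration * curve_shock)

rng = np.random.default_rng(0)
N = 100  # "a number less than 100" in the quote; 100 is used here for round numbers
pv_table = np.array([npv_under_curve(s) for s in generate_pca_curves(N, rng)])
changes = pv_table - pv_table.mean()  # table of N changes in market value

# The "2 million random samples" step: resample the N values with replacement
# and read off the 99.95th percentile loss (the 0.05th percentile of changes).
resample = rng.choice(changes, size=2_000_000, replace=True)
risk_level = np.percentile(resample, 0.05)
print(f"Reported 99.95th percentile change in value: {risk_level:.2f}")
```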

The quoted comments sound good at first glance, and they include a lot of terms used by sophisticated risk managers, but a dissection of what’s actually going on is extremely worrisome. To put our comments in context, remember that many collateralized debt obligation analysts were running 100,000 to 200,000 scenarios to measure the risk in CDO tranches (scenarios which, obviously, with hindsight, were not realistic enough). Robert Jarrow and Donald R. van Deventer, in a related paper, showed that as many as 10 million scenarios would have been necessary to value CDO tranches with enough precision to be within the (then observable) bid-offered spread 99% of the time.

Let’s start with what this composite bank (“ABC Bank”) is not doing. It is not doing N scenarios for time zero and M time periods forward. All of the randomness is at time zero. So if the bank is analyzing the interest rate risk of a 30 year prepayable mortgage, the decision to prepay or not prepay in two years depends only on the time zero scenario, rather than on the potential volatility of interest rates from two years onward. In addition, the bank is not employing an interest rate lattice that assigns a consistent and coherent set of short term interest rates out into the future. With a recombining up-down lattice of the short rate, for example, looking at the remaining 336 months of a 30 year mortgage, there would be 336+1=337 different interest rate nodes at the maturity date of the mortgage. ABC Bank is looking at one node (the forward rate path), not 337. This is a serious error and will result in substantial mispricing of the bank’s option-related assets and liabilities (which make up nearly the entire balance sheet).
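As a quick check on that node count, a recombining up-down lattice has t+1 distinct short-rate nodes after t steps, because an up move followed by a down move lands on the same node as down-then-up. A minimal sketch, with an illustrative (not calibrated) multiplicative lattice:

```python
# Node count in a recombining up-down (binomial) short-rate lattice:
# after t monthly steps there are t + 1 distinct nodes.
months_remaining = 336
nodes_at_maturity = months_remaining + 1
print(nodes_at_maturity)  # 337 distinct short-rate states at the mortgage's maturity

# Illustrative short-rate values at that step (assumed r0, u, d -- purely for illustration):
r0, u = 0.05, 1.01
d = 1.0 / u
rates = [r0 * u**k * d**(months_remaining - k) for k in range(nodes_at_maturity)]
print(len(rates))  # 337 -- versus the single forward-rate node ABC Bank examines
```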

To summarize, the bank is simulating market risk with only N time zero yield curves. A true Monte Carlo simulation for M periods forward would use N scenarios times (M+1) time periods (including time zero). For banks with 30 year mortgages, a common choice for M is 360 months. For life insurance companies, which issue policies to 20 year olds, an 80 year (960 month) simulation is not unusual. For a risk level at the 99.95th percentile, that’s the 1999th out of 2000 scenarios for N. Even at 2000 scenarios there is very large sampling error in the estimate of the 99.95th percentile, but let’s assume that the bank can tolerate the sampling error. Accordingly, the minimum acceptable number of random yield curves is 2000 x (360+1)=722,000 if you are a bank and 2000 x (960+1)=1,922,000 if you are an insurance company. If N at ABC Bank is 100, the bank’s simulation has only 0.01385% of the bare minimum random yield curves for a bank and 0.0052% of the bare minimum for an insurance company.
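The arithmetic behind these minimums, reproduced as a quick check (the figures match those in the text):

```python
# Minimum random yield curves = (scenarios N) x (time steps M + 1, including time zero)
N = 2000                           # scenarios needed for a 99.95th percentile estimate

bank_curves = N * (360 + 1)        # 30-year mortgage book, monthly steps
insurer_curves = N * (960 + 1)     # 80-year life insurance horizon, monthly steps
print(bank_curves, insurer_curves)                  # 722000 1922000

# Coverage if ABC Bank uses only 100 time-zero curves
abc_curves = 100
print(f"{100 * abc_curves / bank_curves:.5f}%")     # ~0.01385%
print(f"{100 * abc_curves / insurer_curves:.4f}%")  # ~0.0052%
```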

Our composite bankers at ABC Bank would say, “That’s not true; we’re then doing 2 million scenarios to get the 99.95th percentile scenario.”

To someone who has spent time with George Fishman’s classic work (Monte Carlo: Concepts, Algorithms and Applications, Springer, 2003), this comment is stunning. There is no need to take 2 million random draws, or 100 million draws, from this set of N time zero yield curves and the associated net present values (as flawed as they are). Let’s assume N=100. Obviously, as the secondary sampling number of scenarios, say S, gets very large, 1% of this secondary sample will equal the present value of time zero scenario 1, PV(1), 1% will equal the present value of time zero scenario 2, PV(2), and so on up to PV(100). If the present values are ordered from biggest gain (scenario 1) to biggest loss (scenario 100), the 100th percentile loss is PV(100) and the 99th percentile loss is PV(99). No matter how many scenarios S are run in this secondary sampling, the result won’t change except for sampling error. It is a complete waste of time that provides no additional information. In the end, to get to the 99.95th percentile scenario, interpolation of some sort will be necessary between PV(100) and PV(99). Note also that in no case will the extreme gains or losses be outside of the 100 simulated values. There is a very high probability that the exercise misses some very important gain and loss events, because setting N at 100 or less and simulating time zero only is such a small fraction of the minimum simulation that most bankers would consider adequate.
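A minimal sketch of this point, with illustrative numbers only: no matter how large the secondary sample S is, its percentiles are fully determined by the original table of N present values.

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative table of N = 100 present-value changes; any resample drawn from
# this table can only ever reproduce these 100 numbers.
pv_table = rng.normal(0.0, 10.0, size=100)

# "Secondary sampling": S = 2 million draws with replacement from the table.
resample = rng.choice(pv_table, size=2_000_000, replace=True)

# Each table value carries probability 1/100, so the worst entry alone covers
# roughly the worst 1% of the resample. With numpy's default quantile convention
# the resampled 99.95th percentile loss lands on that single worst entry; any
# other convention interpolates between the two worst entries. Either way the
# answer is fully determined by, and can never lie outside, the original 100 values.
print(np.percentile(resample, 0.05))  # 99.95th percentile loss from the resample
print(np.sort(pv_table)[:2])          # the two worst of the original 100 values
```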

For many readers of www.kamakuraco.com’s blog and www.riskcenter.com, there will be disbelief that this approach is actually in use. As David Letterman often says, “I am not making this up.” We’ve seen variations on this approach at many institutions.

At institutions where this approach is employed, we strongly urge senior management and the Board to immediately initiate a third party audit of risk management practices and procedures. We would expect that audit to find many other areas of concern, and we would expect it to lead to a long term plan to bring the institution back into compliance with best practice risk management standards.

Predrag Miocinovic and Donald R. van Deventer
Kamakura Corporation
Honolulu, August 25, 2009

 
