The copula method has been much vilified as the “Formula that killed Wall Street,” and this criticism is extremely well deserved. The copula approach, however, is still widely used and an understanding of how it works is important for a key reason: once one understands how it works, one understands why one should not use it for credit portfolio management. This blog talks about one key issue in the practical use of the copula method: how to derive a pair-wise correlation matrix for all counterparties on the assumption that one knows the proper intra-industry and inter-industry correlations. We thank Kamakura Managing Director for Research Professor Robert A. Jarrow for his very helpful comments.
The copula method grossly underestimated the risk in the collateralized debt obligation market in the credit crisis for these key reasons:
- The fundamental assumptions of the Merton model of risky debt, to which the copula method is closely related, are too simple to be accurate
- The copula method holds default probabilities constant over the modeling period (or at best allows them to drift in a non-random way). This produces an estimate of credit losses that is too big in the best parts of the business cycle and too small in the worst of times. This happens because default probabilities are not constant; they are random, and they move up and down over the business cycle.
For articles in the popular press on the inaccuracy of the copula method, we recommend the following:
Mark Whitehouse, “Slices of Risk: How a Formula Ignited Market That Burned Some Big Investors,” Wall Street Journal, September 12, 2005, page 1.
Felix Salmon, “Recipe for Disaster: The Formula that Killed Wall Street,” Wired Magazine, February 23, 2009.
For background proving the inaccuracy of the Merton framework compared to a more modern reduced form/logistic regression approach, these articles have been in circulation since as early as 2002 and summarize the facts nicely:
S. Bharath and T. Shumway, “Forecasting Default with the Merton Distance to Default Model,” Review of Financial Studies, May 2008.
J. Y. Campbell, J. Hilscher, and J. Szilagyi, “In Search of Distress Risk,” Journal of Finance, December 2008.
R. Jarrow, M. Mesler, and D. R. van Deventer, Kamakura Default Probabilities Technical Report, Kamakura Risk Information Services, Version 4.1, Kamakura Corporation memorandum, January 25, 2006.
D. R. van Deventer, L. Li and X. Wang, “Another Look at Advanced Credit Model Performance Testing to Meet Basel Requirements: How Things Have Changed,” The Basel Handbook: A Guide for Financial Practitioners, second edition, Michael K. Ong, editor, Risk Publications, 2006.
For a comparison of the copula method with other approaches, these recent blog entries and articles are relevant:
Jarrow, Robert A. and Donald R. van Deventer, “Synthetic CDO Equity: Short or Long Correlation,” Journal of Fixed Income, Spring, 2008.
Jarrow, Robert A. and Donald R. van Deventer, “Learning Curve: Synthetic CDO Equity: Short or Long Correlation,” Derivatives Week, March 24, 2008, pp. 8-9.
Jarrow, Robert A., Li Li, Mark Mesler, and Donald R. van Deventer, “CDO Valuation: Fact and Fiction,” The Definitive Guide to CDOs, Gunter Meissner, Editor, RISK Publications, 2008.
van Deventer, Donald R. “The Copula Approach to CDO Valuation: A Post Mortem,” Kamakura blog, www.kamakuraco.com, April 9, 2009. Redistributed on www.riskcenter.com, April 13, 2009.
van Deventer, Donald R. “Modeling Default for Credit Portfolio Management and CDO Valuation: A Menu of Alternatives,” Kamakura blog, www.kamakuraco.com, April 19, 2009. Redistributed on www.riskcenter.com, April 21, 2009.
van Deventer, Donald R. “Credit Portfolio Models: The Reduced Form Approach,” Kamakura blog, www.kamakuraco.com, June 5, 2009. Redistributed on www.riskcenter.com on June 9, 2009.
We now list the highly simplified assumptions of the copula method and explain their implications mathematically. We then summarize why one should reject these assumptions and move to a more realistic approach.
Common Assumptions of the Copula Method
There are as many variations on the copula method as there are users of the method. The volume mentioned above, edited by Gunter Meissner, provides a good sampling of approaches. In this section, we summarize common assumptions that are frequently employed. Among the vendors that use these assumptions are Standard & Poor’s, in its CDO Evaluator, Moody’s Investors Service, in its products Portfolio Manager and Risk Frontier, and Kamakura, in Kamakura Risk Manager (“KRM”). KRM also includes far superior techniques, and this blog entry is part of Kamakura’s ongoing effort to make our clients and potential clients aware of the model risk in the copula approach. Tens of billions of dollars have been lost using this approach in the 2007-2010 credit crisis, and the reasons are deeply rooted in these common assumptions:
- Credit modeling is done for a single period of arbitrary length, not on a multi-period approach with a dynamic balance sheet
- The Merton approach is used as the framework. This implies that default probabilities are set at time zero, and random values of the “value of company assets” at the end of the modeling period are simulated to determine whether each firm survives (assets worth more than liabilities) or defaults (assets worth less than liabilities) at that time. No other outcomes are possible in a single-period model.
- The return on the value of company assets is driven by one and only one common “macro” factor. This macro factor is not specifically identified, so no hedging with respect to movements in the factor is possible.
- The other contribution to random movements in the return on company assets is an idiosyncratic risk factor, which is assumed to be uncorrelated with the idiosyncratic risk of any other counterparty.
- All counterparties in a given industry sector are assumed to have the same pair-wise correlation in the returns on the value of company assets (a common assumption, although the Kamakura Risk Manager implementation allows every pair-wise correlation to be different). This single correlation parameter is called the “intra-industry correlation” in the returns on the assets of each pair of companies.
- If there are N industries, there are N macro factors. In common application, the correlation between the returns on each pair of these macro factors is identical, a single correlation parameter. In the Kamakura Risk Manager implementation, these correlations can be different for each pair of macro factors and we use that more general implementation in this example.
- If the counterparty is not a company, it is assumed that its default can be modeled as if it were a company in the Merton framework, even if the “counterparty” is the tranche of a mortgage backed or asset backed security or a tranche of another CDO (which would be the case in a “CDO squared”). This assumption is a gross error and we will ignore the implications of this error because they are so obvious and well documented.
For an example of a paper which makes these assumptions, see
Oldrich Vasicek, “Limiting Loan Loss Probability Distribution,” KMV Corporation working paper, August 9, 1991.
In the rest of this paper, we will use the same notation as Vasicek for consistency.
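Before turning to the worked example, the single-period, one-factor default simulation implied by the assumptions above can be sketched in a few lines of Python. All parameters below (portfolio size, default probability, correlation) are hypothetical and chosen only for illustration:

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(0)

n_firms = 100     # hypothetical portfolio size
pd_1yr = 0.02     # default probability, frozen at time zero (the copula assumption)
rho = 0.30        # assumed intra-industry asset correlation
n_sims = 50_000

# A firm defaults when its standardized asset return falls below the
# inverse-normal transform of the fixed default probability.
threshold = NormalDist().inv_cdf(pd_1yr)

x = rng.standard_normal(n_sims)                # one common macro factor per scenario
eps = rng.standard_normal((n_sims, n_firms))   # idiosyncratic shocks, mutually uncorrelated
z = np.sqrt(rho) * x[:, None] + np.sqrt(1 - rho) * eps

defaults = (z < threshold).sum(axis=1)         # number of defaults in each scenario
print("mean default rate:", defaults.mean() / n_firms)
```

Note that the default probability is frozen at time zero; only the scenario-end asset values are random, which is exactly the simplification criticized above.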
An Example of Common Copula Assumptions
We assume that our portfolio has five industry sectors. Within each industry sector, we assume the pair-wise correlation between the returns on the values of company assets for companies j and k is the same for all values of j and k. Although it is very common for this “intra-industry” correlation coefficient to be assumed the same for all sectors, in this example we use five different correlation values:
Without loss of generality, we assume there are 20 counterparties in our portfolio, spread over the five industry sectors as in the table above.
As is typical, we assume there is one macro factor driving the returns on the value of company assets in each industry sector, so there are 5 macro factors at work in this example. Although it is common to assume a single correlation figure for “inter-industry” correlation, in this example we allow the correlation between each pair of macro factors to be different. The correlations used in this example are as follows:
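The inputs are easiest to see in code. The sketch below uses hypothetical stand-ins for the two tables above; the specific numbers are illustrative, not the figures used in the original example:

```python
import numpy as np

# Illustrative stand-ins for the input tables; the values are hypothetical.
firms_per_sector = [4, 4, 4, 4, 4]            # 20 counterparties across 5 sectors
intra_rho = [0.20, 0.25, 0.30, 0.15, 0.35]    # intra-industry correlations, one per sector
q = np.array([                                # inter-industry macro-factor correlations q_mn
    [1.00, 0.40, 0.30, 0.20, 0.25],
    [0.40, 1.00, 0.35, 0.30, 0.20],
    [0.30, 0.35, 1.00, 0.25, 0.30],
    [0.20, 0.30, 0.25, 1.00, 0.40],
    [0.25, 0.20, 0.30, 0.40, 1.00],
])

# The macro-factor correlation matrix must itself be a valid correlation
# matrix: symmetric, ones on the diagonal, and positive definite.
assert np.allclose(q, q.T) and np.allclose(np.diag(q), 1.0)
np.linalg.cholesky(q)   # raises numpy.linalg.LinAlgError if not positive definite
```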
We now need to generate a 20 x 20 matrix which gives the pair-wise correlations in the returns on company assets so we can simulate period-end asset values in a way that consistently reflects the assumptions above. In the next section, we provide the mathematical foundations for this matrix. After that, we use these foundations to produce the matrix.
Mathematical Foundations for the Pair-wise Correlation Matrix
We use Vasicek’s notation in this section. For any company j with asset value A_j and asset volatility σ_j, the change in the value of company assets is written in stochastic process notation like this:

dA_j = r A_j dt + σ_j A_j dz_j
This “logarithmic Wiener process” has two terms. The first term is a drift term, which says that over time asset values will drift up at the (falsely) assumed constant rate of interest r. The second term induces random shocks from a Wiener process with a mean of zero, an instantaneous standard deviation of 1, and a constant correlation ρ_m within industry sector m for any pair of companies j and k. We write this formally using the expectation operator E as follows:

E[dz_j] = 0,  E[(dz_j)^2] = dt,  E[dz_j dz_k] = ρ_m dt for companies j and k in sector m
Because of these assumptions, the random shock term z_j for company j is a linear combination of impacts from the macro factor x_m driving industry sector m and an idiosyncratic risk factor unique to company j. Moreover, the weightings are a function of the correlation coefficient ρ_m that is assumed to apply to all pairs of companies in industry sector m:

dz_j = sqrt(ρ_m) dx_m + sqrt(1 − ρ_m) dε_j
Here again, x_m is the random (unspecified) macro factor that is the only driver of correlated movements in asset values for all companies in industry sector m. The epsilon ε_j is the idiosyncratic risk factor for company j. These two variables have mean zero, instantaneous standard deviation of 1, and they are uncorrelated with each other. The idiosyncratic risk factor is also uncorrelated with that of any other company. This is written formally as follows:

E[dx_m] = E[dε_j] = 0,  E[(dx_m)^2] = E[(dε_j)^2] = dt,  E[dx_m dε_j] = 0,  E[dε_j dε_k] = 0 for j ≠ k
Then the product of the shock terms for two firms j and k in the same industry sector m is

dz_j dz_k = [sqrt(ρ_m) dx_m + sqrt(1 − ρ_m) dε_j][sqrt(ρ_m) dx_m + sqrt(1 − ρ_m) dε_k]
When we write out this product and take its expected value using the assumptions above, we confirm that indeed

E[dz_j dz_k] = ρ_m E[(dx_m)^2] = ρ_m dt
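This expectation can be checked numerically. The sketch below, with an assumed intra-industry correlation of 0.25, simulates the one-factor decomposition for two firms in the same sector and confirms that the sample correlation of their shock terms is close to the assumed value:

```python
import numpy as np

rng = np.random.default_rng(42)

rho_m = 0.25      # assumed intra-industry correlation (illustrative)
n = 1_000_000     # number of simulated shock pairs

x_m = rng.standard_normal(n)     # common macro factor for sector m
eps_j = rng.standard_normal(n)   # idiosyncratic shock for firm j
eps_k = rng.standard_normal(n)   # idiosyncratic shock for firm k, independent of firm j's

z_j = np.sqrt(rho_m) * x_m + np.sqrt(1 - rho_m) * eps_j
z_k = np.sqrt(rho_m) * x_m + np.sqrt(1 - rho_m) * eps_k

print(np.corrcoef(z_j, z_k)[0, 1])   # close to rho_m = 0.25
```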
We now ask, “What if companies j and k are not both in industry sector m, but company j is in sector m and company k is in sector n?” Consistent with the approach taken above, we continue to assume that the macro factors are correlated with each other but not with any idiosyncratic risk factor. We also require that no idiosyncratic risk factor in industry sector m is correlated with any idiosyncratic risk factor in industry sector n. This insight is consistent with the conclusions of Jarrow, Lando and Yu provided that (a) only one macro factor drives each industry and (b) no macro factors have been omitted or misspecified. Our additional assumptions are written formally as follows, adding the industry sector to the company subscript to make the fact that the industries are different more clear:

E[dx_m dx_n] = q_mn dt,  E[dx_m dε_{k,n}] = 0,  E[dε_{j,m} dε_{k,n}] = 0
The parameter q_mn is the correlation coefficient on the returns of the macro factors driving industry sectors m and n. We then use the results above to write out the changes in the shock terms for firm j in industry sector m and firm k in industry sector n:

dz_{j,m} = sqrt(ρ_m) dx_m + sqrt(1 − ρ_m) dε_{j,m}
dz_{k,n} = sqrt(ρ_n) dx_n + sqrt(1 − ρ_n) dε_{k,n}
We then write out the product of these two changes in shock terms:

dz_{j,m} dz_{k,n} = sqrt(ρ_m ρ_n) dx_m dx_n + sqrt(ρ_m (1 − ρ_n)) dx_m dε_{k,n} + sqrt(ρ_n (1 − ρ_m)) dx_n dε_{j,m} + sqrt((1 − ρ_m)(1 − ρ_n)) dε_{j,m} dε_{k,n}
When we take the expected value of this expression using the assumptions above, only the first term survives, and we get the following expression for the correlation of asset returns on firms in industry sectors m and n:

E[dz_{j,m} dz_{k,n}] = sqrt(ρ_m ρ_n) q_mn dt
We now use this expression to populate the pair-wise correlation in asset returns for the 20 firms in 5 industries in our worked example.
Continuing the Example
Using the expression for correlations between asset returns on firms in different industries, plus the inputs given above, the 20 x 20 correlation matrix for asset returns in our example can be calculated element by element: the diagonal entries are 1, each intra-industry entry equals the sector’s intra-industry correlation ρ_m, and each inter-industry entry equals sqrt(ρ_m ρ_n) q_mn.
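A sketch of that calculation in Python, again using hypothetical stand-ins for the input tables (20 firms, 4 per sector, with illustrative intra-industry correlations and macro-factor correlations), might look like this:

```python
import numpy as np

# Hypothetical stand-ins for the input tables (illustrative values only).
sector_of_firm = np.repeat(np.arange(5), 4)      # firms 0-3 in sector 0, 4-7 in sector 1, ...
rho = np.array([0.20, 0.25, 0.30, 0.15, 0.35])   # intra-industry correlations, one per sector
q = np.array([                                   # macro-factor (inter-industry) correlations q_mn
    [1.00, 0.40, 0.30, 0.20, 0.25],
    [0.40, 1.00, 0.35, 0.30, 0.20],
    [0.30, 0.35, 1.00, 0.25, 0.30],
    [0.20, 0.30, 0.25, 1.00, 0.40],
    [0.25, 0.20, 0.30, 0.40, 1.00],
])

n = len(sector_of_firm)
corr = np.empty((n, n))
for j in range(n):
    for k in range(n):
        m, s = sector_of_firm[j], sector_of_firm[k]
        if j == k:
            corr[j, k] = 1.0                                  # a firm with itself
        elif m == s:
            corr[j, k] = rho[m]                               # intra-industry: rho_m
        else:
            corr[j, k] = np.sqrt(rho[m] * rho[s]) * q[m, s]   # inter-industry: sqrt(rho_m rho_n) q_mn

# Sanity checks: the matrix must be symmetric and positive definite for simulation.
assert np.allclose(corr, corr.T)
np.linalg.cholesky(corr)   # raises numpy.linalg.LinAlgError if not positive definite
```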
Once we have the matrix in hand, the usual methods for simulating correlated normally distributed variables can be used. The intra-industry correlations form blocks along the diagonal of the matrix. As this matrix gets larger, its decomposition and inversion become more computationally demanding. Kamakura Corporation announced in a recent press release that matrices containing up to 999,999 random variables (counterparties in this case) can be processed, but this capability is not generally available in more basic software packages, particularly in common spreadsheet software. Even with this capability, however, the copula approach is so flawed that market participants lost tens of billions of dollars from its use in the 2007-2010 credit crisis.
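The most common of those simulation methods is the Cholesky decomposition. A minimal sketch, using a small illustrative 3 x 3 matrix in place of the 20 x 20 matrix of our example:

```python
import numpy as np

rng = np.random.default_rng(7)

# A small illustrative 3 x 3 correlation matrix stands in for the 20 x 20
# matrix of the text; any symmetric positive definite matrix works the same way.
corr = np.array([
    [1.00, 0.30, 0.10],
    [0.30, 1.00, 0.20],
    [0.10, 0.20, 1.00],
])

L = np.linalg.cholesky(corr)            # corr = L @ L.T
n_sims = 200_000
u = rng.standard_normal((n_sims, 3))    # independent standard normal draws
z = u @ L.T                             # correlated standard normals with the target matrix

print(np.corrcoef(z, rowvar=False).round(2))
```

The sample correlation matrix of the simulated draws converges to the target matrix as the number of scenarios grows.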
What is wrong with the copula method? Almost all of its principal assumptions are false, and the impact of their “falseness” is a very large degree of inaccuracy. The copula method should only be used as a benchmark to measure the difference between pre-credit-crisis “common practice” and a modern 21st century reduced form approach that recognizes that multiple macro factors drive risk, that defaults and cash flows occur at multiple points in time, that interest rates are random, and that balance sheets are dynamic. For details on that approach, please see the blog entries above and contact us about KRIS default probabilities and Kamakura Risk Manager at email@example.com.
Donald R. van Deventer
Honolulu, January 24, 2011