Our April 19, 2009 blog post “Modeling Default for Credit Portfolio Management and CDO Valuation: A Menu of Alternatives” outlined a number of different ways in which one can model default among many counterparties, from retail to corporate to sovereign, in a correlated way. The most modern technology for doing this is the reduced form modeling technique first introduced by Robert Jarrow and Stuart Turnbull in 1995. This post provides a simple worked example of how to simulate default probabilities forward in a correlated way using two counterparties and three macro factors.
Kamakura Risk Manager and Kamakura Risk Information Services both offer a multiple models approach to simulating default in a realistic way on a multiperiod basis. Common practice for many years was the copula approach, which has been blamed by publications from Mother Jones to Wired for the current credit crisis. We discussed the problems with the copula approach in our April 9, 2009 post “The Copula Approach to CDO Valuation: A Post Mortem (Updated April 27, 2009).”
In this post we discuss two hypothetical corporations, ABC Company and XYZ Company, but the same example with different explanatory variables would apply equally well to retail or sovereign counterparties. We assume that we are a close relative of God and we are told that ABC Company’s 1 year default probability is sensitive to the 2 year change in home prices and the real growth rate in gross domestic product, as shown in the following chart:
XYZ Company is sensitive to a separate set of macroeconomic factors and through divine intervention we are told that its one year default probability has the following sensitivity to the 2 year change in home prices and the 10 year yield on government bonds:
In a reduced form modeling context, theoretical default probabilities are usually modeled in continuous time, and the instantaneous default rate is normally given as a linear function of macroeconomic factors which move randomly. For simulation purposes, these instantaneous default rates are normally converted to default probabilities that apply over discrete monthly, quarterly or annual time periods. They can be derived either from historical default databases, like that underlying Kamakura Risk Information Services, or from observable risky securities prices. For simulation purposes, the logistic formula is an attractive modeling choice because:
- It is the maximum likelihood estimator for 0/1 problems like default/no default
- It will never produce simulated default probabilities outside of the range from 0 to 100%
The logistic function takes the form:

P[t] = 1 / (1 + exp[-(alpha + beta1 X1 + beta2 X2 + … + betan Xn)])

where P[t] is the default probability over some discrete interval. The variables Xi are the explanatory variables, and alpha and the betas are the “best fitting” coefficients that produce the maximum likelihood estimates of default probabilities.
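As a minimal sketch, the logistic formula can be evaluated in a few lines of Python; the helper name `logistic_pd` is our own, not from any Kamakura product:

```python
import math

def logistic_pd(alpha, betas, xs):
    """One-period default probability as a logistic function of macro factors.

    alpha  -- intercept of the linear index
    betas  -- coefficients beta1..betan
    xs     -- explanatory variables X1..Xn (as decimals)
    """
    index = alpha + sum(b * x for b, x in zip(betas, xs))
    # The logistic transform maps any real-valued index into (0, 1),
    # so simulated default probabilities can never leave the 0-100% range.
    return 1.0 / (1.0 + math.exp(-index))
```

An index of zero maps to a 50% default probability; large negative indexes push the probability toward zero and large positive indexes push it toward one.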
For ABC Company, the coefficients which produce the table of default probabilities above are consistent with this logistic function for one year time intervals:
X1 is the annual growth rate in real gross domestic product and X2 is the 2 year change in home prices, both expressed as a decimal.
For XYZ Company, the coefficients which produce the default probabilities above as a function of 10 year government yields and the two year change in home prices are given here:
X2, as before, is the 2 year change in home prices and X3 is the 10 year government yield, both expressed as a decimal.
We can simulate these default probabilities forward for M scenarios in N annual time steps by generating M scenarios for the three macro factors that drive defaults for these two firms:
- The growth in real gross domestic product
- The 2 year change in home prices
- The level of 10 year government yields
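A minimal sketch of such a scenario generator follows. The random-walk-with-drift dynamics, normal shocks, and Cholesky factorization used to impose the correlations are all illustrative assumptions, not recommendations; in practice each factor's dynamics should be estimated from data:

```python
import math
import random

def cholesky3(corr):
    """Lower-triangular Cholesky factor of a 3x3 correlation matrix."""
    l = [[0.0] * 3 for _ in range(3)]
    for i in range(3):
        for j in range(i + 1):
            s = sum(l[i][k] * l[j][k] for k in range(j))
            l[i][j] = math.sqrt(corr[i][i] - s) if i == j else (corr[i][j] - s) / l[j][j]
    return l

def simulate_factors(m, n, corr, vols, drifts, start, seed=0):
    """M scenarios of N annual steps for the three macro factors
    [real GDP growth, 2 yr home price change, 10 yr govt yield]."""
    rng = random.Random(seed)
    l = cholesky3(corr)
    scenarios = []
    for _ in range(m):
        x, path = list(start), []
        for _ in range(n):
            # Correlate independent standard normal draws via the Cholesky factor.
            z = [rng.gauss(0.0, 1.0) for _ in range(3)]
            eps = [sum(l[i][k] * z[k] for k in range(i + 1)) for i in range(3)]
            x = [x[i] + drifts[i] + vols[i] * eps[i] for i in range(3)]
            path.append(list(x))
        scenarios.append(path)
    return scenarios
```

A production implementation would use a vectorized library rather than pure Python, and richer dynamics (the random walk here, for example, allows negative yields).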
ABC Company and XYZ Company have default probabilities which we know are correlated because they have a common dependence on the 2 year change in home prices as given by the tables above. Assuming the current real GDP growth is 0% and that 10 year government yields are 5.00%, a stress test of default probabilities for both firms with respect to the 2 year change in home prices is as follows:
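Such a stress test can be sketched in Python. The coefficients below are hypothetical placeholders, not the fitted values behind the charts above; they are chosen only so that default probabilities rise as home prices fall, consistent with the discussion:

```python
import math

def logistic_pd(alpha, betas, xs):
    return 1.0 / (1.0 + math.exp(-(alpha + sum(b * x for b, x in zip(betas, xs)))))

# Hypothetical coefficients for illustration only.
ABC_ALPHA, ABC_BETAS = -4.0, [-8.0, -6.0]   # X1 = real GDP growth, X2 = 2 yr home price change
XYZ_ALPHA, XYZ_BETAS = -4.5, [-5.0, 20.0]   # X2 = 2 yr home price change, X3 = 10 yr govt yield

GDP_GROWTH, TEN_YEAR_YIELD = 0.00, 0.05     # held fixed during the stress test

stress = []
for hp in [-0.30, -0.20, -0.10, 0.00, 0.10, 0.20]:
    pd_abc = logistic_pd(ABC_ALPHA, ABC_BETAS, [GDP_GROWTH, hp])
    pd_xyz = logistic_pd(XYZ_ALPHA, XYZ_BETAS, [hp, TEN_YEAR_YIELD])
    stress.append((hp, pd_abc, pd_xyz))
    print(f"home prices {hp:+.0%}: ABC PD {pd_abc:.2%}  XYZ PD {pd_xyz:.2%}")
```

Because both firms load negatively on the home price factor in this sketch, both default probabilities fall monotonically as home prices improve.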
ABC Company and XYZ Company may also have other sources of implicit correlation if the growth in real gross domestic product (which directly affects only ABC Company) and the 10 year government yield (which directly affects only XYZ Company) are correlated.
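The correlation induced by the shared home price factor can be demonstrated numerically: simulate the common factor, feed it into each firm's probability function, and measure the sample correlation of the resulting default probabilities. All coefficients and distributional choices below are illustrative assumptions:

```python
import math
import random

def logistic_pd(alpha, betas, xs):
    return 1.0 / (1.0 + math.exp(-(alpha + sum(b * x for b, x in zip(betas, xs)))))

rng = random.Random(42)
abc_pds, xyz_pds = [], []
for _ in range(10000):
    hp = rng.gauss(0.00, 0.10)    # common 2 yr home price change factor
    gdp = rng.gauss(0.02, 0.02)   # directly affects only ABC Company
    yld = rng.gauss(0.05, 0.01)   # directly affects only XYZ Company
    abc_pds.append(logistic_pd(-4.0, [-8.0, -6.0], [gdp, hp]))   # hypothetical coefficients
    xyz_pds.append(logistic_pd(-4.5, [-5.0, 20.0], [hp, yld]))   # hypothetical coefficients

def corr(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b)) / n
    va = sum((x - ma) ** 2 for x in a) / n
    vb = sum((y - mb) ** 2 for y in b) / n
    return cov / math.sqrt(va * vb)

print(f"PD correlation from the shared home price factor: {corr(abc_pds, xyz_pds):.2f}")
```

Even though the firm-specific factors here are drawn independently, the common home price exposure alone produces strongly positively correlated default probabilities.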
Simulating these macro factors forward in an accurate way requires a careful analyst to specify:
- The probability distribution for each factor’s random movements. Often these distributions are not normally distributed, although that assumption is common.
- Whether or not the distribution of the factor in period N is in fact independent of its value in period N-1. Often, they are NOT independent, although that independence assumption is common.
- The correlations between the random factors themselves
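The second point can be illustrated with a first-order autoregressive process, in which the value in period N depends explicitly on the value in period N-1; a minimal sketch:

```python
import random

def ar1_path(n, phi, mu, sigma, x0, rng):
    """AR(1) process: x[t] = mu + phi * (x[t-1] - mu) + sigma * eps[t].

    With phi != 0, the factor in period t is NOT independent of period t-1;
    setting phi = 0 recovers the common (and often wrong) independence assumption.
    """
    path, x = [], x0
    for _ in range(n):
        x = mu + phi * (x - mu) + sigma * rng.gauss(0.0, 1.0)
        path.append(x)
    return path
```

The normal shock here is itself an assumption; a fat-tailed draw can be substituted without changing the autoregressive structure.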
In our simulations of M scenarios over N periods, we can value a portfolio of securities issued by ABC Company and XYZ Company if we know how credit spreads move, conditional on the level of default probabilities for each firm and the three macro factors above. That was the subject of our September 23, 2009 blog post.
This simple example shows that using reduced form default models as part of a comprehensive enterprise-wide risk management simulation is powerful and straightforward. It allows the analyst to produce U.S. government-mandated stress tests with respect to each macro factor that is important in driving defaults. The macro factor links are explicit and derived from either historical data or current market prices and their history. Any simulation approach that uses ratings as part of the simulation will not have the same degree of accuracy or transparency with respect to the macro factors which drive risk. The reason is that the rating agencies have been unable to articulate even the time horizon over which ratings apply, let alone the exact formulas by which macro factors affect ratings.
With the reduced form approach, the linkages are clear and based on best practice econometric methodology.
Donald R. van Deventer
Honolulu, September 24, 2009