Abstract
It is common practice among credit analysts to use historical default rates published by rating agencies as a proxy for true forward-looking, firm-by-firm corporate default probabilities. Part 1 of this series showed that such approximations of true portfolio losses are grossly inaccurate. In this installment of our three-part series, we try to remedy the situation by replacing the historical default rates associated with credit ratings over the last quarter century with forward-looking “big data” default probabilities that represent best practice. Using the most accurate Kamakura Risk Information Services default probability model as a base, we show that ratings do an exceptionally poor job of explaining the variation in default probabilities over the universe of all 2,764 rated firms world-wide. We conclude again that the introduction of ratings into the credit portfolio management process seriously degrades the accuracy of risk assessment. Using the default probabilities themselves, ignoring ratings, is the best-practice standard.
Introduction
In part 1 of this three-part series, we found that using the historical one-year default rates associated with each credit rating grade seriously understated portfolio credit risk for the universe of rated firms world-wide over a one to four-year horizon. Over a five to ten-year horizon, the use of historical ratings-based default rates understated total portfolio risk by a very large amount. In this note, we seek to salvage the common use of legacy credit ratings, invented in 1860 and largely unchanged since then, by correcting two obvious sources of inaccuracy. The first source of inaccuracy is a lack of granularity, even by the standard of a rating system with 20 grades. When one restricts the data used to free public sources, the most recent report on historical ratings grades (S&P Global, Inc., 2017) had data for 7 aggregated ratings grades: AAA, AA, A, BBB, BB, B and CCC/C. In this note, we employ all 20 grades. The second source of inaccuracy is that historical default rates are, well, historical. They look backward over 25 years or more, rather than forward over the modeling period of interest. The aggregate credit risk of the rated universe will match that historical experience only if, by coincidence, the future is exactly like the past. We ask the question, “Can we remedy the backward-looking nature of the default rates used by assigning forward-looking default probabilities to each ratings grade instead?”
Section 1 summarizes our methodology and provides visual evidence of the correlation between the 20 ratings grades and modern big data default probabilities. Section 2 reports on the ability of ratings to explain the variation in default probabilities among the rated universe for all common modeling horizons. Section 3 previews the results of a 10-year forward-looking valuation of traded corporate bonds in the U.S. market using both the default probabilities directly and ratings-based valuation; we discuss that exercise in more depth in part 3 of this series. Section 4 summarizes our conclusions.
1. Methodology
To replace the backward-looking historical default rates, we use the most recent and most accurate version of the Kamakura Risk Information Services (“KRIS”) default probabilities, the Jarrow-Chava version 6.0 reduced form default probability model. While the default probabilities are estimated from 2.2 million observations and more than 2,600 corporate failures since 1990, they are forward-looking in two senses: they incorporate the current values of the explanatory variables for each public firm, and they incorporate the insights of Jarrow, Lando and Yu [2005] about the links between regression-based default probabilities and the risk-neutral default probabilities used for valuation. On June 15, KRIS default probabilities were available for 38,931 public firms, but we use only the default probabilities for the 2,764 firms with credit ratings.
As in the initial stages of almost any complex analysis, we review the visual evidence first to assess the nature and difficulty of the problem at hand. The graph below, taken directly from the KRIS daily updates, shows the breakdown of 1-year default probabilities by default probability level (displayed by row) and credit rating (by column) on June 15.
The graph makes a number of things quite clear. First, there is a high degree of dispersion in default probabilities, even given that the maturity and rating are the same in a given column. Second, rather than a straight diagonal line showing perfect correlation between default probabilities and ratings grade, the display of the combination by rating and default probability is more square than linear. This means that we should expect to find a fairly low correlation between ratings and the best available KRIS default probabilities.
We seek to quantify that relationship in the next section.
2. Measuring Ratings’ Ability to Fit the Default Probabilities of the Rated Universe
As in section 1, we continue to use the 1-year time horizon for KRIS default probabilities in this section. Although 1 year is fairly short by traditional risk management standards, it is a very common time horizon for credit portfolio managers, and we conform to that standard for our first example.
We seek to predict the default probability of each rated firm as a function of its rating. There are two common choices for explanatory variables:
a. An ordinal variable, with values of 1 for AAA, 2 for AA+, 3 for AA and so on.
b. A series of dummy variables with a 1 if the firm is a BBB firm, for example, and 0 otherwise.
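For concreteness, the two encodings can be sketched in Python. This is an illustration only, on a hypothetical three-grade subset; it is not KRIS code:

```python
# Illustrative only: the two candidate encodings of a rating column,
# shown on a hypothetical three-grade subset.
ratings = ["AAA", "AA+", "AA", "AA+", "AAA"]

# (a) ordinal variable: AAA -> 1, AA+ -> 2, AA -> 3, and so on
scale = {"AAA": 1, "AA+": 2, "AA": 3}
ordinal = [scale[r] for r in ratings]

# (b) dummy variables: one 0/1 indicator column per grade
grades = sorted(set(ratings), key=scale.get)
dummies = [[1 if r == g else 0 for g in grades] for r in ratings]
```

The ordinal encoding imposes equal spacing between adjacent grades; the dummy encoding lets each grade carry its own fitted level, at the cost of more coefficients.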
Most analysts who have tried this exercise find that the ordinal variable very often leads to predictions of negative default probabilities. To avoid that problem, we fit the Kamakura Default Probabilities (KDP) with a series of dummy variables for each rating, using a log form like this:

ln[KDPi] = α + β1 Ii,1 + β2 Ii,2 + … + β20 Ii,20 + ei
Here I is an indicator variable that is 1 if the company has the given rating and 0 otherwise. Across the 2,764 firms, we seek to find the 21 coefficients (the constant α and one coefficient β for each of the 20 ratings grades) and measure the goodness of fit for all firms with ratings at a default probability maturity of one year. We use Stata version 15 and generalized linear methods with a log link function to perform this analysis. Note that the best estimate of the expected value of the fitted KDP is not exp(predicted ln[KDP]) with mean[ei] set to zero, because the full distribution of the error term, not just its expected value, affects the expected value E[KDPi].
The accuracy of this fitting exercise is reported in the graph below:
At a maturity of 1 year, ratings explain less than 26% of the variation in the KRIS default probabilities over the 2,764 firms with credit ratings. Note that the default probabilities are displayed after being sorted first by ratings grade and second by default probability level. Given that display, you can see that, at almost every ratings level, there is a skew in the default probabilities and that a single point estimate within each ratings grade does a very poor job of predicting the Kamakura Default Probability level.
The Kamakura Default Probabilities are constructed from a term structure of 120 monthly default probabilities. The first default probability is the probability that the firm fails in the first month. The second default probability is the probability that the firm fails in month 2, conditional on surviving month 1, and so on. From these monthly default probabilities, the annualized KRIS default probabilities for maturities of 1, 3, and 6 months and 1, 2, 3, 4, 5, 7 and 10 years are constructed.
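One standard convention for turning such a monthly conditional term structure into annualized rates (not necessarily KRIS's exact convention) compounds the monthly survival probabilities and then annualizes the cumulative survival rate:

```python
def annualized_pd(monthly_pds, months):
    """Annualized default probability over a horizon of `months` months,
    from a term structure of monthly conditional default probabilities."""
    survival = 1.0
    for p in monthly_pds[:months]:
        survival *= 1.0 - p        # survive each successive month in turn
    # convert cumulative survival over the horizon to an annualized rate
    return 1.0 - survival ** (12.0 / months)

term_structure = [0.001] * 120     # flat 10 bp per month, 120 months
pd_1y = annualized_pd(term_structure, 12)
pd_10y = annualized_pd(term_structure, 120)
```

With a flat monthly curve, as in this toy input, the annualized rate is the same at every maturity; real term structures slope up or down with the firm's credit outlook.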
The graph below shows that the accuracy of credit ratings’ prediction of KRIS default probabilities peaks at year 7. At no maturity, however, do the 20 credit rating levels explain more than 35% of the variation in default probabilities within each maturity category. This is a shocking indictment of the basic tenet of credit ratings: that a grouping of firms into 20 “buckets” is sufficient for credit portfolio management. The appendix shows the detailed results for each of the maturities summarized in the chart below. Please note that these graphics are produced daily by Kamakura Risk Information Services as part of Kamakura’s ongoing model validation process. They are available upon request by clients and friends of the firm from info@kamakuraco.com.
3. Default Probabilities versus Ratings for Valuation and IFRS9
Above and beyond the ability of ratings to predict default probabilities, we are interested in the ability of ratings to determine “fair value” bond prices compared to the use of other methodologies, like the use of Kamakura Default Probabilities for valuation explained by Jarrow and van Deventer [2018]. Fair value is essential to credit risk-related accounting standards like International Financial Reporting Standard 9 and the U.S. standard regarding Current Expected Credit Losses. Over more than 70,000 observations of traded bond prices in the U.S. corporate bond market, valuation using Kamakura Default Probabilities is more accurate in over 89% of the comparisons. Ratings provide a very weak foundation for determining fair value. We address the reasons for this in part 3 of this series.
4. Conclusions
In this note, we ask the question “Can the replacement of historical default rates by ratings grade with forward-looking default probabilities by ratings grade significantly improve the accuracy of credit portfolio management?” The answer is clearly no. First, in order to map default probabilities to ratings, one must have the default probabilities. Second, ratings explain less than 35% of the variation in default probabilities at every maturity tested. Third, given that ratings simply obscure a clear view of default risk, it’s obvious that one should ignore them and simulate credit risk with the default probabilities directly. This is confirmed by valuation analysis that we discuss in the third and final installment in this series.
References
Hilscher, Jens and Mungo Wilson, “Credit Risk and Credit Ratings: Is One Measure Enough?” Management Science, October 17, 2016.
Jarrow, Robert, David Lando, and Fan Yu, “Default Risk and Diversification: Theory and Applications,” Mathematical Finance, January 2005, pp. 1-26.
Jarrow, Robert and Donald R. van Deventer, “The Ratings Chernobyl,” Kamakura Corporation blog at www.kamakuraco.com, reproduced by the Global Association of Risk Professionals and www.riskcenter.com, March 9, 2009.
Jarrow, Robert and Donald R. van Deventer, “The Valuation of Corporate Bonds,” Kamakura Corporation and Cornell University memorandum, May 1, 2018.
S&P Global Ratings, “Default, Transition, and Recovery: 2016 Annual Global Corporate Default Study And Rating Transitions,” April 13, 2017.
United States Senate Permanent Subcommittee on Investigations, Committee on Homeland Security and Governmental Affairs, “Wall Street and the Financial Crisis: Anatomy of a Financial Collapse,” April 13, 2011.
van Deventer, Donald R. “How Stale are Credit Ratings?,” www.seekingalpha.com, July 8, 2013.
van Deventer, Donald R. “’Point in Time’ versus ‘Through the Cycle’ Credit Ratings: A Distinction without a Difference,” Kamakura Corporation blog at www.kamakuraco.com, May 9, 2009.
van Deventer, Donald R., “A Quantitative Assessment of Errors from the Use of Credit Ratings in Credit Portfolio Management, Part 1,” Kamakura Corporation blog at www.kamakuraco.com and www.seekingalpha.com, June 17, 2018.
Appendix
This appendix contains visual confirmation of the ability of ratings to predict Kamakura Default Probabilities, using the methodology described above, for each maturity in the summary chart in Section 2.