The Ratings Chernobyl
Robert A. Jarrow and Donald R. van Deventer
March 9, 2009
Distributed by www.riskcenter.com and the Global Association of Risk Professionals
In the current credit crisis, the massive downgrades of collateralized debt obligations (CDOs) and structured investment vehicles (SIVs), especially those backed by subprime mortgages, have received extensive media coverage. As a result, it is widely recognized that the original ratings of these complex structured products were in error, whether due to flawed methodology or to incentive conflicts. What is not well known, and is an even more important development, is the increasing erosion in the accuracy of corporate debt ratings. The conservatorship of the Federal National Mortgage Association and the Federal Home Loan Mortgage Corporation, both rated AAA at the time, reflects this erosion. Other examples include the government bailout of AIG, rated AAA as recently as 2005, and the government takeover of Kaupthing Bank in Iceland, rated AAA in 2007. These examples show that the unreliability of corporate ratings is pervasive and significant.

Our firm, Kamakura Corporation, publishes quantitative default probabilities on 22,000 public companies in 30 countries. The investment community often asks us to what extent this erosion will affect the long-term viability of the rating agencies and the use of their ratings. In 2006, three researchers at Kamakura (van Deventer, Li and Wang, 2006) published a study documenting in detail the accuracy advantage that quantitative models have over ratings. What is the reason for this decline in ratings accuracy? And what can management and regulators do about it? Here is our analysis.
The Good Ole Days
The use of “credit ratings,” as opposed to quantitative default probabilities, dates to an era in the early 1900s when even the simple addition and subtraction of the numbers in financial statements was a tedious, manual exercise and when financial disclosure was far more limited. In the early days of the ratings organizations, the assignment of corporate debt to one of 20 or so rating “grades” added value by relieving each investor of the need to make these tedious calculations individually.
Historically, governments and institutional investors protected the rating firms’ franchise in both overt and subtle ways. In the United States, for example, the concept of a “nationally recognized statistical ratings organization” served to fortify the oligopoly of the major rating agencies. Institutional investors inadvertently added to this franchise by expressing investment criteria as a function of ratings. The New Capital Accords put forth by the Basel Committee on Banking Supervision further fostered this rating franchise by emphasizing the use of credit ratings in the determination of capital requirements. But, times are changing.
Are Credit Ratings Outdated?
We argue “yes.” Quantitative default probability models are the “new ratings.” In the current era of continuous corporate disclosure and highly quantitative default models, the need for ratings is sharply diminished and their future is in doubt. Indeed, as the head of corporate ratings at one of the major agencies said five years ago, there is a high degree of concern about the viability of a rating firm’s franchise. There are three reasons for the declining need for ratings. The first is that quantitative analysis of public firms can now be done in (almost) real time, thanks to advances in computing power and the electronic availability of accounting information and securities prices. The rating agencies have lost their “computation advantage.” The second is that the rating agencies have lost most (but not all) of their informational monopoly, for reasons related to corporate governance and securities regulation: management generally no longer treats the rating agencies as favored insiders, and financial information and forecasts must now be distributed to all concerned parties. Indeed, Regulation FD, adopted in August 2000, requires simultaneous disclosure of financial information, except if the disclosure is “to an entity whose primary business is the issuance of credit ratings, provided the information is disclosed solely for the purpose of developing a credit rating and the entity's ratings are publicly available.” Although Regulation FD thus allows selective disclosure of issuer information to rating agencies, we believe the use of this selective disclosure has declined greatly, which points to the third reason ratings are becoming outdated: the increasingly obvious superiority of quantitative default models over ratings on accuracy grounds.
Are Credit Ratings Inaccurate?
In 2006, van Deventer, Li and Wang published their study showing that quantitative default probabilities predict defaults more accurately than credit ratings do, both in the very short term and at longer horizons out to five years. The reason for this accuracy differential is easy to understand. In an earlier 2006 study, Kamakura Corporation analyzed the factors that best predicted credit ratings, without using the firm’s current credit rating as an input. The results were no surprise to longtime observers of the rating agencies. Even with default probabilities included as explanatory factors, company size was the overwhelmingly dominant determinant of a debt’s credit rating. All other things being equal, big companies get better ratings than small companies. This implicit assumption of “too big to fail” has unfortunately proven untrue in recent months.
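The accuracy comparisons behind studies of this kind are typically summarized by a receiver operating characteristic (ROC) measure: the probability that a randomly chosen defaulter was scored riskier than a randomly chosen survivor. As a minimal sketch — the firms, probabilities, and rating grades below are entirely invented for illustration, not data from the van Deventer, Li and Wang study — the calculation looks like this:

```python
# Sketch: comparing default predictors by ROC area under the curve (AUC).
# All data here is synthetic and illustrative.

def roc_auc(scores, defaulted):
    """AUC = probability that a randomly chosen defaulter is scored
    riskier than a randomly chosen survivor (ties count one half)."""
    pos = [s for s, d in zip(scores, defaulted) if d]
    neg = [s for s, d in zip(scores, defaulted) if not d]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical firms: (quantitative default probability,
#                      rating grade as a numeric score where higher = riskier,
#                      did the firm actually default?)
firms = [
    (0.001, 1, False), (0.003, 2, False), (0.020, 2, False),
    (0.150, 3, True),  (0.005, 1, False), (0.180, 2, True),
    (0.060, 4, False), (0.250, 5, True),  (0.002, 3, False),
]
dp_auc = roc_auc([f[0] for f in firms], [f[2] for f in firms])
rating_auc = roc_auc([f[1] for f in firms], [f[2] for f in firms])
print(f"default-probability AUC: {dp_auc:.3f}")   # 1.000 on this toy data
print(f"rating-grade AUC:        {rating_auc:.3f}")  # 0.750 on this toy data
```

One mechanical reason a continuous default probability tends to score higher on this measure is that a coarse rating scale lumps firms of genuinely different risk into the same grade, producing ties that a finer-grained score can separate.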
The erosion of the ratings’ predictive power relative to quantitative models so worried the management of Moody’s Investors Service that it acquired KMV LLC, a successful provider of quantitative default probabilities, in 2002. Within Moody’s Investors Service, KMV has been kept isolated from the traditional ratings process, and KMV’s quantitative models have never been compared to Moody’s ratings in the annual review of ratings performance.
The Ratings Chernobyl
Recent events show how ratings and politics have intertwined to create a ratings “Chernobyl.” Each year in February, Moody’s Investors Service and Standard & Poor’s Corporation publish an assessment of the accuracy of corporate ratings over various time horizons. Our bibliography cites the most recent of these studies.
To demonstrate the forthcoming decline in ratings accuracy, let us consider the following table from the February 2008 ratings performance study by Moody’s Investors Service:
The table shows that the cumulative loss experience on companies rated Aaa by Moody’s Investors Service has been zero in the first three years after the Aaa rating is assigned, and only 0.19% cumulatively over a twenty-year horizon. This record is consistent with accurate and stable credit ratings.
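For readers who want to reproduce numbers of this kind, a cumulative default (or loss) rate over a horizon is built from marginal year-by-year rates through survival probabilities. The sketch below uses invented marginal rates for illustration, not Moody’s actual cohort data:

```python
# Sketch: building cumulative default rates of the kind shown in agency
# performance studies from marginal (year-by-year) rates.
# The marginal rates below are hypothetical, not Moody's data.

def cumulative_rates(marginal):
    """Cumulative default rate after each horizon year:
    C_t = 1 - prod_{i<=t} (1 - d_i), where d_i is the marginal rate."""
    cum, survive = [], 1.0
    for d in marginal:
        survive *= 1.0 - d      # probability of surviving through year i
        cum.append(1.0 - survive)
    return cum

# Hypothetical Aaa-style cohort: no marginal defaults in years 1-3,
# then tiny marginal rates in later years.
marginal_aaa = [0.0, 0.0, 0.0, 0.0002, 0.0003]
for year, c in enumerate(cumulative_rates(marginal_aaa), start=1):
    print(f"year {year}: cumulative default rate {c:.4%}")
```

Because each year’s rate compounds on the survivors of the prior years, the cumulative series can only rise with the horizon, which is why even a spotless three-year record is compatible with nonzero twenty-year losses.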
However, the forthcoming February 2009 study for both major rating agencies will certainly contain a “ratings Chernobyl” that will contaminate this table for years to come, just as radiation has contaminated the Ukrainian countryside around Chernobyl. Why? Consider the following examples:
- Bear Stearns, rated A+ as recently as October 2007, was acquired by JPMorgan Chase in a Federal Reserve-assisted transaction announced on March 16, 2008.
- AAA-rated FNMA was placed into conservatorship on September 7, 2008.
- AAA-rated FHLMC was placed into conservatorship on September 7, 2008.
- Lehman Brothers, rated A+ as recently as May 2008, declared bankruptcy on September 15, 2008.
- AIG, which had been rated AAA as recently as 2005, was rescued by the U.S. government on September 16, 2008.
- Kaupthing Bank, rated AAA as recently as February 2007, was nationalized by the Icelandic government on October 9, 2008.
- Wachovia, rated AA- in June 2008, was ‘purchased’ by Citigroup on September 29 in a distressed merger, only to be sold later at a higher price to Wells Fargo.
- Citigroup, rated AA as recently as December 2007, was rescued by the U.S. government on November 23, 2008.
And this is only a partial list. When the full extent of the current credit crisis has been revealed, it is possible that traditional ratings will no longer meet the standards of credit model accuracy required by the New Capital Accords of the Basel Committee on Banking Supervision.
As we noted earlier, many of these rating inaccuracies were caused by the rating agencies’ bias toward large firms. Another significant contributor to these inaccuracies is the politics of the rating process.
The Political Ratings Process
At a February 2008 credit risk conference sponsored by DePaul University, a senior executive of a major rating agency said that investors don’t like volatile ratings. He went on to say that agencies do not change a rating until they are sure the change won’t be quickly reversed. Implicitly, he argued that stability in ratings is more important than accuracy. Most of our friends in the risk management business argue strongly that accuracy is far more important than stability, yet repeated statements by the major agencies in this regard confirm that stability is more highly prized among the ratings user group. These political considerations consequently (1) degrade the accuracy of the ratings process, (2) imply that ratings move only after significant changes in credit quality, and (3) impose a severe bias against reversing a ratings change. These political biases are in addition to those we discussed above. Yet, there is another political factor at work, and it is best illustrated by the FNMA and FHLMC incidents.
In September 2008, FNMA and FHLMC were in a highly distressed situation, yet both firms were still rated AAA. Quantitative models, however, documented this distress. Indeed, at the time, the short-term Kamakura Risk Information Services (“KRIS”) default probabilities for these government “sponsored” institutions were 20% and 18%, respectively, which put them near the bottom when compared with the other 21,000 institutions covered by the KRIS service.
Why were both rating agencies still rating FNMA and FHLMC AAA? Some would argue that national politics were the reason. A downgrade of the two “government sponsored enterprises” to BB (which the average behavior of the rating agencies over the 1995-2005 period would have dictated) might have, after the fact, been argued to be the “cause” of their failure, and would surely have been condemned by the CEOs of both firms and by senior government officials. Indeed, one very well respected risk expert said, “In the environment in the fall [of 2008] any rating change would have been a self-fulfilling prophecy…Once the ratings are changed it's the beginning of the end (and it doesn't take a very long time) of that institution. In periods of extreme market pessimism and uncertainty the rating agencies are in a no-win situation. Downgrade the ratings and critics say that the agencies over-reacted and caused the firms to go under. Do nothing and the rating agencies were asleep at the switch.” This risk expert went on to argue that the rating agencies feel they have an obligation to avoid Type 1 error (incorrectly rating a safe firm as dangerous, leading to an accidental bankruptcy that would not otherwise have happened). If this risk expert is correct and the rating agencies err on the side of ratings that are “too good” to avoid Type 1 error, that provides a partial explanation for the high ratings listed above and the decline in accuracy that we discuss in this paper.
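The trade-off the risk expert describes can be made concrete: any downgrade rule amounts to a threshold on perceived risk, and moving that threshold trades Type 1 errors against Type 2 errors (failing to flag a firm that later defaults). A minimal sketch with hypothetical firms and default probabilities:

```python
# Sketch: the Type 1 / Type 2 error trade-off, framed as a choice of
# downgrade threshold on a default probability. All data is hypothetical.

def error_rates(probs, defaulted, threshold):
    """Flag a firm 'dangerous' if its default probability >= threshold.
    Type 1 (as the article uses the term): safe firm flagged dangerous.
    Type 2: firm that defaults but was never flagged."""
    type1 = sum(p >= threshold and not d for p, d in zip(probs, defaulted))
    type2 = sum(p < threshold and d for p, d in zip(probs, defaulted))
    return type1, type2

probs     = [0.01, 0.04, 0.08, 0.15, 0.22, 0.30]
defaulted = [False, False, False, True, False, True]

for thr in (0.05, 0.10, 0.25):
    t1, t2 = error_rates(probs, defaulted, thr)
    print(f"threshold {thr:.2f}: Type 1 errors = {t1}, Type 2 errors = {t2}")
```

Raising the threshold drives Type 1 errors down but lets Type 2 errors creep up; an agency determined never to trigger an “accidental” failure is implicitly choosing a very high threshold, and therefore accepting more missed defaults.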
The reluctance to downgrade FNMA’s and FHLMC’s debt is also understandable from a political point of view: no business wants to be singled out for criticism by the government, even if its credit assessment is correct. Clearly, without government aid, the two institutions would have failed, and the decision by the U.S. government to rescue them just prior to default was not a foregone conclusion. Quantitative default probabilities reflected this risk; the credit ratings did not. We also note that it is the very infrequency of rating changes that gives any single change so much impact. In contrast, quantitative default probabilities are updated daily, so the risk assessment of an institution evolves smoothly rather than in large leaps up or down.
With respect to FNMA and FHLMC, some argue that, since bondholders have not yet suffered a delayed or defaulted payment, there has been no “failure.” Unfortunately, this is no solace to the equity holders, who have effectively lost everything, just as they would in a normal bankruptcy. Only about 15% of the public firms in the United States have debt ratings; when there are no debt ratings, a stock price dropping to zero shows the investing public that lenders to such a firm are at risk. Whether or not the lenders recover 100% of principal does not obscure the fact that the firm has failed and its owners (the common shareholders) have lost everything. Although the probability that shareholders lose everything may well differ from the probability that debt holders lose their principal, the two are closely linked.
What to Do?
The outright failure of so many highly rated institutions around the world is an intellectual and commercial problem of major proportions for the rating agencies. The role of the rating agencies in creating and exacerbating the CDO crisis is obvious to all. The contamination of the accuracy of corporate debt ratings is less obvious, but more important. We document this inaccuracy herein. We believe that no serious investor or government regulatory agency can afford to rely on a credit risk assessment whose accuracy has been so seriously called into question. Quantitative models provide a useful and more accurate alternative that is updated daily. Default probabilities have won the day and they are here to stay, so why not use them?
References

Moody’s Investors Service, “Corporate Default and Recovery Rates, 1920-2007,” February 2008.
Standard & Poor’s Corporation, “Default, Transition and Recovery: 2007 Annual Global Default Study and Rating Transitions,” February 2008.
van Deventer, Donald R., Li Li and Xiaoming Wang, “Another Look at Advanced Credit Model Performance Testing to Meet Basel Requirements: How Things Have Changed,” The Basel Handbook: A Guide for Financial Practitioners, second edition, Michael K. Ong, editor, Risk Publications, 2006.