As of October 25, 2012, the Federal Deposit Insurance Corporation provided deposit insurance for 7,184 banks and savings institutions with total assets of $14.1 trillion and total deposits of $10.4 trillion (source: FDIC). At the same time, publicly listed firms in the United States held “cash” assets worth $10.2 billion on June 30, 2012 (source: Compustat and Kamakura Corporation), much of it invested in non-interest bearing deposits at financial institutions. Corporate treasurers will have to be especially vigilant about assessing the credit risk of financial institutions, since the blanket deposit insurance coverage on all non-interest bearing deposits expires on December 31, 2012 under provisions of the Dodd-Frank Act. This blog explains why modern reduced form default probabilities are the only accurate and readily available tool for assessing financial institution risk on a daily basis.
When corporate treasurers think about assessing the credit risk of the financial institutions at which they have deposits, three possible tools are at the top of the list:
- Legacy credit ratings
- Credit default swap quotes
- Modern “reduced form” quantitative default probabilities
In reality, credit default swap trading is nearly non-existent on U.S. financial institutions. We showed (van Deventer, October 5, 2012) that in the two years ended June 30, 2012, there were actual CDS trades on only 13 legal entities and 11 bank “families” in the United States, with an average of only 3 trades per day in which at least one party to the trade was a true “non-dealer” end user.
The credit default swap market does not have the volume, transparency, or depth of competition to be a reliable guide to bank credit risk.
That leaves legacy credit ratings and modern quantitative default probabilities like those provided by Kamakura Risk Information Services as the only realistic tools for bank credit risk assessment. Information on KRIS default probabilities for publicly listed bank holding companies is available here:
Information on KRIS default probabilities for unlisted bank subsidiaries of a bank holding company is available at this link:
KRIS default probabilities have become the best practice tool for bank credit assessment. Prof. Robert A. Jarrow, Managing Director of Research at Kamakura, was the lead author of the FDIC’s 2003 Loss Distribution Model, which is based on the reduced form approach to credit risk assessment. The Office of the Comptroller of the Currency announced on February 18, 2012 that it had renewed its contract for default probabilities from Kamakura Corporation for another five years:
Finally, perhaps the leading central banking organization in Europe with heavy responsibilities for banking safety and soundness has been a subscriber to the KRIS default probability service for a number of years.
Why have organizations with responsibility for supervising the riskiness of financial institutions turned so heavily toward modern default probabilities like the KRIS service? The answer is best summarized by the following chart that compares the accuracy and granularity of KRIS default probabilities with legacy credit ratings:
Rating agencies call their legacy credit assessment tool “ratings.” The Kamakura Risk Information Services credit assessment tool is a set of default probabilities with explicit maturities, running from one month to ten years, at all major time horizons. Ratings, by contrast, are not associated with any explicit term or maturity. The KRIS default probability service has 10,000 grades, running from a default probability of 0.00% (the best grade) to 100.00% (the worst grade) in one basis point increments. Ratings have only 21 grades, running from AAA (the best grade) to D (the worst grade), with no default probability explicitly associated with any rating and no term structure of ratings by maturity. KRIS default probabilities are updated every business day. As of October 29, 2012, the median time since the last rating change for 2,265 rated companies was 815 days, or about 2 years and 3 months.
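The granularity and the term structure described above can be sketched in a few lines of Python. This is a hypothetical illustration of a basis-point grade scale and of a default probability term structure under a constant hazard rate; it is not the KRIS methodology, and the 2% hazard rate is an arbitrary example value.

```python
import math

# Hypothetical sketch of a one-basis-point default probability scale
# (0.00% to 100.00% in 10,000 steps). Not the KRIS methodology.

def pd_to_grade(pd: float) -> int:
    """Map a default probability in [0, 1] to a grade of 0..10000 basis points."""
    if not 0.0 <= pd <= 1.0:
        raise ValueError("default probability must lie between 0 and 1")
    return round(pd * 10_000)

def cumulative_pd(annual_hazard: float, years: float) -> float:
    """Cumulative default probability at a horizon, assuming a constant hazard rate."""
    return 1.0 - math.exp(-annual_hazard * years)

# A term structure of default probabilities from one month to ten years,
# expressed in basis point grades, for an illustrative 2% annual hazard:
for horizon in (1 / 12, 1.0, 5.0, 10.0):
    print(f"{horizon:5.2f} years -> grade {pd_to_grade(cumulative_pd(0.02, horizon))}")
```

The point of the sketch is that a term structure like this carries far more information than a single letter grade with no maturity attached.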
The KRIS default probabilities are based on formal statistical analysis; “ratings,” by contrast, are described as “expert judgment” credit assessment tools. We discuss 50 years of research on the differences between the two approaches later in this blog. The KRIS service offers full disclosure through a technical guide that includes the mathematical formulas, inputs, and coefficients used to determine the default probabilities; there is no such disclosure for ratings. The Kamakura Risk Information Services default probability service uses an “investor pays” business model. Legacy credit ratings, by contrast, use an “issuer pays” business model with a huge conflict of interest, fully documented by the U.S. Senate (see below).
There is an extensive set of accuracy tests for KRIS default probabilities (see van Deventer, Imai and Mesler, Advanced Financial Risk Management, second edition, 2013, chapter 16 for examples). Rating agencies report on their own performance, but in a biased way that omits their own errors; for documentation of this phenomenon, see van Deventer (January 31, 2012). When one corrects for inappropriately omitted corporate failures, the actual annual default rate of firms rated AAA was 4.55%, while the actual default rate of BB- rated companies was 0.34%, over the period from 2007 to 2009 (source: van Deventer, January 31, 2012). Default rates from that paper are reproduced here:
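The arithmetic behind this kind of bias is easy to illustrate. The counts below are hypothetical, not the figures from the cited paper; the point is only that dropping failed firms from a rated cohort mechanically understates the observed default rate:

```python
# Hypothetical counts, not the data from van Deventer (January 31, 2012):
# excluding defaulted firms from a rated cohort biases the default rate down.

def default_rate(defaults: int, firms_in_cohort: int) -> float:
    """Observed default rate for a cohort over the measurement period."""
    return defaults / firms_in_cohort

# Suppose 4 of 88 firms in a rated cohort actually failed over the period...
full_sample_rate = default_rate(4, 88)

# ...but 3 of the 4 failures are excluded (e.g., treated as "rating
# withdrawn" before default), leaving 1 reported failure among 85 firms.
censored_rate = default_rate(1, 85)

print(f"full sample: {full_sample_rate:.2%}   censored: {censored_rate:.2%}")
```

With these illustrative numbers the reported rate is roughly a quarter of the true rate, which is why correcting for omitted failures can change the ranking of rating grades so dramatically.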
The credit model tests offered by Kamakura Risk Information Services come with a full right of audit by financial institutions regulators and KRIS clients, subject only to a confidentiality agreement and the requirement that the tests be conducted in Kamakura offices. Since the launch of the KRIS service in 2002, no errors in Kamakura test results have been detected by this audit process.
By contrast, the lack of accuracy in agency ratings has been extensively documented. For a recent example, see Hilscher and Wilson’s 2012 paper “Credit Ratings and Credit Risk,” which is available at this link:
The authors conclude: “We find that credit ratings are dominated as predictors of corporate failure by a simple model based on publicly available financial information (failure score), indicating that ratings are poor measures of raw default probability.” Van Deventer, Imai and Mesler (2013, chapter 18) summarize similar results documented in the Kamakura Risk Information Services Technical Guides, versions 3, 4.1 and 5, beginning in 2003.
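A “failure score” of the kind the authors describe can be sketched as a logistic model on publicly available financial ratios. The variable names and coefficients below are hypothetical placeholders chosen only to illustrate the functional form; they are not the published Hilscher and Wilson model.

```python
import math

# A minimal sketch of a logistic "failure score" built from publicly
# available financial ratios. All coefficients are hypothetical.

def failure_score(net_income_to_assets: float, leverage: float,
                  excess_return: float, volatility: float,
                  b0: float = -8.0, b1: float = -20.0, b2: float = 6.0,
                  b3: float = -7.0, b4: float = 1.5) -> float:
    """Return a one-period default probability from a logistic model."""
    z = (b0 + b1 * net_income_to_assets + b2 * leverage
         + b3 * excess_return + b4 * volatility)
    return 1.0 / (1.0 + math.exp(-z))

# A profitable, low-leverage firm scores far lower than a distressed one:
healthy = failure_score(0.05, 0.3, 0.10, 0.2)
distressed = failure_score(-0.10, 0.9, -0.40, 0.8)
print(healthy < distressed)
```

The appeal of this form is that the coefficients can be fit by maximum likelihood on historical failure data, so the score's weights are disciplined by evidence rather than by committee judgment.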
These results surprise many traditional credit analysts who were not involved in the collateralized debt obligation or residential mortgage-backed securities markets during the credit crisis of 2006 to 2011. From a social science perspective, however, the superiority of state-of-the-art statistical methods over “expert judgment” is well known. Soneji and King (“Statistical Security for Social Security,” Harvard University working paper, 2011, page 2) summarize the reasons:
“This is especially advantageous because, as is well known, informal forecasts may be intuitively appealing (Morera and Dawes, 2006), but they suffer from humans’ well-known poor abilities to judge and weight information informally (Dawes, Faust and Meehl, 1989). Indeed, a large literature covering diverse fields extending over fifty years has shown that formal statistical procedures regularly outperform informal intuition-based approaches of even the wisest and most well-trained experts (Meehl, 1954; Grove, 2005). (There are now even popular books on the subject, such as Ayres 2008.)”
The superiority of Kamakura Risk Information Services default probabilities and statistical methods in general over credit ratings was particularly stark during the 2006 to 2011 credit crisis. This experience alone should motivate corporate treasurers to add statistical default probabilities to their credit assessment tools when the blanket provision of FDIC deposit insurance for non-interest bearing deposits expires on December 31, 2012.
We close with the key conclusions of the Levin report (United States Senate, 2011) regarding the performance of the rating agencies and how the conclusions of the report were derived:
“For more than one year, the Subcommittee conducted an in-depth investigation of the role of credit rating agencies in the financial crisis, using as case histories Moody’s and S&P. The Subcommittee subpoenaed and reviewed hundreds of thousands of documents from both companies including reports, analyses, memoranda, correspondence, and email, as well as documents from a number of financial institutions that obtained ratings for RMBS and CDO securities. The Subcommittee also collected and reviewed documents from the SEC and reports produced by academics and government agencies on credit rating issues. In addition, the Subcommittee conducted nearly two dozen interviews with current and former Moody’s and S&P executives, managers, and analysts, and consulted with credit rating experts from the SEC, Federal Reserve, academia, and industry. On April 23, 2010, the Subcommittee held a hearing and released 100 hearing exhibits. In connection with the hearing, the Subcommittee released a joint memorandum from Chairman Levin and Ranking Member Coburn summarizing the investigation into the credit rating agencies and the problems with the credit ratings assigned to RMBS and CDO securities.
“The memorandum contained joint findings regarding the role of the credit rating agencies in the Moody’s and S&P case histories, which this Report reaffirms. The findings of fact are as follows.
“1. Inaccurate Rating Models. From 2004 to 2007, Moody’s and S&P used credit rating models with data that was inadequate to predict how high risk residential mortgages, such as subprime, interest only, and option adjustable rate mortgages, would perform.
“2. Competitive Pressures. Competitive pressures, including the drive for market share and need to accommodate investment bankers bringing in business, affected the credit ratings issued by Moody’s and S&P.
“3. Failure to Re-evaluate. By 2006, Moody’s and S&P knew their ratings of RMBS and CDOs were inaccurate, revised their rating models to produce more accurate ratings, but then failed to use the revised model to re-evaluate existing RMBS and CDO securities, delaying thousands of rating downgrades and allowing those securities to carry inflated ratings that could mislead investors.
“4. Failure to Factor in Fraud, Laxity, or Housing Bubble. From 2004 to 2007, Moody’s and S&P knew of increased credit risks due to mortgage fraud, lax underwriting standards, and unsustainable housing price appreciation, but failed adequately to incorporate those factors into their credit rating models.
“5. Inadequate Resources. Despite record profits from 2004 to 2007, Moody’s and S&P failed to assign sufficient resources to adequately rate new products and test the accuracy of existing ratings.
“6. Mass Downgrades Shocked Market. Mass downgrades by Moody’s and S&P, including downgrades of hundreds of subprime RMBS over a few days in July 2007, downgrades by Moody’s of CDOs in October 2007, and actions taken (including downgrading and placing securities on credit watch with negative implications) by S&P on over 6,300 RMBS and 1,900 CDOs on one day in January 2008, shocked the financial markets, helped cause the collapse of the subprime secondary market, triggered sales of assets that had lost investment grade status, and damaged holdings of financial firms worldwide, contributing to the financial crisis.
“7. Failed Ratings. Moody’s and S&P each rated more than 10,000 RMBS securities from 2006 to 2007, downgraded a substantial number within a year, and, by 2010, had downgraded many AAA ratings to junk status.
“8. Statutory Bar. The SEC is barred by statute from conducting needed oversight into the substance, procedures, and methodologies of the credit rating models.
“9. Legal Pressure for AAA Ratings. Legal requirements that some regulated entities, such as banks, broker-dealers, insurance companies, pension funds, and others, hold assets with AAA or investment grade credit ratings, created pressure on credit rating agencies to issue inflated ratings making assets eligible for purchase by those entities.”
For more information on the use of state of the art default probabilities, please contact us at firstname.lastname@example.org.
Ayres, Ian. 2008. Super Crunchers: Why Thinking-by-Numbers Is the New Way to Be Smart. Bantam.
Dawes, Robyn M., David Faust and Paul E. Meehl. 1989. “Clinical Versus Actuarial Judgment.” Science 243(4899, March):1668–1674.
Grove, William M. 2005. “Clinical Versus Statistical Prediction: The Contribution of Paul E. Meehl.” Journal of Clinical Psychology 61(10):1233–1243.
Hilscher, Jens and Mungo Wilson, “Credit Ratings and Credit Risk,” Brandeis University working paper, January 2012.
Meehl, Paul E. 1954. Clinical Versus Statistical Prediction: A Theoretical Analysis and a Review of the Evidence. Minneapolis: University of Minnesota Press.
Morera, Osvaldo and Robyn Dawes. 2006. “Clinical and Statistical Prediction After 50 Years: A Dedication to Paul Meehl.” Journal of Behavioral Decision Making 19:409–412.
Soneji, Samir and Gary King, “Statistical Security for Social Security,” Harvard University working paper, January 30, 2011.
United States Senate Permanent Subcommittee on Investigations (Carl Levin, Chairman), Wall Street and the Financial Crisis: Anatomy of a Financial Collapse, Majority and Minority Staff Report, April 13, 2011.
van Deventer, Donald R. “The Dangers of Using Rating Agency Default Rates in Credit Risk Management,” Kamakura blog, www.kamakuraco.com, January 31, 2012.
van Deventer, Donald R. “Credit Default Swaps and Deposit Insurance,” Kamakura blog, www.kamakuraco.com, October 5, 2012. Redistributed by Riskcenter.com on October 5, 2012.
van Deventer, Donald R., Kenji Imai and Mark Mesler, Advanced Financial Risk Management, 2nd Edition, John Wiley & Sons, forthcoming in 2013.
Donald R. van Deventer
Honolulu, November 7, 2012
© Donald R. van Deventer, 2012. All rights reserved