
Improving on the Fed’s Supervisory Capital Assessment Program, Step by Step

04/28/2009 08:27 AM

On April 24, 2009, the Board of Governors of the Federal Reserve System released a 21-page description of its stress tests for the top 19 banks in the U.S.  This program, “The Supervisory Capital Assessment Program: Design and Implementation,” is a large step forward from the Fed’s 1993 retreat from tying bank capital levels to stress tests of a 200 basis point shift in yield curves.  Still, the SCAP program meets neither the standards of the FDIC’s loss distribution model (published December 10, 2003 by Kamakura’s Robert A. Jarrow and four co-authors) nor the standards of our April 27, 2009 post on a pass-fail test for bank CEOs and Board members.  Given the tight time deadlines for SCAP, this is not surprising, and we have a lot of sympathy for the compromises that were necessary.  This post discusses how to move forward from here and how to make SCAP consistent with current and emerging best practice in risk management.

The SCAP as outlined in the April 24, 2009 document has a number of key features that we summarize briefly here:

1.  The analysis is generally focused on GAAP accounting standards, not mark-to-market or mark-to-model valuation

2.  The analysis is a cross-sectional one in that the regulatory teams have been split by bank product category, with significant differences in the analytical approaches taken by product line

3.  Only 2 scenarios were analyzed

4.  The scenarios spanned only 2009 and 2010

5.  Only 3 macro factors were specified as being key to the analysis

6.  The provision of data by banks to regulators was by necessity ad hoc and improvisational

How does this approach compare to the FDIC Loss Distribution Model and the April 27 pass-fail test for bank CEOs and directors?

Before answering that question, let’s turn the clock back to 1979, as the U.S. was plunging down a slippery slope of high interest rates toward another trillion-dollar bailout, the savings and loan crisis.  What if a savings and loan at the time had funded a balance sheet of 30-year fixed-rate mortgages with 2-year certificates of deposit?  Would the SCAP process outlined in the April 24, 2009 document have detected the capital needed to save the S&L?  No, not even close.  One might say, “this is a credit crisis, not an interest rate risk crisis, and the two are not comparable.”  A smarter comment would go like this: “The S&L crisis was a credit crisis triggered by changes in interest rates, and this crisis is a credit crisis triggered by changes in home prices.  The only difference is which macro factor is driving the crisis.  The process for measuring safety, soundness, and capital needs should be consistent.”  Clearly there are hundreds of smart financial people at the Fed and the other supervisory agencies who know this.  They need our support in moving SCAP forward so that it provides a general supervisory framework relevant to both the current crisis and the next one.  In that light, we offer a number of suggestions.

Suggestion 1: Expand SCAP to include all key macro factors, not just three

The 2003 FDIC Loss Distribution Model identified home prices, interest rates, and bank stock prices (as a catchall) as the three key macro factors driving correlated default of commercial banks in the U.S.  Kamakura’s March 23, 2009 press release described how a list of 40 macro factors served as candidate variables in deriving 24,000 separate statistical functions that link macro factors to the default probabilities of 24,000 public firms in 30 countries.  See www.kamakuraco.com for that press release, and see Kamakura’s blog post of March 19, 2009 on “reduced reduced form models” for background.  This change would address the first question for bank CEOs and directors in our April 27 pass-fail test blog entry.  “At least we got three right,” one prominent U.S. bank regulator said to me last month after the Kamakura press release came out.
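To make the idea concrete, here is a minimal Python sketch of the kind of statistical linking function involved, assuming a simple logistic form; the factor choices, coefficients, and intercept are hypothetical illustrations, not Kamakura’s fitted models:

```python
import numpy as np

def default_probability(macro_factors, coefficients, intercept):
    """Logistic link from macro factor levels to a firm's default probability.

    In practice one such function would be fitted per firm, with the
    candidate macro factors and coefficients estimated from history.
    """
    z = intercept + np.dot(coefficients, macro_factors)
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical firm: default risk rises as home prices fall and as
# unemployment and interest rates rise (all factors in standardized units).
factors = np.array([-2.0, 1.5, 0.5])   # home prices, unemployment, rate shift
betas = np.array([-0.8, 0.6, 0.3])     # hypothetical fitted sensitivities
print(default_probability(factors, betas, intercept=-4.0))  # about 0.21
```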

Suggestion 2: Expand the SCAP time horizon from 2 years to 30 years

Clearly the political pressures on U.S. regulators led them to confine the requests they made of banks to two years forward.  We understand that, and we recognize it was all that was possible given the time constraints.  Still, that means the S&L crisis in the example above is undetectable by SCAP.  That needs to be fixed.  Since 30-year mortgages are common, and bank trust preferred securities and related CDOs are typically 30 to 35 years in original maturity, best practice requires a time horizon of at least 30 years for the analysis.  Again, the Fed staff knows this.

Suggestion 3: Do a full Monte Carlo simulation on all macro factors over the full time horizon, not just two scenarios over two years

This begins to address questions 2 and 3 in yesterday’s pass-fail test for bank CEOs and directors: what is the probability of failure for the institution over a long time horizon, measured both with internal data and with external data only?  With only 2 scenarios, there are only 3 probabilities of failure that can be derived: 0% (failure in neither scenario), 50% (failure in one scenario), or 100% (failure in both scenarios).  U.S. taxpayers, regulators, and bank management need more granularity in the risk assessment than 2 scenarios provide.
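A minimal Monte Carlo sketch in Python shows the difference in granularity; the single macro factor, its random-walk dynamics, and the failure rule are all hypothetical stand-ins for a full multi-factor model:

```python
import numpy as np

rng = np.random.default_rng(42)

N_SCENARIOS = 10_000   # versus 2 scenarios in SCAP
N_YEARS = 30           # versus 2 years in SCAP

# Hypothetical macro factor: annual home price changes as a random walk.
annual_changes = rng.normal(loc=0.0, scale=0.08, size=(N_SCENARIOS, N_YEARS))
cumulative_change = annual_changes.cumsum(axis=1)

# Hypothetical failure rule: the bank fails if home prices ever fall
# more than 30% below their starting level during the horizon.
failed = cumulative_change.min(axis=1) < -0.30

print(f"Estimated probability of failure: {failed.mean():.2%}")
```

With 10,000 scenarios, the estimated failure probability can take any value on a fine grid, rather than only 0%, 50%, or 100%.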

Suggestion 4: Emphasize both mark-to-market/mark-to-model valuation and accounting projections, with the heaviest weight on valuation

Recognition of the S&L crisis was delayed because of a regulatory and management focus on financial accounting, even in the face of a fixed-rate mortgage portfolio destined to bleed red ink for 30 years.  Let’s not repeat that mistake.  It took the Fed staff 20 years to fix that misplaced emphasis, and, even then, the 1993 interest rate risk stress tests were defeated politically.  Let’s not forget what it cost $1 trillion to learn in the 1980s and 1990s.  Accounting rules are designed to postpone loss recognition.  The individual mortgage loans held by the S&Ls, by definition, had no observable market prices then, but that was no reason to ignore the fact that their net present value (“mark to model”) was well “underwater” at the time.  The same is true now.  A lack of observable market prices is an excuse to do nothing, not a reason.  The taxpayers, and the Fed staff making this argument internally, deserve better.
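The mark-to-model arithmetic in the S&L example is simple enough to sketch; the 8% mortgage coupon and 14% funding rate below are illustrative of the era, not figures from any particular institution:

```python
def level_payment(principal, annual_rate, years):
    """Monthly payment on a level-payment fixed-rate mortgage."""
    r, n = annual_rate / 12.0, years * 12
    return principal * r / (1.0 - (1.0 + r) ** -n)

def mark_to_model_value(principal, coupon_rate, funding_rate, years):
    """Net present value of the mortgage cash flows at today's funding rate."""
    pmt = level_payment(principal, coupon_rate, years)
    r, n = funding_rate / 12.0, years * 12
    return pmt * (1.0 - (1.0 + r) ** -n) / r

# An 8% 30-year mortgage, funded short term when rates have risen to 14%:
value = mark_to_model_value(100.0, 0.08, 0.14, 30)
print(f"Mark-to-model value per 100 of principal: {value:.1f}")  # roughly 62
```

Accrual accounting would still carry the mortgage at 100; the economics say roughly 62.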

Suggestion 5: Set up a routine electronic data feed from banks to regulators and a routine risk processing system with inputs and outputs fully transparent to regulators and the banks being regulated

This is what the FDIC does in its loss distribution model using aggregated data.  It’s time to move the analysis to a more granular, higher volume processing platform like Kamakura Risk Manager, a true enterprise-wide system now processing more than 92 million transactions daily at one of the world’s largest banks.  Kamakura suggested such an implementation to a major overseas bank regulatory agency in the mid-1990s, but the regulator demurred on the grounds that their banking system had no risk (!).

Suggestion 6: Determine capital needs with a stress test of bank value with respect to shifts in each macro factor, just like delta hedging in Black-Scholes and the proposed 1993 interest rate stress tests

This is pure motherhood and apple pie to the economists at the bank regulatory agencies: it’s straightforward best practice, and it completes question 4 of the bank CEO and director pass-fail test from our April 27 blog.  With a full Monte Carlo simulation, it’s easy.  With only 2 scenarios, it’s a completely arbitrary exercise.  Hundreds of people at the Fed will agree with this.  When time permits, let’s do it.
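A minimal sketch of the calculation, assuming a finite-difference bump of each macro factor around a toy valuation function that stands in for a full Monte Carlo valuation engine:

```python
def macro_factor_deltas(value_fn, base_factors, bump=0.01):
    """Finite-difference sensitivity of bank value to each macro factor,
    analogous to the delta of an option in Black-Scholes."""
    base_value = value_fn(base_factors)
    deltas = {}
    for name, level in base_factors.items():
        bumped = dict(base_factors)
        bumped[name] = level + bump
        deltas[name] = (value_fn(bumped) - base_value) / bump
    return deltas

# Hypothetical stand-in for the simulated value of the bank as a
# function of macro factor levels.
def toy_bank_value(f):
    return 100.0 + 40.0 * f["home_prices"] - 25.0 * f["rates"] - 10.0 * f["unemployment"]

base = {"home_prices": 0.0, "rates": 0.0, "unemployment": 0.0}
print(macro_factor_deltas(toy_bank_value, base))
```

Capital needs can then be set against the worst plausible combined move in the factors, just as a trader sizes a hedge from option deltas.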

Suggestion 7: Replace the corporate exposure simulation, which ignores macro factors and relies on legacy technology that failed in the credit crisis, with best practice simulation

The Fed document’s description of corporate risk analysis uses the trademarked rating agency phrase “expected default frequency” and talks about simulation around a “central tendency” of defaults.  See our April 19, 2009 post “Default Probability Modeling for Credit Portfolio Management: A Menu of Alternatives” for a best practice ranking of methodologies, many of which are far superior to holding legacy default probabilities constant and simulating default/no default.  The key is that macro factors drive default probabilities up and down, and modern reduced form default probabilities provide the best framework for such a simulation.  For two recent articles on why the reduced form approach is superior to the legacy Merton approach, see Campbell et al. (Journal of Finance, 2008) and Bharath and Shumway (Review of Financial Studies, 2008).
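To see why holding default probabilities constant understates tail risk, here is a small sketch contrasting the two approaches; the portfolio size, 2% average default rate, and single-factor loading are all hypothetical:

```python
import numpy as np

rng = np.random.default_rng(7)
n_firms, n_scenarios = 1_000, 5_000

# Legacy approach: every scenario uses the same constant 2% default probability.
constant_pd = 0.02
losses_constant = rng.binomial(n_firms, constant_pd, size=n_scenarios)

# Reduced form sketch: the same starting default rate, but pushed up and down
# in each scenario by a common macro factor (e.g., a home price shock).
macro = rng.normal(size=n_scenarios)
logit = np.log(constant_pd / (1 - constant_pd)) - 1.0 * macro
driven_pd = 1.0 / (1.0 + np.exp(-logit))
losses_driven = rng.binomial(n_firms, driven_pd)

print("99th percentile defaults, constant PD:    ", np.percentile(losses_constant, 99))
print("99th percentile defaults, macro-driven PD:", np.percentile(losses_driven, 99))
```

The macro-driven version produces a far fatter loss tail from a similar average default rate, which is exactly the behavior the credit crisis exhibited.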

The SCAP program is a big, positive step forward for the U.S. bank regulators after the defeat of the 1993 interest rate risk proposals.  Those proposals were too complex for the 10,000 U.S. banks that existed at the time.  The current SCAP program is appropriately focused at first on only 19 banks.  It needs some improvement to reach best practice, a fact well known to most of the staff at the regulatory agencies.  Let’s hope we get there in a steady, step-by-step manner.  The taxpayers deserve to see this done right.

 

ABOUT THE AUTHOR

Donald R. Van Deventer, Ph.D.

Don founded Kamakura Corporation in April 1990 and currently serves as Co-Chair, Center for Applied Quantitative Finance, Risk Research and Quantitative Solutions at SAS. Don’s focus at SAS is quantitative finance, credit risk, asset and liability management, and portfolio management for the most sophisticated financial services firms in the world.
