Last week I was asked to give a presentation to a group of senior bank risk managers on the state of the art in “balance sheet optimization.” Today’s blog is part one of a summary of that talk. For bankers, balance sheet optimization has a lot in common with building a software system that can beat any human being in chess. It’s very easy to talk about, and it’s very hard to do. This blog explains why.
Let’s put the task ahead of us in context by keeping in mind the dilemma faced by former Citigroup chairman Charles Prince, pictured at the left, in 2006. We want to design an optimization capability that would have safely steered Citigroup clear of the home price reef on which the good ship Citigroup ran aground. That’s the kind of practical and useful output that should emerge from a portfolio optimization system in banking.
Another implication of the 2007-2010 credit crisis is that the quality of the optimization system is more important than ever. As this humorous pseudo-advertisement illustrates, the cost of failure in the design of an optimization system is very high.
Risk management technology is not a commodity, and the quality of a firm’s risk systems determines whether the firm succeeds or is rendered so worthless that it is given away for free with the purchase of a toaster, the opposite of banking in the 1950s. In fact, one of the most important questions to ask any vendor, particularly a “legacy vendor,” these days is “How many of your clients failed or needed a rescue in the credit crisis?” Clearly, when we design an optimization system for banking, a necessary condition is that it would have helped the firms listed in this advertisement avoid their fate, provided the CEO had been serious about risk management.
In order to do that, it’s helpful to reread one of the quotes from our “great quotations from the credit crisis” blog entry of April 2009. This quote, from Citigroup chief executive Vikram Pandit, nicely summarizes how Citigroup got into trouble: “What went wrong is we had tremendous concentration in the sense we put a lot of our money to work against U.S. real estate,” Pandit said in an interview on PBS’ Charlie Rose show. “We got here by lending money, and putting money to work in the U.S. real estate market, in a size that was probably larger than what we ought to have done on a diversification basis,” as quoted on www.reuters.com on November 25, 2008. This quote has several implications that are critical for designing a portfolio optimization system that would have helped Citigroup avoid its over-exposure to home prices. First, the system would have to measure the mark-to-market sensitivity of Citigroup’s capital to changes in home prices. Second, the system would have to compare this “delta” with respect to home price risk against the firm’s home price risk limit (a limit Citigroup appears not to have had). Finally, the system would have to be capable of dynamically re-orienting the asset strategy of the firm as home price exposure approached that limit. Clearly, this is a big job. It’s easy to talk about and hard to do, just like building a software system that can beat any human being in chess.
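To make the “delta” idea concrete, here is a minimal sketch of a home price delta limit check. Every number here is a hypothetical illustration, not Citigroup data, and the `mark_to_market` function is a toy stand-in: in practice the revaluation would come from a full balance sheet valuation model.

```python
def mark_to_market(hpi_change: float) -> float:
    """Hypothetical mark-to-market value of capital (in $ millions)
    as a function of the fractional change in a home price index.
    A toy linear stand-in for a full balance sheet valuation model."""
    base_capital = 100_000.0   # $100 billion of capital, illustrative
    exposure = 400_000.0       # notional sensitivity to home prices, illustrative
    return base_capital + exposure * hpi_change

def home_price_delta(bump: float = 0.01) -> float:
    """Finite-difference delta of capital with respect to home prices:
    revalue the balance sheet with the index bumped up and down."""
    up = mark_to_market(bump)
    down = mark_to_market(-bump)
    return (up - down) / (2 * bump)

delta = home_price_delta()
limit = 250_000.0  # board-approved home price delta limit, illustrative
if abs(delta) > limit:
    print(f"Home price delta {delta:,.0f} exceeds limit {limit:,.0f}: rebalance")
```

The third requirement, dynamic re-orientation of the asset strategy, is exactly the rebalancing step the limit breach triggers.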
One of the necessary conditions for a bank to take advantage of this potential ability to use balance sheet optimization is a motivated CEO. This chart, taken from the Economist, notes that 9 out of 10 CEOs of the firms with the biggest losses in the credit crisis have lost their jobs. This gives a good CEO a unique motivation to improve risk management, a motivation that has long been lacking.
Another interesting development is the fact that members of the Board of Directors are now motivated like never before, as shown in this quote from our great quotations blog:
“Given our recent economic capital problems here at _______, our Board has expressed their desire to acquire the Risk Management architectures that were deemed unnecessary by our ex-CEO and emphasized their desire to dedicate the necessary funding to this effort.”
Client e-mail, March 18, 2008
When we design the characteristics of a balance sheet optimization capability, it’s reassuring to see that the need to measure and respond to macro-factor movements is not new. My partner Robert A. Jarrow was the lead author on the “Loss Distribution Model” made public by the Federal Deposit Insurance Corporation on December 10, 2003, a full 3 ½ years before the onset of the credit crisis. This study, which was mandated by Congress, was intended to avoid another $1 trillion bail-out of the U.S. financial services sector like the savings and loan crisis of the 1980s and 1990s. The Loss Distribution Model is available by request on www.fdic.gov or from firstname.lastname@example.org. It states in the appendices that the three key macro factors driving correlated default of U.S. banks were home prices, interest rates, and (as a catch-all factor) bank stock prices. Professor Jarrow and his colleagues were able to mark to market the FDIC’s future insurance obligations from the perspective of 2003. One can imagine that the losses projected by this modern macro-factor-driven approach to bank default were much higher than what was expected at the time.
We want to design an optimization system that can overcome even the lamest excuse for failure that one can hear from management and risk managers at the institutions that failed or were rescued: “No person and no risk system could have avoided this outcome.” We beg, as we have often done before, to differ. Even Robert Rubin himself has been quoted as asking “how could we have known” that this decline in home prices was a risk worth worrying about. Speaking personally, I bought a house in Japan in 1988, at the peak of the bubble economy, followed by an amazing education in home price movements: the value of the home declined every year for 16 years in a row. For those who haven’t had the pleasure of this tuition payment, congratulations. In answer to the question “Who could have known this would happen?” we have to start with the 126 million residents of Japan who watched it unfold.
To these 126 million people, we want to add the 2 million readers of the Economist, who were treated to a graph on the home price bubble in the U.S., the U.K. and Australia on June 16, 2005. Finally, let’s add 76 million people in Mexico, where almost 50% of home mortgages defaulted when interest rates neared 100% at the height of the tequila crisis in 1994-1995. Maybe I’m uniquely sensitive to this because my great-great-grandmother came across the border long ago.
In short, the fact that home prices had never previously declined by 40-50 percent in the United States was no reason to assume (as many did) that one’s U.S. passport would guarantee that such a thing could not happen here. A balance sheet optimization system has to be flexible enough to deal with risks that lie outside the analyst’s personal experience.
In mathematical terms, the failure to simulate this kind of possibility was not one of Nassim Taleb’s famous “black swans,” like the Icelandic volcano that erupted this month. Instead, a simple math mistake led many people to underestimate how far home prices could move. What was that mistake? The extremely common assumption that returns, in this case on home prices, are normally distributed and independent from month to month. Thirty minutes with a statistical package and the time series of the Case-Shiller home price index will show an analyst that the assumption of independent returns is simply not consistent with the facts, including the pre-crisis time series of home price returns. To be accurate, a portfolio optimization capability has to be flexible enough to handle a wide range of statistical assumptions about returns on various assets.
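The “30 minutes with a statistical package” exercise can be sketched in a few lines. Since the actual Case-Shiller series is not reproduced here, the sketch uses synthetic data: a persistent (trending) return series of the kind home price data resemble, and an independent series for contrast.

```python
import random

def lag1_autocorrelation(returns):
    """Sample lag-1 autocorrelation of a return series: the correlation
    between each month's return and the following month's return."""
    n = len(returns)
    mean = sum(returns) / n
    var = sum((r - mean) ** 2 for r in returns) / n
    cov = sum((returns[i] - mean) * (returns[i + 1] - mean)
              for i in range(n - 1)) / n
    return cov / var

random.seed(42)

# Persistent returns: each month carries over 80% of the prior month's
# return, the kind of trending behavior home price indexes exhibit.
persistent = [0.0]
for _ in range(240):
    persistent.append(0.8 * persistent[-1] + random.gauss(0, 0.01))

# Independent normal returns, the common (and mistaken) assumption.
independent = [random.gauss(0, 0.01) for _ in range(241)]

print(f"persistent series lag-1 autocorr:  {lag1_autocorrelation(persistent):.2f}")
print(f"independent series lag-1 autocorr: {lag1_autocorrelation(independent):.2f}")
```

Run on 20 years of monthly data, the persistent series shows a lag-1 autocorrelation near 0.8 while the independent series shows one near zero; the same test applied to actual pre-crisis home price returns lands firmly in the first camp.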
The graphic below adds another requirement to our ideal optimization package. It is a simple one: macroeconomic factors like home prices can have a very non-linear impact on the probability of failure. The month-by-month history of the number of bank failures in the United States shows that, when the fall in home prices over a 2-year period exceeds 20%, bank failures increase significantly. This should be obvious to an experienced mortgage lender. If one makes an $80,000 loan on a $100,000 house, a 20% home price decline means that loan principal and collateral values have equalized and the danger of default is sharply higher. To avoid what happened to Citigroup, as highlighted by Vikram Pandit’s quote above, we need to capture these non-linear responses in our optimization capability.
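The loan-to-value arithmetic behind that threshold is worth making explicit, because it is the source of the non-linearity: nothing much happens until the decline reaches the borrower’s equity cushion, and then everything happens at once.

```python
# The $80,000 loan on a $100,000 house from the example above,
# revalued under a range of home price declines.
loan_principal = 80_000.0
home_value = 100_000.0

for decline in (0.0, 0.10, 0.20, 0.30):
    collateral = home_value * (1 - decline)
    ltv = loan_principal / collateral  # loan-to-value ratio
    status = ("collateral below principal" if collateral < loan_principal
              else "loan still covered")
    print(f"{decline:4.0%} decline: collateral ${collateral:,.0f}, "
          f"LTV {ltv:6.1%} ({status})")
```

At exactly a 20% decline the collateral equals the principal (LTV of 100%); any further decline puts the loan underwater, which is why the bank failure count jumps at that point rather than rising smoothly.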
This macro factor sensitivity was explicitly recognized in 2009’s Supervisory Capital Assessment Program (“SCAP”). That program cited home prices, real gross domestic product, and the civilian unemployment rate as key risk factors. A practical and realistic optimization capability should take as a given that there will be risk limits on the deltas with respect to a longer list of risk factors than interest rate risk alone. Indeed, many of the banks now given away with the purchase of a toaster had legacy interest rate risk systems that could not calculate the deltas with respect to any other macro risk factor.
Another factor needed for realism is something that we mentioned in our December 15, 2009 blog on why “core deposits” are not “core.” The graph at the left shows that Countrywide Financial Corporation lost 94% of its commercial paper supply when the credit default swaps quoted on Countrywide jumped above an annual rate of 100 basis points. This means that our optimization capability has to include a rich interaction between the value of the firm’s assets and the probability that it can in fact pay off its liabilities. The December 15, 2009 blog illustrates this further with the example of Northern Rock PLC, which lost 63% of “customer accounts” between June 30, 2007 and December 31, 2007 even though its financial statements on June 30 showed no losses and even though the Bank of England had announced its explicit support for the firm.
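A toy version of that asset-quality/funding interaction can be written as a threshold function: short-term funding is stable until the market’s default-risk signal (here a CDS spread in basis points) crosses a trigger level, then most of it evaporates. The numbers are illustrative, loosely patterned on the Countrywide figures cited above, not taken from any actual funding model.

```python
def available_funding(base_funding: float, cds_spread_bp: float,
                      threshold_bp: float = 100.0,
                      runoff: float = 0.94) -> float:
    """Commercial paper capacity as a function of the firm's CDS spread.
    Below the threshold, funding is stable; above it, the runoff
    fraction disappears, as it did for Countrywide."""
    if cds_spread_bp <= threshold_bp:
        return base_funding
    return base_funding * (1 - runoff)

base = 10_000.0  # $10 billion of commercial paper capacity, illustrative
for spread in (50, 100, 150):
    print(f"CDS {spread:3d} bp -> CP capacity "
          f"${available_funding(base, spread):,.0f}")
```

An optimization capability that treats funding as fixed while it reshuffles assets will miss exactly this feedback loop: deteriorating assets widen the firm’s own credit spread, which in turn removes the liabilities funding those assets.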
With this introduction of what we expect our balance sheet optimization system to do, we turn in the next part of this series to whether or not this can be done in practice.
Donald R. van Deventer
April 26, 2010