As explained in part 1 of this blog, last week I was asked to give a presentation to a group of senior bank risk managers on the state of the art in “balance sheet optimization.” Optimization in banking is a lot like building a software program that can beat any human at chess: it is very easy to talk about and very hard to do. In this installment of our optimization series, we start with a general framework for optimizing the balance sheet on a fully integrated risk basis, with interest rate risk, market risk, and credit risk in a common framework. We then look at how optimization is done in the funds management business. Finally, we turn to the types of optimization that are already common in high-quality risk management systems.
In part 1 of this blog, we described how a long list of macro-economic factors drives the value of assets that a financial institution might own. The value of these assets, in turn, determines the willingness of potential liability suppliers to lend money to the financial institution.
The diagram above shows how macro factors like oil prices, correlated corporate defaults, interest rates, commercial real estate prices, and home prices combine to influence asset values. When their impact is negative and asset values decline, liability holders run away as shown in these 2007 (Northern Rock) and 2008 (IndyMac) photos. This is true even with well-publicized government support of troubled institutions, like that provided by the Federal Deposit Insurance Corporation in the USA and, in the case of Northern Rock, by the Bank of England. Our optimization needs to be done in light of this risk transmission process on a multiperiod basis.
The next graphic shows in a simple way that the traditional “risk silos” in financial services no longer have a reason to exist from a systems or financial mathematics point of view. Indeed, corporate politics is probably the only reason that risk management has not been fully integrated at most institutions.
There are really only two dimensions in which the major silos of risk management differ. The first is the periodicity of the analysis: the general case is a multiperiod analysis in which the length of the periods is designated by the analyst, and the special case is a single-period analysis. The other dimension in which risk silos differ is whether or not “insurance events,” like “default/no default” or “pay on an insurance policy/don’t pay,” are turned “on.” In order to deal realistically with the risk framework outlined in the prior graphic, we need to be in the bottom right-hand corner of this square. In this case, we are generating valuations, cash flows, and financial accruals on a multiperiod, default-adjusted basis.
If we can do that general case, it is easy to simplify by turning default “off.” This is traditional asset and liability management, which typically ignores default. This approach, unfortunately, was the primary risk measure of firms like IndyMac, Countrywide, Washington Mutual, FNMA, and many other firms that failed or were rescued in the 2007-2010 credit crisis. Traditional ALM is necessary but not sufficient to meet our optimization goals.
If we can do the lower right hand square and we simplify to one period, we get traditional credit-adjusted value at risk or a variation on the copula method that is widely blamed for erroneous valuations of collateralized debt obligations. Again, this calculation is necessary but not sufficient to meet the goals of our optimization.
Finally, if we simplify in both dimensions, we turn default off and go to single-period analysis. This is the traditional approach to market risk. Given the simplicity of the framework, it is no surprise that firms whose primary risk measures are founded on this calculation cannot possibly be measuring risk accurately. Bear Stearns, Lehman Brothers, and Merrill Lynch were all using VaR measures in risk management, yet each firm either failed or was rescued. Again, the calculation is necessary but not sufficient for a sophisticated optimization.
We can briefly summarize the features we want in our optimization framework:
- Many random macro factors drive risk, not just interest rates
- Asset value declines lead to deposit run-off
- Multiperiod analysis
- Both market value-based and financial accounting-based simulations are performed
- Insurance “events” like defaults are turned “on”
- Recoveries depend on macro factors (e.g., home prices)
- Spreads depend on macro factors as well
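To make these features concrete, here is a minimal Python sketch (not Kamakura Risk Manager code) of a multiperiod, default-adjusted simulation in which a single illustrative macro factor, a home price index, drives both default rates and recoveries. Every function and parameter below is a made-up assumption for exposition only.

```python
import numpy as np

rng = np.random.default_rng(0)

N_SCENARIOS, N_PERIODS = 1000, 40   # e.g., 40 quarters
N_LOANS = 100
BALANCE = 1.0                       # par balance per loan

# Illustrative macro factor: a home price index simulated as a random walk
home_price = np.ones((N_SCENARIOS, N_PERIODS + 1))
shocks = rng.normal(0.005, 0.03, size=(N_SCENARIOS, N_PERIODS))
home_price[:, 1:] = np.exp(np.cumsum(shocks, axis=1))

def default_prob(hp_change):
    """Hypothetical link: default risk rises as home prices fall."""
    return 1.0 / (1.0 + np.exp(4.0 + 8.0 * hp_change))

def recovery_rate(hp_level):
    """Hypothetical link: recoveries depend on the home price level."""
    return np.clip(0.4 + 0.4 * (hp_level - 1.0), 0.0, 1.0)

# Multiperiod, default-adjusted cash flows for a pool of identical loans
alive = np.full((N_SCENARIOS,), float(N_LOANS))
cash_flow = np.zeros((N_SCENARIOS, N_PERIODS))
for t in range(N_PERIODS):
    hp_chg = home_price[:, t + 1] / home_price[:, t] - 1.0
    defaults = alive * default_prob(hp_chg)
    survivors = alive - defaults
    coupon = 0.015 * BALANCE * survivors                          # accrual income
    recoveries = defaults * BALANCE * recovery_rate(home_price[:, t + 1])
    cash_flow[:, t] = coupon + recoveries
    alive = survivors

print("Mean cumulative cash flow:", cash_flow.sum(axis=1).mean())
```

A production framework would add many macro factors, deposit run-off linked to asset values, and both mark-to-market and accrual accounting, but the multiperiod skeleton is the same.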
One of the questions I am commonly asked by clients is whether Kamakura Risk Manager does optimization. After answering “yes,” I then ask the client what they want to optimize. Continuing our chess software analogy, for the chess system it is easy to state the objective function we want to maximize: “I want to maximize the probability of winning every chess match against a human being.” For a complex financial services firm, the choice of what to optimize is much less obvious. There are many alternatives, and the selection from this menu of objective functions has a big influence on the difficulty and accuracy of the optimization itself. Here are just three of the many alternatives that a group of risk managers might suggest:
- Maximize cumulative net income over ten years subject to risk constraints never being violated
- Maximize the market value of the banking franchise over ten years subject to risk constraints never being violated
- Maximize market value of the banking franchise subject to default probability at any point in time being x% or less
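Stated schematically, the first and third of these can be written as constrained optimization problems. The symbols below (net income NI_t, franchise value V_T, risk measures R_t, own default probability PD_t) are illustrative shorthand for this blog, not a formal specification:

```latex
% Schematic only: NI_t is net income in period t, V_T is franchise value at the
% ten-year horizon T, R_t is a vector of risk measures with limits \bar{R}, and
% PD_t is the institution's own default probability.
\max_{\text{strategy}} \; E\!\left[\sum_{t=1}^{T} NI_t\right]
   \quad \text{subject to } R_t \le \bar{R} \text{ for all } t
\qquad \text{versus} \qquad
\max_{\text{strategy}} \; E\!\left[V_T\right]
   \quad \text{subject to } PD_t \le x\% \text{ for all } t
```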
Which one should we choose? None of these alternatives would be unanimously selected. Rather than reinventing the optimization “wheel,” let’s give ourselves a head start by looking at the fund management business where optimization has been common for four decades.
In the fund management business, most observers would say the optimization is designed to maximize “alpha” in risk-adjusted performance measurement. Risk-adjusted performance measurement is done by defining a benchmark index that one seeks to outperform, typically while taking on risk that is no greater than that of the benchmark itself. This is a very simple and practical objective function to optimize. The problem in banking, and in most other wings of the financial services business, is that there is much confusion over what the “benchmark” index is.
Common steps in the optimization process for fund managers are the following:
- Define the benchmark index against which performance and risk are measured
- Define the universe of securities that can be purchased
- Impose constraints on risk levels, typically stated in terms of risk of the benchmark index
- Draw “simulated” returns from historical experience, either in sequence or randomly
- Derive the “best” portfolio
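As a rough illustration of these steps, here is a hedged sketch using numpy and scipy. The expected active returns, covariance matrix, tracking-error budget, and position limits are all invented numbers, and real fund managers would sample historical returns rather than assume a covariance matrix.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical inputs for 5 stocks: expected active (benchmark-relative) returns
# and an assumed active-return covariance matrix; all numbers are made up.
n = 5
alpha = np.array([0.02, 0.01, 0.015, -0.005, 0.03])   # expected active returns
vols = np.array([0.20, 0.25, 0.30, 0.22, 0.35])       # active-return volatilities
corr = 0.3 + 0.7 * np.eye(n)                          # 0.3 pairwise correlation
cov = np.outer(vols, vols) * corr
te_limit = 0.05                                       # tracking-error budget

def neg_alpha(w):
    return -w @ alpha                                 # maximize expected alpha

constraints = [
    {"type": "eq",   "fun": lambda w: w.sum()},                   # active weights net to zero
    {"type": "ineq", "fun": lambda w: te_limit**2 - w @ cov @ w}, # tracking error <= budget
]
bounds = [(-0.10, 0.10)] * n                          # position limits vs. the benchmark

result = minimize(neg_alpha, x0=np.zeros(n), method="SLSQP",
                  bounds=bounds, constraints=constraints)
print("Optimal active weights:", np.round(result.x, 3))
print("Expected alpha:", round(result.x @ alpha, 4))
```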
There are a number of strengths and weaknesses in this kind of optimization. The positives are clear:
- There is a small universe of potential investments (e.g., the roughly 7,000 U.S. common stocks)
- There is lots of historical data to sample from
- It is simple to implement the return generation process
- Macro factors implicitly have an impact on the portfolio
There are some concerns about this approach that are not trivial:
- Nothing that hasn’t happened in the past will be simulated
- The analysis is usually a single period “efficient frontier” kind of analysis
- Default is very often ignored within equity portfolio analysis. The assumption that common stock returns are normally distributed, for example, implies (using historical volatility of returns) that the probability of the common stock price of Lehman Brothers or Bear Stearns going to zero in a one-month period is 0.000000% (see the short calculation after this list).
- Most analysts are unable to explicitly measure the impact of changes in macro factors
- A naïve definition of “best” leads to stupid conclusions, such as “Buy the stock that went up the most and no other stocks.” This ridiculous conclusion is normally disguised by imposing a constraint that the “best” portfolio must have N or more common stocks.
- If one samples randomly from history, rather than sequentially, one scrambles the business cycle. This understates risk by minimizing the probability that returns are consistently negative or positive.
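To see how badly the normality assumption in the default bullet above understates tail risk, a quick back-of-the-envelope calculation is enough. The 15% monthly volatility below is an illustrative figure, not the actual historical volatility of either firm.

```python
from scipy.stats import norm

# A -100% monthly return (price going to zero) under a normal-returns assumption
monthly_vol = 0.15                 # illustrative monthly return volatility
z = -1.0 / monthly_vol             # a -100% return is about 6.7 standard deviations out
prob = norm.cdf(z)
print(f"P(return <= -100%) = {prob:.2e}")   # roughly 1e-11, i.e. 0.000000% to six decimals
```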
We now turn to some very important differences in the optimization process that a banking firm faces.
The first difference facing a banking firm, as opposed to a fund manager dealing in traded securities, is sheer volume. For example, there are only 9.1 million transactions in S&P’s entire CUSIP database. A major bank in China, by contrast, has more than 700 million existing assets and liabilities. The volume of potential future assets and liabilities is larger still.
The second major difference is on the liability side of banking. Fund managers have no liabilities except to faithfully pass asset returns on to the holders of the fund’s shares. As long as they do this, the fund itself (as opposed to the fund manager) will not go bankrupt. As the Countrywide and Northern Rock examples show, bankers have a much more complex set of liabilities and the volume of those liabilities is a complex function of the value of the assets of the banking firm.
The next major difference is deciding on the objective function to be optimized. For a fund manager, “best” is pretty simple to define: “Our objective is to outperform returns on the S&P 500 while holding risk levels equal to the S&P 500.” As noted earlier, there are many alternative objectives one can define for banks and they can often be in conflict with one another.
The next major difference is the difficulty in setting up the initial “time 0” assumptions that one uses as a base in starting the simulation. Fund managers simulate returns, taking initial market values as an observable given. Bankers need to mark to market the entire balance sheet, 99.9% (by transaction count) of which has no observable market values. If this is not done correctly, the optimization simulation simply says “Buy the asset at par that’s worth more than par and sell short at par the asset that’s worth less than par.” This isn’t a very useful simulation result except in the sense that it will point out errors in the initial market conditions used at time zero.
Still another major difference is the need in most of the financial services industry for the optimization to include GAAP and (in insurance) statutory accounting. Fund managers care only about market valuation because it is exceedingly rare for them to deal in assets that don’t have observable market values (recent experience excepted!). Bankers and insurance firms face many constraints that are stated in terms of GAAP or statutory accounting, so banks and insurance companies need optimization that may depend on BOTH market valuation and GAAP or statutory accounting figures. That makes the optimization much more complex and time consuming.
There is some good news, however, when we think about optimization in banking:
- There is a long history of optimization in various aspects of finance and banking
- There is a long tradition of multiperiod simulation, not the single period analysis most fund managers are focused on when it comes to the trade-off between risk and return
- There is good information technology infrastructure in the banking industry
The other good news is that various “mini-optimizations” and “mid-sized optimizations” have been used in banking and finance for decades. Here are a few examples:
- Yield to maturity: find the constant discount rate that minimizes the squared error of bond pricing (a short sketch follows this list)
- Black-Scholes implied volatility: find the constant volatility that minimizes the squared error of option pricing
- Option-Adjusted Spread: find the constant spread that, conditional on assumptions about default and prepayment, minimizes the squared error of bond or mortgage-backed securities pricing
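As an example of how small these “mini-optimizations” are, here is a hedged sketch of the yield-to-maturity case using a hypothetical five-year bond. For a single bond, minimizing the squared pricing error is equivalent to finding the root of the pricing error.

```python
import numpy as np
from scipy.optimize import brentq

# Hypothetical bond: 5 years of annual 4% coupons plus principal, priced at 97.50
cash_flows = np.array([4.0, 4.0, 4.0, 4.0, 104.0])
times = np.arange(1, 6)
market_price = 97.50

def pricing_error(y):
    """Present value at constant discount rate y minus the observed price."""
    return np.sum(cash_flows / (1.0 + y) ** times) - market_price

# Yield to maturity: the single rate at which the pricing error is zero
ytm = brentq(pricing_error, -0.5, 1.0)
print(f"Yield to maturity: {ytm:.4%}")
```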
When it comes to larger optimizations that are found in sophisticated risk management software, yield curve and credit spread smoothing is a classic example:
- Yield curve and credit curve fitting: iterate N points on the zero-coupon curve until the squared pricing error is minimized across all securities that are priced off that yield curve.
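A minimal sketch of that idea follows, with three made-up annual-pay bonds and one zero-coupon rate per annual maturity. Production systems use many more instruments and smoother functional forms (splines or maximum smoothness forward rates), but the objective is the same squared pricing error.

```python
import numpy as np
from scipy.optimize import minimize

# Made-up instruments: annual-pay bonds described by
# (coupon rate, maturity in years, observed price per 100 of face value)
bonds = [(0.03, 1, 99.80), (0.035, 2, 99.20), (0.04, 3, 98.50)]
N = 3  # number of zero-coupon rates to iterate on (one per annual maturity)

def squared_pricing_error(zero_rates):
    error = 0.0
    for coupon, maturity, price in bonds:
        cfs = np.full(maturity, 100.0 * coupon)
        cfs[-1] += 100.0                                   # principal at maturity
        discount = (1.0 + zero_rates[:maturity]) ** -np.arange(1, maturity + 1)
        error += (np.sum(cfs * discount) - price) ** 2
    return error

fit = minimize(squared_pricing_error, x0=np.full(N, 0.03), method="Nelder-Mead")
print("Fitted zero-coupon rates:", np.round(fit.x, 4))
```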
Another mid-sized optimization is this one:
- Value at risk example: what trading positions should I take at the margin to reduce the value at risk of my current portfolio from x to y? (A sketch follows the short list below.)
Note the common features of this with the fund manager’s problem:
- The analysis is single period
- [Almost] all relevant securities are traded with observable prices
- The analysis is often based on historical price movements, not forward-simulated price movements
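A hedged sketch of that marginal value-at-risk exercise appears below. The scenario profit-and-loss vectors are simulated here purely for illustration, standing in for the historical price movements a trading desk would actually use.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical one-period P&L scenarios for the current portfolio and for a
# candidate hedging trade (simulated here; in practice drawn from history).
portfolio_pnl = rng.normal(0.0, 10.0, size=5000)
hedge_pnl = -0.6 * portfolio_pnl + rng.normal(0.0, 3.0, size=5000)

def var_99(pnl):
    """99% value at risk: the loss exceeded in only 1% of scenarios."""
    return -np.percentile(pnl, 1)

current_var = var_99(portfolio_pnl)
target_var = 0.5 * current_var   # say we want to cut VaR roughly in half

# Search over hedge sizes for the smallest position that reaches the target
for size in np.linspace(0.0, 2.0, 201):
    if var_99(portfolio_pnl + size * hedge_pnl) <= target_var:
        print(f"Hedge size {size:.2f} cuts 99% VaR from "
              f"{current_var:.1f} to {var_99(portfolio_pnl + size * hedge_pnl):.1f}")
        break
```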
In the third and final installment of this series, we turn back the clock to December 31, 2006 and ask the question that Charles Prince, then-CEO of Citigroup, should have posed to his risk managers: What should we do as an organization to maximize returns while keeping risk (interest rate risk, home price risk, commercial real estate risk, etc.) within modest boundaries? We talk about how physicists approach the problem, what’s realistic now, and what is coming in the future. Stay tuned.
Donald R. van Deventer
Kamakura Corporation
Honolulu, Hawaii
April 27, 2010