This is part 3 in a series on “balance sheet optimization.” As we pointed out in the first two parts of the series, optimization in banking is a lot like building a software program to beat any human at chess: it’s very easy to talk about, and it’s very hard to do. In this final installment in the series, we pose the kind of practical, realistic problem that the former Citigroup CEO might have faced pre-credit crisis, on December 31, 2006. As with chess software, we talk about what’s practical and what’s not practical given the current state of knowledge in applied mathematics, computer hardware and enterprise risk management software. We get the astrophysicists and physicists involved to keep it real.
What kind of optimization problem would Charles Prince have liked to pose at the end of 2006? Here’s a practical optimization problem whose answer probably would have been useful to Mr. Prince:
Optimization objective: Maximize cumulative GAAP net income, simulated monthly, over 10 years
Allowable investment universe: The 70,000 structured products in the electronic libraries of Intex or Markit Partners, which maintain the waterfalls in the indentures that specify the order in which interest and principal are allocated to the various tranches of a structured product.
We assume the liability structure remains stable, which is not realistic. More on this later.
Assumptions about risk limits:
Interest rate risk rules are as specified in the Office of Thrift Supervision “Thrift Bulletin 13a,” 1998. For a copy of these risk limits, use this link:
http://www.ots.treas.gov/_files/84074.pdf
Home price exposure limit assumptions: A 25% drop in home prices shall not reduce the mark to market value of capital by more than 5% on a default adjusted basis
In short, can we tell Mr. Prince what portfolio of structured products will produce the most net income over the next 10 years given prudent limits on interest rate risk and home price exposure? To maximize the probability that we can do this, we ignore the Countrywide/Northern Rock/IndyMac phenomenon explained earlier in the series, where the supply of liabilities is a function of the market value of the assets of the firm. If it turns out this is an easy optimization problem, we can add in that possibility later.
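As a concrete illustration of the home price limit above, here is a minimal sketch in Python of the constraint test it implies. The dollar amounts are hypothetical, and in practice the stressed capital figure would come from a default-adjusted mark-to-market revaluation of every position.

```python
# Minimal sketch of the home price risk limit: a 25% drop in home prices must
# not reduce the default-adjusted mark-to-market value of capital by more than
# 5%. The dollar amounts below are hypothetical.
def home_price_limit_ok(capital_base, capital_after_25pct_drop, max_loss=0.05):
    """Return True if the stressed loss of capital stays within the limit."""
    loss_fraction = (capital_base - capital_after_25pct_drop) / capital_base
    return loss_fraction <= max_loss

print(home_price_limit_ok(20_000.0, 19_200.0))   # 4% loss of capital -> True
print(home_price_limit_ok(20_000.0, 18_000.0))   # 10% loss of capital -> False
```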
Now we need to analyze the magnitude of this very practical problem from a computer science and applied math point of view. How many inputs are there to the objective function in this simulation? 70,000? No: we need values for all 70,000 potential investments at 120 points in time over N scenarios, because Mr. Prince needs a dynamic portfolio strategy that varies as we approach the interest rate and home price risk limits. From 120 periods x 70,000 potential investments x N scenarios, we have to derive the best portfolio not just at time zero but at each of the 120 months in the forecast period. We assume Mr. Prince insists on N = 10,000 scenarios.
If we are lucky, the optimization can be separated from the simulation of asset values. In this simple example, we have assumed stable liabilities and the separation is possible. Standard tools are easy to apply, in theory, because we can use risk management software to simulate the outcomes for each of the 70,000 securities in each period and in each scenario, and then apply third party optimization tools to find the “best” (i.e. largest cumulative net income) strategy based on this input data.
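A minimal sketch of that two-step, “separable” approach appears below, written in Python with NumPy and SciPy as stand-ins for the risk management and optimization software; the dimensions are scaled far down and the simulated income numbers are random placeholders, not real valuations.

```python
# Step 1: simulate an income cube (scenario x month x security) with risk
# management software; random numbers stand in for real simulation output here.
# Step 2: hand the summarized results to a generic optimizer.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n_securities, n_months, n_scenarios = 50, 120, 200   # toy sizes, not 70,000 and 10,000

# simulated monthly net income per $1 invested in each security
income_cube = rng.normal(0.004, 0.01, size=(n_scenarios, n_months, n_securities))

# expected cumulative 10-year net income per $1 of each security
expected_income = income_cube.sum(axis=1).mean(axis=0)

# choose a static portfolio (weights sum to 1, no short positions) that
# maximizes expected cumulative income; linprog minimizes, so negate.
result = linprog(
    c=-expected_income,
    A_eq=np.ones((1, n_securities)),
    b_eq=[1.0],
    bounds=[(0.0, 1.0)] * n_securities,
)
weights = result.x
print("expected 10-year income of the optimized portfolio:", expected_income @ weights)
```

With nothing but a budget constraint, the optimizer simply puts everything into the single security with the highest expected income, which is exactly why the interest rate and home price limits above must enter as additional constraints.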
If, however, we allow the credit risk of our assets to affect the supply of liabilities, the optimization has to be deeply embedded in the risk management simulation engine. For this reason Kamakura implemented the Levenberg-Marquardt non-linear optimization approach in 1992. As a general statement, only in the easiest of risk management optimization problems can the simulation of outcomes and the construction of the best portfolio be done separately, instead of simultaneously.
Let’s continue to assume that this separation is possible for Mr. Prince’s hypothetical optimization problem. There are many third party tools for the general solution of optimization subject to constraints. As mentioned above, Kamakura Risk Manager has relied on the Levenberg-Marquardt non-linear solution algorithm for more than 18 years. Wikipedia has a nice explanation of this approach here:
http://en.wikipedia.org/wiki/Levenberg-Marquardt_algorithm
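For readers who want to see the algorithm in action, here is a minimal, generic illustration in Python; it uses SciPy’s “lm” driver, which wraps the MINPACK Levenberg-Marquardt code, and has nothing to do with Kamakura’s own implementation. The data and the two-parameter model are made up for the example.

```python
# Generic Levenberg-Marquardt example: fit a two-parameter exponential decay
# to noisy synthetic data. The "lm" method is SciPy's wrapper around MINPACK's
# Levenberg-Marquardt routine.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(1)
t = np.linspace(0.0, 10.0, 50)
y_observed = 3.0 * np.exp(-0.5 * t) + rng.normal(0.0, 0.05, t.size)

def residuals(params):
    amplitude, decay = params
    return amplitude * np.exp(-decay * t) - y_observed

fit = least_squares(residuals, x0=[1.0, 1.0], method="lm")
print("fitted parameters:", fit.x)   # should be close to (3.0, 0.5)
```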
Under the topic “optimization algorithms,” Wikipedia lists more than 90 methods used in applied mathematics, so we have lots of choices for solving Mr. Prince’s puzzle:
- Alpha-beta pruning
- Artificial Bee Colony Algorithm
- Artificial neural network
- Artificial immune system
- Auction algorithm
- Automatic label placement
- BFGS method
- Bees algorithm
- Bin packing problem
- Biologically inspired algorithms
- Bland’s rule
- Boid (computer science)
- Branch and bound
- Branch and cut
- CMA-ES
- Compositional pattern-producing network
- Conjugate gradient method
- Cooperative optimization
- Crew scheduling
- Cross-entropy method
- Cuckoo search
- Cutting-plane method
- Davidon–Fletcher–Powell formula
- Delayed column generation
- Derivation of the conjugate gradient method
- Destination dispatch
- Differential evolution
- Divide and conquer algorithm
- Dynamic programming
- Evolutionary algorithm
- Evolutionary multi-modal optimization
- Evolutionary programming
- Expectation-maximization algorithm
- Extremal optimization
- Firefly algorithm
- Fourier–Motzkin elimination
- Frank–Wolfe algorithm
- Gauss–Newton algorithm
- Genetic algorithm
- Genetic algorithm in economics
- Golden section search
- Gradient descent
- Graduated optimization
- Great Deluge algorithm
- Greedy algorithm
- Group method of data handling
- Guided Local Search
- Harmony search
- Hill climbing
- Intelligent Water Drops
- Interior point method
- Job Shop Scheduling
- K-approximation of k-hitting set
- Karmarkar’s algorithm
- L-BFGS
- Levenberg–Marquardt algorithm
- Local search (optimization)
- MCS algorithm
- Matrix chain multiplication
- Maximum subarray problem
- Mehrotra predictor-corrector method
- Meta-optimization
- Minimax
- Nearest neighbor search
- Negamax
- Nelder–Mead method
- Newton’s method
- Newton’s method in optimization
- Nonlinear conjugate gradient method
- Ordered subset expectation maximization
- Parallel tempering
- Particle swarm optimization
- Penalty method
- Powell’s method
- Quasi-Newton method
- Radial basis function network
- Random optimization
- Reactive search optimization
- Rosenbrock methods
- Rprop
- SR1 formula
- Sequence-dependent setup
- Sequential Minimal Optimization
- Simplex algorithm
- Simulated annealing
- Stochastic hill climbing
- Successive parabolic interpolation
- Swarm intelligence
- Tabu search
- Tree rearrangement
- Very large-scale neighborhood search
- Zionts–Wallenius method
Because optimization is common to almost all quantitative disciplines, it comes as no surprise that my colleagues with the broadest exposure to optimization come from astrophysics and particle physics. I called on two specialists to advise on Mr. Prince’s dilemma:
- Dr. M holds a Ph.D. in astrophysics from UC Berkeley and is a former NASA post-doctoral fellow.
- Dr. T holds a Ph.D. in physics from UC Berkeley and is a former associate research physicist at Princeton, Lawrence Berkeley Laboratory, and the Stanford Linear Accelerator.
My colleagues made the point that the calculation time for an optimization algorithm can be summarized by the following statement:
For most algorithms, the time it takes to optimize a function of m floating parameters given n (greater than or equal to m) data points is proportional to n·m².
Dr. T described Mr. Prince’s dilemma this way:
“m” is the number of unknowns in your model. In this case, the “free”, or “floating”, parameters that you are trying to optimize for are the numbers of each of the 70,000 structured products that you want to hold on any given day. If you are willing to trade once a month, you would have 70,000 securities x 120 months = 8.4 million free parameters. If you want to build your portfolio once and to never trade within this 120-month period, the number of free parameters would be 70,000.
Whether it’s the time zero investment strategy only or the 120 months of dynamic positions in our 70,000 candidate securities, it’s easy to see that Mr. Prince’s dilemma is not a trivial exercise.
Dr. T went on after hearing that we’d like to do 10,000 Monte Carlo scenarios as the basis for the optimization:
“n” is the number of individual calculations you would have to make to compute the optimand given one particular portfolio composition (one particular set of free parameters). In this case, it would be proportional to the number of products, the number of time periods, and the number of scenarios: 70,000 x 120 x 10,000 = 84 billion.
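Putting Dr. T’s two quotes together gives a back-of-the-envelope sense of scale; the short calculation below only counts operations using the n·m² rule of thumb, and the unknown proportionality constant is ignored.

```python
# Scale of Mr. Prince's problem under the n*m^2 rule of thumb. These are raw
# counts; the constant of proportionality depends on the algorithm chosen.
m_static  = 70_000                 # free parameters if the portfolio never trades
m_dynamic = 70_000 * 120           # free parameters with monthly rebalancing
n = 70_000 * 120 * 10_000          # valuations needed per objective evaluation

print(f"m (static)        = {m_static:,}")        # 70,000
print(f"m (dynamic)       = {m_dynamic:,}")       # 8,400,000
print(f"n                 = {n:,}")               # 84,000,000,000
print(f"n * m^2 (dynamic) = {n * m_dynamic**2:.1e} operations, give or take a constant")
```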
If there is any good news, it is that the closer the initial parameter values are to the optimal ones, and the more “benign” the function we are maximizing (10 years of net income in our case), the less time the optimization takes and the smaller the chance that the optimization might fail to converge.
I asked Dr. T to describe the strategy he had taken toward optimization as an expert in particle physics. He said,
“My personal experience is largely limited to the optimization/minimization tools developed by CERN, that is, the MINUIT library. Its flagship algorithm, called MIGRAD, is a variable-metric method with inexact line search and checks for positive-definiteness. It uses only first derivatives (numerical ones by default). The Levenberg-Marquardt algorithm, on the other hand, assumes that the model is twice differentiable and calculates the Hessian explicitly (numerically). Levenberg-Marquardt should therefore be the method of choice for analytical models.”
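MIGRAD itself ships with CERN’s MINUIT library rather than with general-purpose tools, but the idea Dr. T describes, a variable-metric (quasi-Newton) method driven only by first derivatives, is widely available; the sketch below uses SciPy’s BFGS implementation as a stand-in on a made-up two-variable objective.

```python
# Stand-in for a variable-metric method: SciPy's BFGS minimizer builds up
# curvature information from first derivatives alone, much as MIGRAD does.
# The objective function here is a hypothetical smooth bowl.
import numpy as np
from scipy.optimize import minimize

def objective(x):
    return (x[0] - 2.0) ** 2 + 10.0 * (x[1] + 1.0) ** 2   # minimum at (2, -1)

result = minimize(objective, x0=np.array([0.0, 0.0]), method="BFGS")
print("minimum found at:", result.x)   # approximately [2, -1]
```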
I then asked him, from the perspective of particle physics, what was the largest optimization problem that he’d seen someone take on? His answer was humbling in light of Mr. Prince’s dilemma:
“The highest I (or anyone I’ve had a chance to meet) have had to go was m=117 and n=500,000 to minimize a highly non-linear maximum-likelihood function. The fit was not parallelizable across multiple CPUs due to the particularly complex maximum likelihood function choice, and it took 3 days to converge. Optimization itself, however, accounted for less than 0.01% of that time; the rest was taken by calculating the function values and shuffling numbers back and forth in memory.”
The bad news is that the dimensions of the fairly simple simulation we’ve outlined for optimization are currently beyond the ability of the world’s best mathematicians and computer scientists if we do the analysis at the transaction level in a brute force way. This is the financial equivalent of building software to play chess by evaluating every single possible strategy as step 1 and then choosing the best strategy as step 2. This strategy doesn’t work in chess software, and it clearly won’t work in an enterprise risk management context. In both cases, we have to be smarter. Even if we could do it at the transaction level, Dr. M points out “Computers do what you tell them to do, not what you want them to do.” That is, a poorly benchmarked set of assumptions could still produce an unusable “best” strategy.
Now for the good news. As Dr. T pointed out in his last quote, only 0.01% of the time involved in the optimization exercise he mentioned was spent on the mathematics; the rest of the time went to moving numbers back and forth between computer memory and the hard disk, because memory capacity would otherwise have been exceeded. With 64-bit systems now only months from widespread implementation, this need to shuttle numbers to and from the hard drive will be dramatically reduced.
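A rough calculation shows why memory, not mathematics, is the bottleneck; the sketch below assumes, purely for illustration, that every simulated value in Mr. Prince’s problem is held as an eight-byte double.

```python
# Rough memory arithmetic behind the "shuffling numbers" bottleneck.
values = 70_000 * 120 * 10_000        # security x month x scenario results
bytes_needed = values * 8             # 8 bytes per double-precision number

print(f"{bytes_needed / 2**30:,.0f} GiB to hold the full cube in memory")
print("versus roughly 4 GiB addressable by a 32-bit process")
```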
There’s more good news. In our April 2009 blog on great quotations from the credit crisis, we noted that the New York Times once quoted a senior Citigroup executive as saying “Charles Prince didn’t know the difference between a CDO and a grocery list.” Most risk managers, however, can isolate the key dimensions of the problem and make a practical simulation doable with the current state of the art in risk management software. This corresponds to the chess software strategy of skipping simulations of strategies that an expert chess player knows, even before the simulation, to be inferior.
One way to make the problem more manageable is to summarize the data into fewer transactions. This is a popular but dangerous choice if the nature of the data is not well understood by the user. For instance, two portfolios of mortgage loans may have current loan to value ratios that are 40% in one portfolio and 120% in the other. If we say this is equivalent to one big portfolio with an average current loan to value of 80%, we’re mathematically correct but the measured risk will understate the risk we would perceive if we analyzed the portfolios separately.
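A toy calculation makes the danger concrete; the numbers below are hypothetical, with two $100 pools at 40% and 120% loan to value blended into one pool at the 80% average.

```python
# Why averaging loan-to-value ratios hides risk: a 25% home price drop leaves
# the 40% LTV pool fully covered and the 120% LTV pool deeply underwater,
# while the blended 80% LTV pool still looks comfortably collateralized.
def shortfall(balance, ltv, price_drop):
    """Collateral shortfall after a proportional home price decline."""
    home_value = balance / ltv
    return max(balance - home_value * (1.0 - price_drop), 0.0)

drop = 0.25
separate = shortfall(100.0, 0.40, drop) + shortfall(100.0, 1.20, drop)
blended = shortfall(200.0, 0.80, drop)   # same $200 of loans at the average LTV

print(f"shortfall, pools analyzed separately: {separate:.1f}")   # 37.5
print(f"shortfall, blended 80% LTV portfolio: {blended:.1f}")    # 12.5
```

The blended view reports only a third of the collateral shortfall that a pool-by-pool analysis reveals.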
We do understand the true economics of the home price risk that ultimately did Citigroup in, per the Vikram Pandit quote earlier in this blog series. We also understand the other major risks most banks typically face:
- If you have a $100,000 house with an $80,000 loan on it, you need home price put options to hedge your risk, with the strike price starting at $80,000 initially and falling over time as the mortgage payments are received (a simple amortization sketch follows this list)
- Interest rate risk is well understood
- Commercial real estate values and home values are highly correlated
For both residential real estate and commercial real estate, a loan by loan analysis of current loan to value ratios makes these key risks clear even before we initiate a simulation and optimization.
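As a small illustration of the first bullet above, the sketch below builds the declining put strike schedule from a standard amortization calculation; the 30-year term and 6% mortgage rate are hypothetical.

```python
# The put option strike tracks the outstanding loan balance, which falls as
# each monthly payment delivers a little more principal. Loan terms are
# hypothetical: $80,000 balance, 6% annual rate, 30-year level-payment loan.
monthly_rate, n_months, balance = 0.06 / 12, 360, 80_000.0
payment = balance * monthly_rate / (1.0 - (1.0 + monthly_rate) ** -n_months)

strikes = []
for _ in range(n_months):
    strikes.append(balance)                         # put strike = current balance
    balance -= payment - balance * monthly_rate     # principal portion of payment

print(f"strike in month 1:   {strikes[0]:,.0f}")    # 80,000
print(f"strike in month 120: {strikes[119]:,.0f}")  # lower as the loan amortizes
```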
These practical steps will simplify the optimization problem and mimic what happens in day-to-day transaction generation (a monitoring sketch follows this list):
- Working with management, set maximum limits on the “delta” with respect to home prices and commercial real estate values
- Within those constraints, identify the best opportunities that the line groups can find in those sectors
- Constantly monitor the deltas to see if macro factor movements have changed the risk, and hedge when necessary
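A minimal sketch of that monitoring loop follows; the positions, notional amounts, deltas and limits are all hypothetical, and in practice the deltas would come from the full default-adjusted valuation of each transaction.

```python
# Compare the portfolio's dollar "delta" to home prices and to commercial real
# estate values against management's limits and flag where a hedge is needed.
portfolio = [
    # (position, notional, delta to home prices, delta to commercial real estate)
    ("residential mortgage pool", 500.0, -0.60, -0.10),
    ("CMBS tranche",              200.0, -0.15, -0.80),
    ("home price put hedge",      100.0, +0.90,  0.00),
]
limits = {"home prices": 300.0, "commercial real estate": 200.0}

home_price_delta = sum(n * d_hpi for _, n, d_hpi, _ in portfolio)
cre_delta        = sum(n * d_cre for _, n, _, d_cre in portfolio)

for factor, delta in [("home prices", home_price_delta),
                      ("commercial real estate", cre_delta)]:
    status = "HEDGE NEEDED" if abs(delta) > limits[factor] else "within limit"
    print(f"delta to {factor}: {delta:+.0f}  (limit {limits[factor]:.0f})  {status}")
```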
For risk managers, as for chess masters, the machines cannot replace us yet. We need the machines, and the machines need us to help short-cut the aspects of the optimization that take lots of machine time but don’t add to the quality of the answer. For more details on how to make day-to-day practical optimization a reality, please contact us at info@kamakuraco.com.
Donald R. van Deventer
Kamakura Corporation
Honolulu, Hawaii
April 28, 2010