Kamakura Blog: Portfolio Optimization in Banking and Fund Management, Part 3

04/28/2010 10:43 AM

This is part 3 in a series on “balance sheet optimization.”  As we pointed out in the first two parts of the series, optimization in banking is a lot like building a software program to beat any human at chess: it’s very easy to talk about and very hard to do.  In this final installment in the series, we pose the kind of practical, realistic problem that former Citigroup CEO Charles Prince might have faced pre-credit crisis, on December 31, 2006.  As with chess software, we talk about what’s practical and what’s not practical given the current state of knowledge in applied mathematics, computer hardware, and enterprise risk management software.  We get the astrophysicists and particle physicists involved to keep it real.

What kind of optimization problem would Charles Prince have liked to pose at the end of 2006?  Here’s a practical optimization problem whose answer probably would have been useful to Mr. Prince:

Optimization objective: Maximize cumulative GAAP net income, measured monthly over 10 years
Allowable investment universe: the 70,000 structured products in the libraries of Intex or Markit Partners, which maintain electronic libraries of the indenture waterfalls that specify the order in which interest and principal are allocated to the various tranches of a structured product.

We assume the liability structure remains stable, which is not realistic.  More on this later.

Assumptions about risk limits:
  • Interest rate risk limits are as specified in the Office of Thrift Supervision “Thrift Bulletin 13a,” 1998.  For a copy of these risk limits, use this link: http://www.ots.treas.gov/_files/84074.pdf
  • Home price exposure limit: a 25% drop in home prices shall not reduce the mark-to-market value of capital by more than 5% on a default-adjusted basis.
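
In schematic terms, and with notation that is ours rather than anything from the text above, the problem looks something like the following, where w_{i,t} is the holding of structured product i in month t, NI_{s,t}(w) is GAAP net income in month t of scenario s, K_s is the mark-to-market value of capital in scenario s, and the scenario average is one natural reading of “cumulative net income”:

    \max_{\{w_{i,t}\}} \;\; \frac{1}{N}\sum_{s=1}^{N}\sum_{t=1}^{120} \mathrm{NI}_{s,t}(w)
    \quad\text{subject to the Thrift Bulletin 13a interest rate risk limits, and}
    \quad \Delta K_s(\text{home prices } -25\%) \;\ge\; -0.05\,K_s \quad\text{for every scenario } s.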

In short, can we tell Mr. Prince what portfolio of structured products will produce the most net income over the next 10 years given prudent limits on interest rate risk and home price exposure?  To maximize the probability that we can do this, we ignore the Countrywide/Northern Rock/IndyMac phenomenon explained earlier in the series, where the supply of liabilities is a function of the market value of the assets of the firm.  If it turns out this is an easy optimization problem, we can add in that possibility later.

Now we need to analyze the magnitude of this very practical problem from a computer science and applied math point of view. How many inputs are there to the objective function in this simulation? 70,000? No: we need values for all 70,000 potential investments at 120 points in time over N scenarios, because Mr. Prince needs a dynamic portfolio strategy that varies as we approach the interest rate and home price risk limits. From 120 periods x 70,000 potential investments x N scenarios, we have to derive the best portfolio not just at time zero but at each of the 120 months in the forecast period.  We assume Mr. Prince insists on N = 10,000 scenarios.
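
To put those dimensions in perspective, here is a back-of-the-envelope calculation in Python; the assumption that each security-month-scenario cell is a single 8-byte double is ours, purely for illustration:

    # Size of the simulation grid described above (figures from the text;
    # one 8-byte double per cell is an illustrative assumption).
    securities = 70_000
    months = 120
    scenarios = 10_000

    cells = securities * months * scenarios       # 84,000,000,000 simulated values
    terabytes = cells * 8 / 1e12                  # roughly 0.67 TB if each value is a double
    print(f"{cells:,} simulated values, about {terabytes:.2f} TB as doubles")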

If we are lucky, the optimization can be separated from the simulation of asset values.  In this simple example, we have assumed stable liabilities and the separation is possible. Standard tools are easy to apply, in theory, because we can use risk management software to simulate the outcomes for each of the 70,000 securities in each period and in each scenario, and then apply third-party optimization tools to find the “best” (i.e., the largest cumulative net income) strategy based on this input data.

If, however, we allow the credit risk of our assets to affect the supply of liabilities, the optimization has to be deeply embedded in the risk management simulation engine. For this reason Kamakura implemented the Levenberg-Marquardt non-linear optimization approach in 1992.  As a general statement, only in the easiest of risk management optimization problems can the simulation of outcomes and the construction of the best portfolio be done separately, instead of simultaneously.

Let’s continue to assume that this separation is possible for Mr. Prince’s hypothetical optimization problem. There are many third-party tools for the general solution of optimization problems subject to constraints. As mentioned above, Kamakura Risk Manager has relied on the Levenberg-Marquardt non-linear solution algorithm for more than 18 years.  Wikipedia has a nice explanation of this approach here:
http://en.wikipedia.org/wiki/Levenberg-Marquardt_algorithm
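
For readers who want to see the algorithm at work on a toy problem, here is a minimal sketch using the Levenberg-Marquardt option in SciPy’s least_squares routine; SciPy and the two-parameter income curve below are our illustration only and say nothing about how Kamakura Risk Manager implements the algorithm:

    # Minimal Levenberg-Marquardt example: fit a hypothetical cumulative net
    # income curve y = a * (1 - exp(-b * t)) to noisy monthly observations.
    import numpy as np
    from scipy.optimize import least_squares

    t = np.linspace(1, 120, 120)                      # 120 monthly periods
    rng = np.random.default_rng(0)
    y_obs = 5.0 * (1 - np.exp(-0.03 * t)) + rng.normal(0.0, 0.05, t.size)

    def residuals(params):
        a, b = params
        return a * (1 - np.exp(-b * t)) - y_obs

    # method="lm" selects the MINPACK implementation of Levenberg-Marquardt.
    fit = least_squares(residuals, x0=[1.0, 0.1], method="lm")
    print(fit.x)                                      # estimated (a, b)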

Under the topic “optimization algorithms,” Wikipedia lists these 92 methods used in applied mathematics, so we have lots of choices for solving Mr. Prince’s puzzle:

  1. Alpha-beta pruning
  2. Artificial Bee Colony Algorithm
  3. Artificial neural network
  4. Artificial immune system
  5. Auction algorithm
  6. Automatic label placement
  7. BFGS method
  8. Bees algorithm
  9. Bin packing problem
  10. Biologically inspired algorithms
  11. Bland’s rule
  12. Boid (computer science)
  13. Branch and bound
  14. Branch and cut
  15. CMA-ES
  16. Compositional pattern-producing network
  17. Conjugate gradient method
  18. Cooperative optimization
  19. Crew scheduling
  20. Cross-entropy method
  21. Cuckoo search
  22. Cutting-plane method
  23. Davidon–Fletcher–Powell formula
  24. Delayed column generation
  25. Derivation of the conjugate gradient method
  26. Destination dispatch
  27. Differential evolution
  28. Divide and conquer algorithm
  29. Dynamic programming
  30. Evolutionary algorithm
  31. Evolutionary multi-modal optimization
  32. Evolutionary programming
  33. Expectation-maximization algorithm
  34. Extremal optimization
  35. Firefly algorithm
  36. Fourier–Motzkin elimination
  37. Frank–Wolfe algorithm
  38. Gauss–Newton algorithm
  39. Genetic algorithm
  40. Genetic algorithm in economics
  41. Golden section search
  42. Gradient descent
  43. Graduated optimization
  44. Great Deluge algorithm
  45. Greedy algorithm
  46. Group method of data handling
  47. Guided Local Search
  48. Harmony search
  49. Hill climbing
  50. Intelligent Water Drops
  51. Interior point method
  52. Job Shop Scheduling
  53. K-approximation of k-hitting set
  54. Karmarkar’s algorithm
  55. L-BFGS
  56. Levenberg–Marquardt algorithm
  57. Local search (optimization)
  58. MCS algorithm
  59. Matrix chain multiplication
  60. Maximum subarray problem
  61. Mehrotra predictor-corrector method
  62. Meta-optimization
  63. Minimax
  64. Nearest neighbor search
  65. Negamax
  66. Nelder–Mead method
  67. Newton’s method
  68. Newton’s method in optimization
  69. Nonlinear conjugate gradient method
  70. Ordered subset expectation maximization
  71. Parallel tempering
  72. Particle swarm optimization
  73. Penalty method
  74. Powell’s method
  75. Quasi-Newton method
  76. Radial basis function network
  77. Random optimization
  78. Reactive search optimization
  79. Rosenbrock methods
  80. Rprop
  81. SR1 formula
  82. Sequence-dependent setup
  83. Sequential Minimal Optimization
  84. Simplex algorithm
  85. Simulated annealing
  86. Stochastic hill climbing
  87. Successive parabolic interpolation
  88. Swarm intelligence
  89. Tabu search
  90. Tree rearrangement
  91. Very large-scale neighborhood search
  92. Zionts–Wallenius method

Because optimization is common to almost all quantitative disciplines, it comes as no surprise that my colleagues with the broadest exposure to optimization come from astrophysics and particle physics.  I called on two specialists to advise on Mr. Prince’s dilemma:

  • Dr. M holds a Ph.D. in astrophysics from UC Berkeley and is a former NASA post-doctoral fellow.
  • Dr. T holds a Ph.D. in physics from UC Berkeley and is a former associate research physicist at Princeton, Lawrence Berkeley Laboratory, and the Stanford Linear Accelerator.

My colleagues made the point that the calculation time for an optimization algorithm can be summarized by the following statement:

For most algorithms, the time it takes to optimize a function of m floating parameters given n (greater than or equal to m) data points is proportional to nm².

Dr. T described Mr. Prince’s dilemma this way:

“m” is the number of unknowns in your model. In this case, the “free”, or “floating”, parameters that you are trying to optimize for are the numbers of each of the 70,000 structured products that you want to hold on any given day. If you are willing to trade once a month, you would have 70,000 securities x 120 months = 8.4 million free parameters. If you want to build your portfolio once and never trade within this 120-month period, the number of free parameters would be 70,000.

Whether it’s the time zero investment strategy only or the 120 months of dynamic positions in our 70,000 candidate securities, it’s easy to see that Mr. Prince’s dilemma is not a trivial exercise.

Dr. T went on after hearing that we’d like to use 10,000 Monte Carlo scenarios as the basis for the optimization:

“n” is the number of individual calculations you would have to make to compute the optimand given one particular portfolio composition (one particular set of free parameters). In this case, it would be proportional to the number of products, the number of time periods, and the number of scenarios: 70,000 x 120 x 10,000 = 84 billion.
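
Putting Dr. T’s m and n together with the “proportional to nm²” rule of thumb gives a feel for how much more a monthly rebalancing strategy costs than a buy-and-hold portfolio. The snippet below is a relative-cost sketch only, using the figures quoted above:

    # Relative optimization cost under the n * m**2 rule of thumb quoted above.
    securities = 70_000
    months = 120
    scenarios = 10_000

    m_static = securities                  # build the portfolio once and never trade
    m_dynamic = securities * months        # rebalance monthly: 8.4 million parameters
    n = securities * months * scenarios    # 84 billion calculations per portfolio evaluation

    cost_static = n * m_static ** 2
    cost_dynamic = n * m_dynamic ** 2
    print(f"m static = {m_static:,}, m dynamic = {m_dynamic:,}, n = {n:,}")
    print(f"monthly rebalancing is {cost_dynamic / cost_static:,.0f} times more expensive")  # 120**2 = 14,400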

If there is any good news, it is that the closer the initial parameter values are to the optimal ones, and the more “benign” the function we are maximizing (10 years of net income in our case), the less time the optimization takes and the smaller the chance that the optimization fails to converge.

I asked Dr. T to describe the strategy he had taken toward optimization as an expert in particle physics.  He said,

“My personal experience is largely limited to the optimization/minimization tools developed by CERN, that is, the MINUIT library. Its flagship algorithm, called MIGRAD, is a variable-metric method with inexact line search and checks for positive-definiteness. It uses only first derivatives (numerical ones by default). The Levenberg-Marquardt algorithm, on the other hand, assumes that the model is twice differentiable and calculates the Hessian explicitly (numerically). Levenberg-Marquardt should therefore be the method of choice for analytical models.”

I then asked him, from the perspective of particle physics, what was the largest optimization problem that he’d seen someone take on?  His answer was humbling in light of Mr. Prince’s dilemma:

“The highest I (or anyone I’ve had a chance to meet) have had to go was m=117 and n=500,000 to minimize a highly non-linear maximum-likelihood function. The fit was not parallelizable across multiple CPUs due to the particularly complex maximum likelihood function choice, and it took 3 days to converge. Optimization itself, however, accounted for less than 0.01% of that time; the rest was taken by calculating the function values and shuffling numbers back and forth in memory.”

The bad news is that the dimensions of the fairly simple simulation we’ve outlined for optimization are currently beyond the ability of the world’s best mathematicians and computer scientists if we do the analysis at the transaction level in a brute-force way. This is the financial equivalent of building software to play chess by evaluating every single possible strategy as step 1 and then choosing the best strategy as step 2.  That approach doesn’t work in chess software, and it clearly won’t work in an enterprise risk management context.  In both cases, we have to be smarter. Even if we could do it at the transaction level, Dr. M points out that “computers do what you tell them to do, not what you want them to do.”  That is, a poorly benchmarked set of assumptions could still produce an unusable “best” strategy.

Now for the good news.  As Dr. T pointed out in his last quote, less than 0.01% of the time in the optimization exercise he mentioned went to the optimization mathematics itself; the rest was spent calculating function values and moving numbers back and forth between memory and the hard disk, because memory would otherwise have been exhausted.  With 64-bit systems now only months from widespread implementation, this need to move numbers back and forth to the hard drive will be dramatically reduced.

There’s more good news.  As we noted in our April 2009 blog on great quotations from the credit crisis, the New York Times once quoted a senior Citigroup executive as saying, “Charles Prince didn’t know the difference between a CDO and a grocery list.” Most risk managers, however, can isolate the key dimensions of the problem and make a practical simulation doable with the current state of the art in risk management software.  This corresponds to the chess software strategy of skipping simulations of strategies that an expert chess player knows, even before the simulation, to be inferior.

One way to make the problem more manageable is to summarize the data into fewer transactions.  This is a popular but dangerous choice if the nature of the data is not well understood by the user.  For instance, two portfolios of mortgage loans may have current loan-to-value ratios of 40% in one portfolio and 120% in the other.  If we say this is equivalent to one big portfolio with an average current loan-to-value ratio of 80%, we’re mathematically correct, but the measured risk will understate the risk we would see if we analyzed the portfolios separately.
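
A stylized calculation makes the danger concrete. The portfolio sizes, the assumption that both loans default, and the zero recovery costs are ours, chosen only to show the direction of the bias:

    # Why averaging loan-to-value ratios hides risk: two $100 houses, one with a
    # $40 loan (40% LTV) and one with a $120 loan (120% LTV), and a 25% price drop.
    # Losses assume default with recovery equal to the post-drop collateral value.
    price_drop = 0.25
    house = 100.0
    loans = {"Portfolio A (40% LTV)": 40.0, "Portfolio B (120% LTV)": 120.0}

    post_drop_value = house * (1 - price_drop)                        # 75.0 per house

    separate_loss = sum(max(0.0, loan - post_drop_value) for loan in loans.values())
    pooled_loan = sum(loans.values())                                 # 160 on 200 of collateral: "80% LTV"
    pooled_loss = max(0.0, pooled_loan - 2 * post_drop_value)

    print(f"Loss measured loan by loan: {separate_loss:.0f}")                   # 45
    print(f"Loss measured on the pooled 80% LTV portfolio: {pooled_loss:.0f}")  # 10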

We do understand the true economics of the home price risk that ultimately did Citigroup in, per the Vikram Pandit quote earlier in this blog series.  We also understand the other major risks most banks typically face:

  • If you have a $100,000 house with an $80,000 loan on it, you need home price put options to hedge your risk, with a strike price starting at $80,000 and falling over time as the mortgage payments are received (see the sketch after this list)
  • Interest rate risk is well understood
  • Commercial real estate values and home values are highly correlated
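
The sketch promised in the first bullet: at default, the lender’s collateral shortfall on a mortgage has the same shape as the payoff of a put on the house struck at the loan balance, which is why a put at that strike is the natural hedge. The numbers below are hypothetical:

    # Lender's shortfall at default versus the payoff of a home price put struck
    # at the loan balance (hypothetical $100,000 house, $80,000 loan, 30% price drop).
    def loss_given_default(house_value: float, loan_balance: float) -> float:
        """Collateral shortfall if the borrower defaults (ignoring costs and partial recoveries)."""
        return max(0.0, loan_balance - house_value)

    def put_payoff(house_value: float, strike: float) -> float:
        return max(0.0, strike - house_value)

    house_value, loan_balance = 70_000.0, 80_000.0
    print(loss_given_default(house_value, loan_balance))   # 10000.0 -- the loss on the loan
    print(put_payoff(house_value, strike=80_000.0))        # 10000.0 -- the hedge pays what the loan loses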

For both residential real estate and commercial real estate, a loan-by-loan analysis of current loan-to-value ratios makes these key risks clear even before we initiate a simulation and optimization.

These practical steps will simplify the optimization problem and mimic what happens in day-to-day transaction generation:

  • Work with management to set maximum limits on the “delta” with respect to home prices and commercial real estate values
  • Within those constraints, identify the best opportunities that the line groups can find in those sectors
  • Constantly monitor the deltas to see if macro factor movements have changed the risk, and hedge when necessary (a minimal monitoring sketch follows this list)
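
As a flavor of that third step, here is a minimal monitoring sketch; the factor names, dollar deltas, and limits are hypothetical and would come from management and the board in practice:

    # Compare current macro-factor deltas against approved limits (all figures hypothetical).
    # "Delta" here means the change in mark-to-market capital for a 1% move in the factor.
    limits = {"home_prices": 2.0e6, "commercial_real_estate": 1.5e6}          # dollar limit per 1% move
    current_deltas = {"home_prices": 2.4e6, "commercial_real_estate": 0.9e6}  # from the latest simulation

    for factor, delta in current_deltas.items():
        if abs(delta) > limits[factor]:
            print(f"{factor}: delta {delta:,.0f} exceeds limit {limits[factor]:,.0f} -- hedge required")
        else:
            print(f"{factor}: delta {delta:,.0f} is within the {limits[factor]:,.0f} limit")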

For risk managers, as for chess masters, the machines cannot replace us yet.  We need the machines, and the machines need us to help short-cut aspects of the optimization that take lots of machine time but don’t add to the quality of the answer.  For more details on how to make day-to-day practical optimization a reality, please contact us at info@kamakuraco.com.

Donald R. van Deventer
Kamakura Corporation
Honolulu, Hawaii
April 28, 2010

ABOUT THE AUTHOR

Donald R. Van Deventer, Ph.D.

Don founded Kamakura Corporation in April 1990 and currently serves as its chairman and chief executive officer, focusing on enterprise-wide risk management and modern credit risk technology. His primary financial consulting and research interests involve the practical application of leading-edge financial theory to solve critical financial risk management problems. Don was elected to the 50-member RISK Magazine Hall of Fame in 2002 for his work at Kamakura.
