Journal of Risk Model Validation

Steve Satchell
Trinity College, University of Cambridge

This issue of The Journal of Risk Model Validation differs from previous ones in that it contains three papers rather than four. Those three papers are, however, substantial in various ways: quality, not quantity.

In the issue’s first paper, “Overfitting in portfolio optimization”, Matteo Maggiolo and Oleg Szehr measure the out-of-sample performance of sample-based rolling-window neural network portfolio optimization strategies. They show that if neural network strategies are evaluated using the holdout (train/test split) technique, high out-of-sample performance scores can frequently be achieved. Although this phenomenon is often employed to validate neural network portfolio models, the authors argue that this does not constitute true outperformance or validation. To assess whether overfitting is present, they set up a dedicated methodology based on combinatorially symmetric cross-validation that involves performance measurement across different holdout periods and varying portfolio compositions. They name their method the random-asset-stabilized combinatorially symmetric cross-validation methodology. Maggiolo and Szehr compare a variety of neural network strategies with classical extensions of the mean–variance model and the 1/N strategy. They find that outperforming classical models is by no means trivial. While certain neural network strategies outperform the 1/N benchmark, of the nearly 30 models that Maggiolo and Szehr evaluate explicitly, none is consistently better than the short-sale-constrained minimum-variance rule in terms of the Sharpe ratio or the certainty equivalent of returns. It can be argued that the certainty equivalent of returns has more credibility than the Sharpe ratio, which is an inappropriate measure except in the classical model. This is a valuable paper, and Maggiolo and Szehr shed light on concerns that many of us have about neural networks.
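The flavor of combinatorially symmetric cross-validation can be conveyed with a short sketch. The function names, the choice of the Sharpe ratio as the performance score and the probability-of-overfitting statistic below are illustrative assumptions, not the authors’ implementation: the sample is cut into equal blocks, every half-and-half combination of blocks serves once as the holdout, and one records how often the strategy that looks best in-sample disappoints out-of-sample.

```python
import numpy as np
from itertools import combinations


def sharpe_ratio(returns):
    """Annualized Sharpe ratio of a daily return series (illustrative score)."""
    return np.sqrt(252) * returns.mean() / returns.std(ddof=1)


def cscv_splits(n_periods, n_blocks=8):
    """Yield (train_idx, test_idx) pairs for combinatorially symmetric
    cross-validation: every choice of half the blocks is a training set
    and its complement is the holdout."""
    blocks = np.array_split(np.arange(n_periods), n_blocks)
    for train_blocks in combinations(range(n_blocks), n_blocks // 2):
        train_idx = np.concatenate([blocks[b] for b in train_blocks])
        test_idx = np.concatenate(
            [blocks[b] for b in range(n_blocks) if b not in train_blocks]
        )
        yield train_idx, test_idx


def overfitting_probability(strategy_returns):
    """strategy_returns: (n_periods, n_strategies) array of realized returns.
    Returns the fraction of splits in which the in-sample best strategy ends
    up in the bottom half of the out-of-sample ranking (a PBO-style proxy
    for overfitting, not the paper's exact measure)."""
    n_strategies = strategy_returns.shape[1]
    splits = list(cscv_splits(strategy_returns.shape[0]))
    below_median = 0
    for train_idx, test_idx in splits:
        is_scores = [sharpe_ratio(strategy_returns[train_idx, j]) for j in range(n_strategies)]
        oos_scores = [sharpe_ratio(strategy_returns[test_idx, j]) for j in range(n_strategies)]
        best = int(np.argmax(is_scores))
        oos_rank = np.argsort(np.argsort(oos_scores))[best]  # 0 = worst out-of-sample
        if oos_rank < n_strategies / 2:
            below_median += 1
    return below_median / len(splits)
```

On purely random data (for example, `overfitting_probability(np.random.default_rng(0).normal(size=(2000, 20)))`), the statistic sits near one half, which is precisely the point: a strategy selected on in-sample performance alone carries no guarantee of out-of-sample quality.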

The second paper in the issue, “On the mitigation of valuation uncertainty risk: the importance of a robust proxy for the ‘cumulative state of market incompleteness’” by Oghenovo Adewale Obrimah, addresses an important practical issue that I will describe broadly as “valuation risk”. In much research and in practical finance, the modeling of valuation risk places a great deal of blind faith in dividend discount models and their relatives. Practitioners and regulators are often taken in by their apparent forward-looking nature. Very little research addresses this prevalent form of risk, so we warmly welcome Obrimah’s paper. It is a challenging theoretical read, but the gist of the paper is that valuation will depend on the order in which assets enter a market. Obrimah also looks for mechanisms that will reduce valuation risk in this context. We hope to receive more papers in this area.

In many spheres of risk management, there is an ongoing debate about whether humans are more effective than software. Examples from diverse areas include credit card risk, mortgage default and wine vintage quality. Generalizing rather recklessly, automated systems tend to do better. With those thoughts in mind I discuss the third and final paper, “A new automated model validation tool for financial institutions” by Lingling Fan, Alex Schneider and Mazin Joumaa, which presents a new automated validation tool for financial organizations to use with predictive models, based on the regulatory requirements of the Federal Reserve and the Office of the Comptroller of the Currency. This automated tool is designed to help validate linear and logistic regression models, and it automatically completes validation processes for seven areas: data sets; model algorithm assumptions; model coefficients and performance; model stability; backtesting; sensitivity testing; and stress testing. The software is packaged as a Python library and can validate models developed in any language, such as Python, R and the SAS language. Further, it can automatically generate a validation report as a portable document format (PDF) file while saving all the generated tables and charts in separate Excel and portable network graphics (PNG) files. With this automated tool, validators can standardize model validation procedures, improve efficiency and reduce human error. It can also be used during model development. The design of experiments to validate validators appears to be an area of ongoing research and product development.
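The editorial does not describe the tool at code level, so the following is purely an illustration of what automated checks for two of the seven areas (backtesting and model stability) might look like for a logistic regression probability model. All function names, thresholds and the report structure are hypothetical assumptions, not the authors’ API.

```python
import numpy as np
from dataclasses import dataclass, field


@dataclass
class ValidationReport:
    """Collects pass/fail results for each validation area."""
    results: dict = field(default_factory=dict)

    def add(self, area, passed, detail):
        self.results[area] = {"passed": passed, "detail": detail}


def validate_backtest(y_true, p_pred, n_bins=10, tolerance=0.05):
    """Simple calibration backtest: compare predicted and realized event
    rates within deciles of the predicted probability."""
    order = np.argsort(p_pred)
    bins = np.array_split(order, n_bins)
    gaps = [abs(p_pred[b].mean() - y_true[b].mean()) for b in bins]
    return max(gaps) <= tolerance, {"max_calibration_gap": float(max(gaps))}


def validate_stability(coefs_dev, coefs_recent, threshold=0.25):
    """Flag coefficients whose relative drift between the development
    sample and a recent re-estimation exceeds `threshold`."""
    drift = np.abs(coefs_recent - coefs_dev) / np.maximum(np.abs(coefs_dev), 1e-8)
    return bool(np.all(drift <= threshold)), {"max_relative_drift": float(drift.max())}


def run_validation(y_true, p_pred, coefs_dev, coefs_recent):
    """Assemble a report covering the two illustrative areas above."""
    report = ValidationReport()
    report.add("backtesting", *validate_backtest(np.asarray(y_true), np.asarray(p_pred)))
    report.add("model stability", *validate_stability(np.asarray(coefs_dev), np.asarray(coefs_recent)))
    return report
```

The appeal of packaging such checks as a library, as the paper does, is that every model passes through an identical battery of tests and the evidence (tables, charts, pass/fail flags) is generated mechanically rather than assembled by hand.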
