Three adjustments in calibrating models with neural networks

New research addresses fundamental issues with ANN approximation of pricing models

To calibrate a pricing model to market data, one needs to set up an optimisation problem in which the difference between model prices and market prices is minimised. Because this process has to be repeated frequently and is computationally expensive, more efficient solutions are in high demand.

Artificial neural network (ANN) approaches, given their natural suitability to solve optimisation problems, are seen as a promising way of cutting some corners and delivering model parameters much quicker than standard approaches. Quants have been exploring this method for some years.

Typically, the ANN calibration procedure is made up of two steps. First, the neural network is trained with market data to set its node weights. Second, the parameters of a standard pricing model, such as the Black-Scholes model, are calibrated using the trained network so that it reproduces the observed option prices. The speed improvement comes from the fact that the first step is performed only once, and its results can then be reused to calibrate the pricing model repeatedly, far more quickly than in the standard approach.
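A minimal sketch of how such a two-step procedure might look, assuming a toy one-parameter pricing model, synthetic quotes and a small PyTorch network; the model, data and architecture are illustrative stand-ins, not taken from any of the papers discussed here.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in "pricing model": price = f(theta, strike). In practice this would
# be, say, a Black-Scholes or Heston pricer; the exponential is purely illustrative.
def model_price(theta, strikes):
    return torch.exp(-theta * strikes)

# --- Step 1: train the network once on model-generated samples -------------
net = nn.Sequential(nn.Linear(2, 32), nn.Softplus(), nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-2)

thetas  = torch.rand(5000, 1) * 2.0          # sample the parameter space
strikes = torch.rand(5000, 1) * 2.0 + 0.5
targets = model_price(thetas, strikes)

for _ in range(500):
    opt.zero_grad()
    pred = net(torch.cat([thetas, strikes], dim=1))
    loss = ((pred - targets) ** 2).mean()
    loss.backward()
    opt.step()

# --- Step 2: calibrate theta to "market" quotes using the frozen network ---
for p in net.parameters():
    p.requires_grad_(False)

market_strikes = torch.tensor([[0.8], [1.0], [1.2]])
market_prices  = model_price(torch.tensor(0.7), market_strikes)  # synthetic quotes

theta = torch.tensor([1.5], requires_grad=True)
cal_opt = torch.optim.Adam([theta], lr=5e-2)
for _ in range(200):
    cal_opt.zero_grad()
    inputs = torch.cat([theta.expand(3, 1), market_strikes], dim=1)
    err = ((net(inputs) - market_prices) ** 2).mean()
    err.backward()
    cal_opt.step()

print("calibrated theta:", theta.item())     # should drift towards 0.7
```

Step one is the expensive part; step two can be rerun cheaply whenever the quotes change, which is where the speed-up comes from.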

However, this solution is not optimal and can be slimmed down to be more efficient, believes Andrey Itkin, an adjunct professor at NYU’s department of risk and financial engineering and senior research associate at Bank of America.

In his paper, Deep learning calibration of option pricing models: some pitfalls and solutions, Itkin aims to improve on the existing ANN calibration methods on multiple fronts, namely efficiency, applicability and completeness.

One suggestion is to eliminate the second phase, the calibration of the pricing model, altogether; he considers it unnecessary because the work can be done as part of the first phase. “When we trained the neural network, we had actually already done this job – we had already found the optimisation solution,” he says.

Itkin achieves this by using the inverse map approach, in which the neural network learns the mapping from option prices back to the model parameters, so the calibrated parameters can be read directly from the network’s output.
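A rough sketch of the idea, again with a toy one-parameter model and an architecture chosen only for illustration: the network is trained on model-generated pairs of prices and parameters, so that, once trained, feeding it a vector of market prices returns the parameter in a single forward pass.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
strikes = torch.linspace(0.5, 2.5, 11)       # fixed strike grid

def model_price(theta, strikes):             # toy pricing model
    return torch.exp(-theta * strikes)

# Generate synthetic (prices -> parameter) training pairs.
thetas = torch.rand(5000, 1) * 2.0
prices = model_price(thetas, strikes)        # shape (5000, 11)

inverse_net = nn.Sequential(nn.Linear(11, 64), nn.Softplus(), nn.Linear(64, 1))
opt = torch.optim.Adam(inverse_net.parameters(), lr=1e-3)
for _ in range(1000):
    opt.zero_grad()
    loss = ((inverse_net(prices) - thetas) ** 2).mean()
    loss.backward()
    opt.step()

# "Calibration" to a new price vector is now a single evaluation.
market = model_price(torch.tensor(0.7), strikes).unsqueeze(0)
print("recovered theta:", inverse_net(market).item())
```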

The computational cost and time required to perform the two-step calibration are known problems, and attempts to eliminate the second step have been proposed before, but the results were not deemed satisfactorily stable.

A drawback of the artificial neural network approach to calibration is that prices obtained this way are not guaranteed to be arbitrage-free. This is because the ANN is merely an approximator, and its approximations to the model prices, even when very close to the true prices, can admit arbitrage opportunities, especially for out-of-sample data.

To avoid arbitrage, Itkin suggests switching from unconstrained optimisation to one with soft constraints. These constraints take the form of a penalty function incorporated into the objective function of the optimisation, which keeps prices within the given boundaries in almost all cases. “That practically eliminates the arbitrage from the in-sample set,” Itkin says.
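As an illustration of what such a soft constraint could look like, the sketch below penalises violations of two standard static no-arbitrage conditions, monotonicity and convexity of call prices in strike; the weighting and the exact functional form are assumptions for illustration, not the paper’s specification.

```python
import torch

def arbitrage_penalty(call_prices, weight=10.0):
    """call_prices: tensor of shape (batch, n_strikes) on an equally spaced
    strike grid. Returns a scalar penalising static arbitrage in strike:
    call prices should be non-increasing and convex in strike."""
    dC  = call_prices[:, 1:] - call_prices[:, :-1]    # should be <= 0
    d2C = dC[:, 1:] - dC[:, :-1]                      # should be >= 0
    mono_violation   = torch.relu(dC).pow(2).sum(dim=1)
    convex_violation = torch.relu(-d2C).pow(2).sum(dim=1)
    return weight * (mono_violation + convex_violation).mean()

# During training the penalty is simply added to the pricing error, e.g.
#   loss = ((pred_prices - target_prices) ** 2).mean() + arbitrage_penalty(pred_prices)
```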

A further improvement concerns the calculation of Greeks, the sensitivities of option prices to variables such as the underlying price, volatility or interest rates. To generate Greeks of good quality, one needs to pay attention to the neural network’s activation function, the function applied at each node to transform its inputs into an output.

Itkin reviews some common types of activation functions, highlighting the pros and cons of each of them. He stresses, however, that they need to be continuous and twice differentiable in order to be suitable for the calculation of second-order sensitivities, such as gamma, and proposes a modified activation function so the whole ANN is twice differentiable.
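The sketch below illustrates the point with softplus, a smooth activation chosen here only as an example (it is not the modified function Itkin proposes): delta and gamma are taken as first and second derivatives of the network output with respect to spot via automatic differentiation, which breaks down at second order for a piecewise-linear activation such as ReLU.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Untrained toy network mapping spot to a price; softplus keeps it smooth.
net = nn.Sequential(nn.Linear(1, 32), nn.Softplus(), nn.Linear(32, 1))

spot = torch.tensor([[1.0]], requires_grad=True)
price = net(spot).sum()

# First derivative (delta); create_graph=True keeps the graph for gamma.
delta, = torch.autograd.grad(price, spot, create_graph=True)
# Second derivative (gamma) requires the activation to be twice differentiable.
gamma, = torch.autograd.grad(delta.sum(), spot)

print("delta:", delta.item(), "gamma:", gamma.item())
# With ReLU in place of Softplus, gamma would be zero almost everywhere.
```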

Currently, some software packages can provide estimates of Greeks. But because they normally use activation functions that are not twice differentiable, the Greeks obtained that way might be of poor quality, especially second-order ones.

The calibration of pricing models is a novel and promising application of ANNs. A paper released last year by Blanka Horvath, Aitor Muguruza and Mehdi Tomas applied image recognition deep learning techniques to the problem, earning the trio Risk.net’s inaugural Rising Star in Quant Finance award.

Itkin himself intends to advance his research in the field. He and fellow academic Peter Carr are looking to build on a model they proposed in a previous paper by calibrating it to market data using the approach described above.
