Regularisation

Terry Benzschawel

Regularisation refers to techniques used to calibrate machine learning models so that they minimise the error function without overfitting or underfitting. Overfitting is said to occur when a model learns both the detail and the noise in the training data, which harms the model's performance on new data. Regularisation techniques adjust the learning algorithm during training so that the model generalises better: regularisation is “any modification we make to a learning algorithm that is intended to reduce its generalization error, but not its training error” (Goodfellow et al, 2016).
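This trade-off is commonly written as a penalised training objective. As a minimal sketch in the notation of Goodfellow et al (2016), with $J$ the unregularised loss, $\theta$ the model parameters, $\Omega$ a penalty on those parameters and $\alpha \geq 0$ a hyperparameter weighting the penalty:

\tilde{J}(\theta; X, y) = J(\theta; X, y) + \alpha \, \Omega(\theta)

Choosing $\Omega(\theta) = \lVert w \rVert_1$ gives L1 regularisation, while $\Omega(\theta) = \tfrac{1}{2}\lVert w \rVert_2^2$ gives L2 regularisation (weight decay); a larger $\alpha$ pushes the fit towards simpler parameter configurations at the cost of a higher training error.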

7.1 REGULARISATION, OPTIMISATION AND DEEP LEARNING

Consider a neural network that is overfitting the training data, as shown in Figure 7.1. This model fits the training data well but will not generalise to cases not used for training. Neural networks are notoriously difficult to optimise: we can neither compute globally optimal weight parameters in closed form nor guarantee that training converges to a global optimum. We must therefore seek acceptable solutions while limiting overfitting to the training data.
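To make the symptom concrete, the short Python sketch below (illustrative only: the sine target, noise level and degree-9 polynomial are hypothetical choices, not taken from the chapter) fits a model flexible enough to pass close to every training point; the training error is near zero while the error on new data is far larger, which is the pattern shown in Figure 7.1.

import numpy as np

rng = np.random.default_rng(0)

# Ten noisy training points drawn from a smooth underlying function
x_train = np.linspace(0.0, 1.0, 10)
y_train = np.sin(2.0 * np.pi * x_train) + rng.normal(scale=0.2, size=10)

# A dense grid of new, noiseless inputs from the same function
x_test = np.linspace(0.0, 1.0, 200)
y_test = np.sin(2.0 * np.pi * x_test)

# A degree-9 polynomial has enough freedom to pass through all 10 training points
coeffs = np.polyfit(x_train, y_train, deg=9)

train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)

# Training error is (near) zero; error on unseen inputs is much larger
print(f"training MSE: {train_mse:.6f}, new-data MSE: {test_mse:.6f}")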

Popular regularisation methods include L1 and L2 regularisation, dropout, data augmentation and generative methods.
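As an illustrative sketch only (it assumes the PyTorch library; the layer sizes, penalty weights and random data are hypothetical, not taken from the chapter), the snippet below shows how two of these techniques are typically attached to a small network: dropout as a layer inside the model and L2 regularisation as a weight-decay term in the optimiser, with an explicit L1 penalty added to the loss for comparison.

import torch
import torch.nn as nn

# Small feed-forward network with a dropout layer between the hidden and output layers
model = nn.Sequential(
    nn.Linear(10, 32),
    nn.ReLU(),
    nn.Dropout(p=0.5),   # dropout: randomly zeroes half of the activations during training
    nn.Linear(32, 1),
)

criterion = nn.MSELoss()

# weight_decay applies an L2 penalty on the weights inside the optimiser update
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, weight_decay=1e-4)

# Toy batch: 64 samples with 10 features and a scalar target (illustrative only)
x = torch.randn(64, 10)
y = torch.randn(64, 1)

model.train()            # training mode: dropout is active
loss = criterion(model(x), y)

# An explicit L1 penalty can also be added to the loss directly
l1_lambda = 1e-4
loss = loss + l1_lambda * sum(p.abs().sum() for p in model.parameters())

optimizer.zero_grad()
loss.backward()
optimizer.step()

In practice the penalty weights (weight_decay and l1_lambda above) are hyperparameters tuned on a validation set, and dropout is switched off at inference time by calling model.eval().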
