How to Avoid Overfitting in Deep Learning Neural Networks?


I will present five techniques to stop overfitting while training neural networks.

1. Simplifying The Model

The first step in handling overfitting is to decrease the complexity of the model. We can simply remove layers or reduce the number of neurons to make the network smaller. While doing this, it is important to calculate the input and output dimensions of the various layers involved in the neural network. There is no general rule on how much to remove or how large your network should be, but if your neural network is overfitting, try making it smaller.
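As a rough illustration, here is a minimal Keras sketch of shrinking a network by removing a layer and reducing the number of neurons per layer. The layer sizes and the 100-feature input are arbitrary choices made only to show the idea, not recommended values.

```python
from tensorflow.keras import layers, models

# A larger network that is prone to overfitting a small dataset
big_model = models.Sequential([
    layers.Input(shape=(100,)),           # 100 input features (arbitrary)
    layers.Dense(512, activation="relu"),
    layers.Dense(512, activation="relu"),
    layers.Dense(256, activation="relu"),
    layers.Dense(1, activation="sigmoid"),
])

# A simpler alternative: one layer removed, fewer neurons per layer
small_model = models.Sequential([
    layers.Input(shape=(100,)),
    layers.Dense(64, activation="relu"),
    layers.Dense(32, activation="relu"),
    layers.Dense(1, activation="sigmoid"),
])
```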

2. Early Stopping

Early stopping is a form of regularization used while training a model with an iterative method, such as gradient descent. Since neural networks are typically trained with gradient descent, early stopping is applicable to almost any problem. Each iteration updates the model to better fit the training data. Up to a point, this also improves the model's performance on the test set. Past that point, however, improving the fit to the training data comes at the cost of increased generalization error. Early stopping rules provide guidance on how many iterations can be run before the model begins to overfit.

[Figure: Overfitting in deep learning neural networks]

This is shown in the diagram above. After some number of iterations, the test error begins to increase while the training error is still decreasing, which means the model has started to overfit. To combat this, we stop training at the point where this starts to happen.
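In Keras, for example, early stopping can be wired in as a callback. Below is a minimal sketch; the patience of 5 epochs is just an illustrative choice, and model, x_train, y_train, x_val, and y_val are assumed to be defined elsewhere.

```python
from tensorflow.keras.callbacks import EarlyStopping

# Stop once the validation loss has not improved for 5 consecutive epochs,
# and restore the weights from the best epoch seen so far.
early_stop = EarlyStopping(monitor="val_loss", patience=5,
                           restore_best_weights=True)

# model, x_train, y_train, x_val, y_val are assumed to be defined already
model.fit(x_train, y_train,
          validation_data=(x_val, y_val),
          epochs=100,
          callbacks=[early_stop])
```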

3. Use Data Augmentation

In the case of neural networks, data augmentation simply means increasing the size of the dataset, that is, increasing the number of images it contains. Some of the popular image augmentation techniques are flipping, translation, rotation, scaling, changing brightness, adding noise, and so on. For a more complete reference, feel free to check out albumentations and imgaug.

[Figure: Data augmentation examples]

This is shown in the diagram above. With data augmentation, many similar images can be generated from each original. This increases the size of the dataset and thus reduces overfitting: as we add more data, the model is no longer able to overfit every sample and is forced to generalize.
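As a sketch of what this can look like in Keras, ImageDataGenerator can apply random flips, shifts, rotations, zooms, and brightness changes on the fly during training. The specific ranges below are illustrative rather than tuned values, and model, x_train, and y_train are assumed to be defined elsewhere.

```python
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Generate randomly transformed copies of the training images on the fly.
# x_train (image array) and y_train (labels) are assumed to be defined.
datagen = ImageDataGenerator(
    rotation_range=20,            # random rotation up to 20 degrees
    width_shift_range=0.1,        # horizontal translation
    height_shift_range=0.1,       # vertical translation
    zoom_range=0.1,               # random scaling
    brightness_range=(0.8, 1.2),  # random brightness changes
    horizontal_flip=True,         # random horizontal flips
)

model.fit(datagen.flow(x_train, y_train, batch_size=32), epochs=50)
```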

4. Use Regularization

Regularization is a technique to reduce the complexity of the model. It does so by adding a penalty term to the loss function. The most common techniques are known as L1 and L2 regularization:

The L1 penalty aims to minimize the absolute value of the weights, as shown in the formula below.

Loss = Error(y, ŷ) + λ Σ |wᵢ|

The L2 penalty aims to minimize the squared magnitude of the weights, as shown in the formula below.

Loss = Error(y, ŷ) + λ Σ wᵢ²

The table below compares the two regularization techniques.

[Table: comparison of L1 and L2 regularization]

So which technique is better at avoiding overfitting? The answer is: it depends. If the data is too complex to be modeled accurately, L2 is usually the better choice, as it is able to learn the inherent patterns present in the data. L1 works better if the data is simple enough to be modeled accurately. For most of the computer vision problems I have encountered, L2 regularization almost always gives better results. L1, however, has the added advantage of being robust to outliers. So the correct choice of regularization depends on the problem we are trying to solve.
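In Keras, an L1 or L2 penalty can be attached to individual layers through the kernel_regularizer argument. Here is a minimal sketch; the penalty factor of 0.01 and the layer sizes are arbitrary examples, not recommended values.

```python
from tensorflow.keras import layers, models, regularizers

model = models.Sequential([
    layers.Input(shape=(100,)),
    # L2 penalty: adds 0.01 * sum(w ** 2) of this layer's weights to the loss
    layers.Dense(64, activation="relu",
                 kernel_regularizer=regularizers.l2(0.01)),
    # L1 penalty: adds 0.01 * sum(|w|) of this layer's weights to the loss
    layers.Dense(32, activation="relu",
                 kernel_regularizer=regularizers.l1(0.01)),
    layers.Dense(1, activation="sigmoid"),
])
```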

5. Use Dropouts

Dropout is a regularization technique that prevents neural networks from overfitting. Regularization methods like L1 and L2 reduce overfitting by modifying the cost function. Dropout, on the other hand, modifies the network itself: it randomly drops neurons from the neural network during training in each iteration. Because we drop a different set of neurons each time, it is like training many different neural networks. The different networks will overfit in different ways, so the net effect of dropout is to reduce overfitting.

[Figure: Dropout randomly removing neurons during training]

This is shown in the diagram above. Dropout randomly removes neurons while the neural network is being trained. This technique has proven to reduce overfitting on a variety of problems involving image classification, image segmentation, word embeddings, semantic matching, and so on.
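In Keras, dropout is added as its own layer between existing layers; each Dropout layer randomly zeroes the given fraction of its inputs during training only. Below is a minimal sketch; the 0.5 rate is a common default rather than a tuned value, and the layer sizes are arbitrary.

```python
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(100,)),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.5),   # randomly zero 50% of activations during training
    layers.Dense(64, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(1, activation="sigmoid"),
])
```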

Conclusion

As a quick recap, I explained what overfitting is and why it is a common problem in neural networks. I then presented five of the most common ways to prevent overfitting while training neural networks: simplifying the model, early stopping, data augmentation, regularization, and dropouts.
