Question 64 - MLS-C01 discussion


A Machine Learning Specialist built an image classification deep learning model. However, the Specialist ran into an overfitting problem: the training and testing accuracies were 99% and 75%, respectively.

How should the Specialist address this issue and what is the reason behind it?

A.
The learning rate should be increased because the optimization process was trapped at a local minimum.
B.
The dropout rate at the flatten layer should be increased because the model is not generalized enough.
C.
The dimensionality of the dense layer next to the flatten layer should be increased because the model is not complex enough.
D.
The epoch number should be increased because the optimization process was terminated before it reached the global minimum.
Suggested answer: B

Explanation:

The best way to address this overfitting problem is to increase the dropout rate at the flatten layer, because the model is not generalized enough. Dropout is a regularization technique that randomly deactivates a fraction of units during training, reducing co-adaptation among features and preventing overfitting. The flatten layer converts the output of the convolutional layers into a one-dimensional vector that can be fed into the dense layers. Increasing the dropout rate at the flatten layer means that more of the convolutional features are ignored on each training pass, forcing the model to learn more robust and generalizable representations from the remaining features.
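As a rough illustration (not the Specialist's actual architecture), the sketch below shows where such a dropout layer sits in a small Keras image classifier. The input shape, filter counts, class count, and the 0.5 dropout rate are all illustrative assumptions; raising the rate passed to Dropout is the tuning knob the suggested answer refers to.

```python
# Minimal sketch: dropout applied immediately after the flatten layer.
# Input shape, layer sizes, class count, and dropout rate are assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(64, 64, 3)),              # assumed image size
    layers.Conv2D(32, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dropout(0.5),                          # increased dropout at the flatten layer
    layers.Dense(128, activation="relu"),
    layers.Dense(10, activation="softmax"),       # assumed 10 classes
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```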

The other options are not correct for this scenario because:

Increasing the learning rate would not help with the overfitting problem; it would make the optimization process more unstable and prone to overshooting minima. A high learning rate can also cause the model to diverge or oscillate around the optimal solution, resulting in poor accuracy. Moreover, a 99% training accuracy shows the optimizer is not trapped at a local minimum in the first place.

Increasing the dimensionality of the dense layer next to the flatten layer would not help with the overfitting problem; it would make the model more complex and increase the number of parameters to be learned. A more complex model can fit the training data better, but it can also memorize noise and irrelevant details in the data, making the overfitting worse rather than better.

Increasing the epoch number would not help with the overfitting problem; training longer makes the model even more likely to overfit the training data. The 99% training accuracy shows that optimization has already fit the training set well, so additional epochs would only over-fit the training data further and erode the model's ability to generalize to new data.
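Relatedly, the usual safeguard on the epoch count is early stopping rather than a larger epoch number: training halts once held-out accuracy stops improving. A minimal Keras sketch, assuming a compiled model and validation data exist (the monitored metric and patience value are illustrative assumptions):

```python
# Hedged sketch of early stopping: cap training when validation accuracy
# plateaus, instead of raising the epoch count.
import tensorflow as tf

early_stop = tf.keras.callbacks.EarlyStopping(
    monitor="val_accuracy",      # watch held-out accuracy, not training accuracy
    patience=5,                  # stop after 5 epochs without improvement
    restore_best_weights=True,   # roll back to the best-performing weights
)

# Assuming x_train/y_train and x_val/y_val are defined:
# model.fit(x_train, y_train,
#           validation_data=(x_val, y_val),
#           epochs=100,              # upper bound; early stopping ends sooner
#           callbacks=[early_stop])
```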

References:

Dropout: A Simple Way to Prevent Neural Networks from Overfitting

How to Reduce Overfitting With Dropout Regularization in Keras

How to Control the Stability of Training Neural Networks With the Learning Rate

How to Choose the Number of Hidden Layers and Nodes in a Feedforward Neural Network?

How to decide the optimal number of epochs to train a neural network?

asked 16/09/2024
Kr Sk