Huawei H13-311_V3.5 Practice Test - Questions Answers, Page 4
List of questions
The derivative of the Rectified Linear Unit (ReLU) activation function in the positive interval is always:
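For reference, a minimal NumPy sketch showing that ReLU's derivative is constant at 1 on the positive interval (the function and derivative definitions here are standard, not taken from the exam material):

```python
import numpy as np

def relu(x):
    """Rectified Linear Unit: max(0, x)."""
    return np.maximum(0.0, x)

def relu_derivative(x):
    """Derivative of ReLU: 1 for x > 0, 0 for x < 0 (undefined at exactly 0)."""
    return np.where(x > 0, 1.0, 0.0)

# In the positive interval the derivative is always 1.
xs = np.array([0.5, 1.0, 10.0, 100.0])
print(relu_derivative(xs))  # [1. 1. 1. 1.]
```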
In a fully-connected structure, a hidden layer with 1000 neurons is used to process an image with a resolution of 100 x 100. Which of the following is the correct number of parameters?
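The parameter count can be checked by arithmetic: a fully connected layer has one weight per input-neuron pair, plus (depending on whether the question counts them) one bias per neuron:

```python
# Parameter count for a fully connected layer mapping a flattened
# 100 x 100 image (10,000 inputs) to 1,000 hidden neurons.
inputs = 100 * 100           # flattened pixel inputs
neurons = 1000
weights = inputs * neurons   # one weight per input-neuron pair
biases = neurons             # one bias per neuron, if biases are counted
print(weights)               # 10000000
print(weights + biases)      # 10001000
```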
Global gradient descent, stochastic gradient descent, and batch gradient descent are all variants of the gradient descent algorithm. Which of the following statements about these algorithms is true?
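The three variants differ only in how many samples feed each update. A toy sketch on one-parameter linear regression (synthetic data and hyperparameters are illustrative assumptions, not from the exam):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 1))
y = 3.0 * X[:, 0] + rng.normal(scale=0.1, size=100)  # true slope = 3.0

def gradient(w, xb, yb):
    # Gradient of mean squared error for the model y ~ w * x.
    return 2.0 * np.mean((w * xb - yb) * xb)

def descend(w, batch_size, lr=0.1, epochs=50):
    n = len(X)
    for _ in range(epochs):
        idx = rng.permutation(n)
        for start in range(0, n, batch_size):
            b = idx[start:start + batch_size]
            w -= lr * gradient(w, X[b, 0], y[b])
    return w

w_global = descend(0.0, batch_size=len(X))  # global (full-batch) GD: all samples per update
w_sgd    = descend(0.0, batch_size=1)       # stochastic GD: one sample per update
w_mini   = descend(0.0, batch_size=10)      # mini-batch GD: a small batch per update
print(w_global, w_sgd, w_mini)              # all converge near 3.0
```

All three recover the slope; they trade off update noise against per-update cost.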
The sigmoid, tanh, and softsign activation functions cannot avoid the vanishing gradient problem when the network is deep.
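The reason is that these saturating activations have derivatives bounded below 1, and backpropagation multiplies one such factor per layer. A quick demonstration with the sigmoid (whose derivative peaks at 0.25):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_prime(x):
    s = sigmoid(x)
    return s * (1.0 - s)  # maximum value is 0.25, attained at x = 0

# Backprop multiplies one derivative factor per layer; even in the
# sigmoid's best case (0.25) the product shrinks geometrically.
depth = 20
print(sigmoid_prime(0.0))   # 0.25
print(0.25 ** depth)        # ~9.1e-13 — the gradient has vanished
```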
Single-layer perceptrons and logistic regression are linear classifiers that can only process linearly separable data.
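The classic illustration of this limitation is XOR, which no linear boundary can separate. A minimal perceptron sketch (the training loop is a standard perceptron rule, written here for illustration):

```python
import numpy as np

# XOR is not linearly separable: no single line splits the two classes.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0])

def train_perceptron(X, y, epochs=100):
    """Classic perceptron learning rule with a threshold activation."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            pred = 1 if xi @ w + b > 0 else 0
            w += (yi - pred) * xi
            b += (yi - pred)
    return w, b

w, b = train_perceptron(X, y)
preds = [1 if xi @ w + b > 0 else 0 for xi in X]
accuracy = np.mean(np.array(preds) == y)
print(preds, "accuracy:", accuracy)  # never reaches 1.0 on XOR
```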
Convolutional neural networks (CNNs) cannot be used to process text data.
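The statement above is in fact false: 1-D convolutions over word embeddings are a standard way to apply CNNs to text. A NumPy sketch of that building block (the random embeddings and filter shapes here are illustrative placeholders):

```python
import numpy as np

# 1-D convolution over word embeddings: the core operation of text CNNs.
rng = np.random.default_rng(0)
seq_len, embed_dim, kernel_size, n_filters = 7, 8, 3, 4
tokens = rng.normal(size=(seq_len, embed_dim))        # one embedded sentence
filters = rng.normal(size=(n_filters, kernel_size, embed_dim))

# Slide each filter along the token sequence (valid padding).
conv = np.array([
    [np.sum(tokens[i:i + kernel_size] * f)
     for i in range(seq_len - kernel_size + 1)]
    for f in filters
])
features = conv.max(axis=1)  # max-over-time pooling -> one feature per filter
print(conv.shape, features.shape)  # (4, 5) (4,)
```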
Which of the following activation functions may cause the vanishing gradient problem?
Which of the following are use cases of generative adversarial networks?
DRAG DROP
Match the input and output of a generative adversarial network (GAN).
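The matching is: the generator takes a random noise vector and outputs a generated (fake) sample; the discriminator takes a real or fake sample and outputs the probability that it is real. A toy untrained sketch (random weight matrices stand in for the two networks):

```python
import numpy as np

# GAN input/output matching (untrained toy networks with random weights):
#   generator:     noise vector        -> generated (fake) sample
#   discriminator: real or fake sample -> probability the sample is real
rng = np.random.default_rng(0)
noise_dim, sample_dim = 16, 32
G = rng.normal(size=(noise_dim, sample_dim))  # generator "weights"
D = rng.normal(size=sample_dim)               # discriminator "weights"

z = rng.normal(size=noise_dim)                # generator input: random noise
fake_sample = np.tanh(z @ G)                  # generator output: fake sample
p_real = 1.0 / (1.0 + np.exp(-(fake_sample @ D)))  # discriminator output
print(fake_sample.shape, p_real)              # (32,) and a value in (0, 1)
```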