
Huawei H13-311_V3.5 Practice Test - Questions Answers, Page 5


Question 41 (DRAG DROP)

Correctly connect the layers in the architecture of an Ascend AI Processor.

(This drag-and-drop question and its answer are image-based and are not reproduced here.)

Question 42

Which of the following statements is false about feedforward neural networks?

A. A unidirectional multi-layer structure is adopted. Each layer includes several neurons, and those in the same layer are not connected to each other. Only unidirectional inter-layer information transmission is supported.
B. Nodes at each hidden layer represent neurons that provide the computing function.
C. Input nodes do not provide the computing function and are used to represent only the element values of an input vector.
D. Each neuron is connected to all neurons at the previous layer.
Suggested answer: D

Explanation:

This statement is false because not all feedforward neural networks follow this architecture. While fully-connected layers do have this type of connectivity (where each neuron is connected to all neurons in the previous layer), feedforward networks can include layers like convolutional layers, where not every neuron is connected to all previous neurons. Convolutional layers, common in convolutional neural networks (CNNs), only connect to a local region of the input, preserving spatial information.
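To make the connectivity difference concrete, the arithmetic below (with hypothetical layer and kernel sizes) counts how many inputs a single neuron sees in a fully-connected layer versus a convolutional layer:

```python
# Illustrative comparison (hypothetical sizes): input connections per neuron.
prev_layer_neurons = 28 * 28          # e.g. a flattened 28x28 input
fc_connections = prev_layer_neurons   # fully connected: one weight per previous neuron

kernel_size = 3                       # a 3x3 convolution kernel
in_channels = 1
conv_connections = kernel_size * kernel_size * in_channels  # local receptive field only

print(fc_connections)    # 784
print(conv_connections)  # 9
```

The convolutional neuron connects only to a small local patch, which is why statement D does not hold for feedforward networks in general.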

Question 43

Which of the following are feedforward neural networks?

A. Fully-connected neural networks
B. Recurrent neural networks
C. Boltzmann machines
D. Convolutional neural networks
Suggested answer: A, D

Explanation:

Feedforward neural networks (FNNs) are networks where information moves in only one direction, forward, from the input nodes through the hidden layers to the output nodes. Both fully-connected neural networks (where each neuron in one layer connects to every neuron in the next) and convolutional neural networks (which apply kernels over local regions of the input, and are commonly used for image data) are examples of feedforward networks.

However, recurrent neural networks (RNNs) and Boltzmann machines are not feedforward networks. RNNs include loops where information can be fed back into previous layers, and Boltzmann machines involve undirected connections between units, making them a form of a stochastic network rather than a feedforward structure.

Question 44

The mean squared error (MSE) loss function cannot be used for classification problems.

A. TRUE
B. FALSE
Suggested answer: A

Explanation:

The mean squared error (MSE) loss function is primarily used for regression problems, where the goal is to minimize the difference between the predicted and actual continuous values. For classification problems, where the target output is categorical (e.g., binary or multi-class labels), loss functions like cross-entropy are more suitable, as they are designed to handle the probabilistic interpretation of outputs in classification tasks.

Using MSE for classification could lead to inefficient training because it doesn't capture the probabilistic relationships required for classification tasks.
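A small sketch in plain Python (toy values, not from the exam) of why cross-entropy suits classification better: it penalizes a confident wrong prediction far more sharply than MSE does.

```python
import math

# Toy binary example: true label y = 1, predicted probability p.
def mse(y, p):
    return (y - p) ** 2

def cross_entropy(y, p):
    return -(y * math.log(p) + (1 - y) * math.log(1 - p))

y = 1.0
for p in (0.9, 0.5, 0.1):
    print(p, round(mse(y, p), 3), round(cross_entropy(y, p), 3))
```

At p = 0.1 (a confident wrong prediction), MSE is capped at 0.81 while cross-entropy is about 2.30 and grows without bound as p approaches 0, producing much stronger gradients for misclassified samples.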

Question 45

Which of the following statements is false about gradient descent algorithms?

A. Each time the global gradient updates its weight, all training samples need to be calculated.
B. When GPUs are used for parallel computing, the mini-batch gradient descent (MBGD) takes less time than the stochastic gradient descent (SGD) to complete an epoch.
C. The global gradient descent is relatively stable, which helps the model converge to the global extremum.
D. When there are too many samples and GPUs are not used for parallel computing, the convergence process of the global gradient algorithm is time-consuming.
Suggested answer: B

Explanation:

The statement that mini-batch gradient descent (MBGD) takes less time than stochastic gradient descent (SGD) to complete an epoch when GPUs are used for parallel computing is incorrect. Here's why:

Stochastic Gradient Descent (SGD) updates the weights after each training sample, which can lead to faster updates but more noise in the gradient steps. It completes an epoch after processing all samples one by one.

Mini-batch Gradient Descent (MBGD) processes small batches of data at a time, updating the weights after each batch. While MBGD leverages the computational power of GPUs effectively for parallelization, the comparison made in this question is not about overall computation speed, but about completing an epoch.

MBGD does not necessarily complete an epoch faster than SGD, as MBGD processes multiple samples in each batch, meaning fewer updates per epoch compared to SGD, where weights are updated after every individual sample.

Therefore, B is the false statement: MBGD does not always take less time than SGD to complete an epoch, even when GPUs are used for parallelization.
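The difference in weight updates per epoch can be counted directly; the dataset and batch sizes below are hypothetical:

```python
# Weight updates per epoch for each gradient descent variant.
n_samples = 10_000
batch_size = 32

sgd_updates_per_epoch = n_samples                     # SGD: one update per sample
mbgd_updates_per_epoch = -(-n_samples // batch_size)  # MBGD: one update per mini-batch (ceil)
bgd_updates_per_epoch = 1                             # global (full-batch): one update per epoch

print(sgd_updates_per_epoch, mbgd_updates_per_epoch, bgd_updates_per_epoch)
# 10000 313 1
```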

HCIA-AI — AI Development Framework: Discussion of gradient descent algorithms and their efficiency on different hardware architectures such as GPUs.

Question 46

All kernels of the same convolutional layer in a convolutional neural network share a weight.

A. TRUE
B. FALSE
Suggested answer: B

Explanation:

In a convolutional neural network (CNN), each kernel (also called a filter) in the same convolutional layer does not share weights with other kernels. Each kernel is independent and learns different weights during training to detect different features in the input data. For instance, one kernel might learn to detect edges, while another might detect textures.

However, the same kernel's weights are shared across all spatial positions it moves across the input feature map. This concept of weight sharing is what makes CNNs efficient and well-suited for tasks like image recognition.

Thus, the statement that all kernels share weights is false.
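The independence of kernels shows up directly in a convolutional layer's parameter count; in this sketch (hypothetical channel and kernel sizes), each output kernel carries its own weights and bias:

```python
# Each of the out_channels kernels has its OWN weights (no sharing between
# kernels), but each kernel's weights are reused at every spatial position.
in_channels, out_channels, k = 3, 16, 3

params_per_kernel = in_channels * k * k + 1        # weights + one bias
total_params = out_channels * params_per_kernel    # independent weights per kernel

print(params_per_kernel, total_params)  # 28 448
```

If all 16 kernels truly shared one set of weights, the layer would have only 28 parameters instead of 448, and could detect only a single feature.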

HCIA-AI — Deep Learning Overview: Detailed description of CNNs, focusing on kernel operations and weight sharing mechanisms within a single kernel, but not across different kernels.

Question 47

The core of the MindSpore training data processing engine is to efficiently and flexibly convert training samples (datasets) to MindRecord and provide them to the training network for training.

A. TRUE
B. FALSE
Suggested answer: A

Explanation:

MindSpore, Huawei's AI framework, includes a data processing engine designed to efficiently handle large datasets during model training. The core feature of this engine is the ability to convert training samples into a format called MindRecord, which optimizes data input and output processes for training. This format ensures that the data pipeline is fast and flexible, providing data efficiently to the training network.

The statement is true because one of MindSpore's core functionalities is to preprocess data and optimize its flow into the neural network training pipeline using the MindRecord format.

HCIA-AI — Introduction to Huawei AI Platforms: Covers MindSpore's architecture, including its data processing engine and the use of the MindRecord format for efficient data management.

Question 48

When using code to construct a neural network, MindSpore can inherit the Cell class and rewrite the __init__ and construct methods.

A. TRUE
B. FALSE
Suggested answer: A

Explanation:

In MindSpore, the neural network structure is defined by inheriting the Cell class, which represents a computational node or a layer in the network. Users can customize the network by overriding the __init__ method (for initializing layers) and the construct method (for defining the forward pass of the network). This modular design allows for easy and flexible neural network construction.

Thus, the statement is true because MindSpore's framework allows developers to build neural networks by extending the Cell class and defining custom behavior through the __init__ and construct methods.

HCIA-AI — AI Development Framework: Detailed coverage of building neural networks in MindSpore, including how to inherit from the Cell class and rewrite key methods for custom network architecture.

Question 49

Which of the following is NOT a key feature that enables all-scenario deployment and collaboration for MindSpore?

A. Data and computing graphs are transmitted to Ascend AI Processors.
B. Federal meta-learning enables real-time, coordinated model updates between different devices, and across the device and cloud.
C. Unified model IR delivers a consistent deployment experience.
D. Graph optimization based on a software-hardware synergy shields the differences between scenarios.
Suggested answer: B

Explanation:

While MindSpore supports all-scenario deployment with features like data and computing graph transmission to Ascend AI processors, unified model IR for consistent deployment, and graph optimization based on software-hardware synergy, federal meta-learning is not explicitly a core feature of MindSpore's deployment strategy. Federal meta-learning refers to a distributed learning paradigm, but MindSpore focuses more on efficient computing and model optimization across different environments.

Question 50

In MindSpore, the basic unit of the neural network is nn.Cell.

A. TRUE
B. FALSE
Suggested answer: A

Explanation:

In MindSpore, nn.Cell is the basic unit of a neural network. It represents layers, models, and other neural network components, encapsulating the forward logic of the network. It allows users to define, organize, and manage neural network layers in MindSpore, making it a core building block in neural network construction.

Total 60 questions