Huawei H13-311_V3.5 Practice Test - Questions Answers

When learning the MindSpore framework, John learns how to use callbacks and wants to use them for AI model training. In which of the following scenarios can John use callbacks?

A. Early stopping
B. Adjusting an activation function
C. Saving model parameters
D. Monitoring loss values during training

Suggested answer: A, C, D

Explanation:

In MindSpore, callbacks can be used in various scenarios such as:

Early stopping: To stop training when the performance plateaus or certain criteria are met.

Saving model parameters: To save checkpoints during or after training using the ModelCheckpoint callback.

Monitoring loss values: To keep track of loss values during training using LossMonitor, allowing interventions if necessary.

Adjusting the activation function is not a typical use case for callbacks, as activation functions are usually set during model definition.
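
A minimal sketch of how these callbacks fit together, assuming MindSpore 1.x-style imports and an existing `model` (a mindspore.train.Model) and `train_dataset`; StopAtLoss is a hypothetical user-defined early-stopping callback, not a built-in class:

from mindspore.train.callback import Callback, ModelCheckpoint, CheckpointConfig, LossMonitor

class StopAtLoss(Callback):
    """Hypothetical early-stopping callback: stop once the loss falls below a threshold."""
    def __init__(self, threshold=0.05):
        super().__init__()
        self.threshold = threshold

    def epoch_end(self, run_context):
        cb_params = run_context.original_args()
        loss = cb_params.net_outputs  # loss of the last step (a Tensor for simple networks)
        if float(loss.asnumpy()) < self.threshold:
            run_context.request_stop()  # early stopping

# Saving model parameters: write a checkpoint every 100 steps.
ckpt_cb = ModelCheckpoint(prefix="net", directory="./ckpt",
                          config=CheckpointConfig(save_checkpoint_steps=100))
# Monitoring loss values: print the loss every 100 steps.
loss_cb = LossMonitor(per_print_times=100)

# model.train(epoch=5, train_dataset=train_dataset,
#             callbacks=[StopAtLoss(0.05), ckpt_cb, loss_cb])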

When you use MindSpore to execute the following code, which of the following is the output?

x = Tensor(np.array([[1, 2], [3, 4]]), dtype.int32)

x.dtype

A. 2
B. mindspore.int32
C. 4
D. (2,2)

Suggested answer: B

Explanation:

In MindSpore, when you define a tensor using Tensor(np.array([[1, 2], [3, 4]]), dtype.int32), the dtype attribute returns the data type of the tensor, which in this case is mindspore.int32. This specifies the type of elements in the tensor.
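
For reference, a self-contained version of the snippet; mindspore.int32 is used here directly, which is the same object as dtype.int32 from mindspore.common.dtype:

import numpy as np
import mindspore as ms
from mindspore import Tensor

x = Tensor(np.array([[1, 2], [3, 4]]), ms.int32)
print(x.dtype)   # Int32, i.e. mindspore.int32
print(x.shape)   # (2, 2) -- the shape, which is what option D describes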

AI inference chips need to be optimized and are thus more complex than those used for training.

A. TRUE
B. FALSE

Suggested answer: B

Explanation:

AI inference chips are generally simpler than training chips because inference involves running a trained model on new data, which requires fewer computations compared to the training phase. Training chips need to perform more complex tasks like backpropagation, gradient calculations, and frequent parameter updates. Inference, on the other hand, mostly involves forward pass computations, making inference chips optimized for speed and efficiency but not necessarily more complex than training chips.

Thus, the statement is false because inference chips are optimized for simpler tasks compared to training chips.

HCIA-AI references:

Cutting-edge AI Applications: Describes the difference between AI inference and training chips, focusing on their respective optimizations.

Deep Learning Overview: Explains the distinction between the processes of training and inference, and how hardware is optimized accordingly.

Huawei Cloud EI provides knowledge graph, OCR, machine translation, and the Celia (virtual assistant) development platform.

A. TRUE
B. FALSE

Suggested answer: A

Explanation:

Huawei Cloud EI (Enterprise Intelligence) provides a variety of AI services and platforms, including knowledge graph, OCR (Optical Character Recognition), machine translation, and the Celia virtual assistant development platform. These services enable businesses to integrate AI capabilities such as language processing, image recognition, and virtual assistant development into their systems.

Which of the following are covered by Huawei Cloud EIHealth?

A. Drug R&D
B. Clinical research
C. Diagnosis and treatment
D. Genome analysis

Suggested answer: A, B, C, D

Explanation:

Huawei Cloud EIHealth is a comprehensive platform that offers AI-powered solutions across various healthcare-related fields such as:

Drug R&D: Accelerates drug discovery and development using AI.

Clinical research: Enhances research efficiency through AI data analysis.

Diagnosis and treatment: Provides AI-based diagnostic support and treatment recommendations.

Genome analysis: Uses AI to analyze genetic data for medical research and personalized medicine.

Which of the following is the order of the tensor [[0,1],[2,3]]?

A. 6
B. 3
C. 2
D. 4

Suggested answer: C

Explanation:

The order of a tensor refers to its rank, which is the number of dimensions it has. For the tensor [[0,1],[2,3]], the rank is 2 because it is a 2x2 matrix, meaning it has 2 dimensions.
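
A quick check in MindSpore, using the Tensor.ndim property (the number of dimensions, i.e. the rank or order):

import numpy as np
import mindspore as ms
from mindspore import Tensor

t = Tensor(np.array([[0, 1], [2, 3]]), ms.int32)
print(t.ndim)   # 2 -> the order (rank) of the tensor
print(t.shape)  # (2, 2) -> two axes of size 2 each; shape is not the same as order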

When you use MindSpore to execute the following code, which of the following is the output?

from mindspore import ops
import mindspore

shape = (2, 2)
ones = ops.Ones()
output = ones(shape, dtype=mindspore.float32)
print(output)

A. [[1 1] [1 1]]
B. [[1. 1.] [1. 1.]]
C. 1
D. [[1. 1. 1. 1.]]

Suggested answer: B

Explanation:

In MindSpore, using ops.Ones() with a specified shape and dtype=mindspore.float32 will create a tensor of ones with floating-point values. The output will be a 2x2 matrix filled with 1.0 values. The floating-point format (with a decimal point) ensures that the output is in the form of [[1. 1.], [1. 1.]].
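
A minimal runnable version for reference, passing the element type positionally to the Ones primitive:

import mindspore
from mindspore import ops

ones = ops.Ones()
output = ones((2, 2), mindspore.float32)
print(output)        # [[1. 1.]
                     #  [1. 1.]]
print(output.dtype)  # Float32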

In MindSpore, mindspore.nn.Conv2d() is used to create a convolutional layer. Which of the following values can be passed to this API's 'pad_mode' parameter?

A. pad
B. same
C. valid
D. nopadding

Suggested answer: A, B, C

Explanation:

The pad_mode parameter in mindspore.nn.Conv2d() accepts three values:

same: pads the input so that the output keeps the same spatial dimensions as the input (for stride 1).

valid: performs convolution without padding, resulting in an output smaller than the input.

pad: applies the explicit amount of zero padding given by the padding argument.

'nopadding' is not a valid option for the pad_mode parameter. The sketch below shows the effect of each valid mode on the output shape.
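
A short sketch of how each mode affects the output shape, assuming a 1x3x32x32 NCHW input and a 3x3 kernel with stride 1:

import numpy as np
import mindspore as ms
from mindspore import Tensor, nn

x = Tensor(np.ones((1, 3, 32, 32)), ms.float32)  # NCHW input

# "same": output keeps the input's spatial size
print(nn.Conv2d(3, 8, 3, pad_mode="same")(x).shape)            # (1, 8, 32, 32)
# "valid": no padding, output shrinks by kernel_size - 1
print(nn.Conv2d(3, 8, 3, pad_mode="valid")(x).shape)           # (1, 8, 30, 30)
# "pad": explicit zero padding taken from the `padding` argument
print(nn.Conv2d(3, 8, 3, pad_mode="pad", padding=1)(x).shape)  # (1, 8, 32, 32)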

As we understand more about machine learning, we will find that its scope is constantly changing over time.

A. TRUE
B. FALSE

Suggested answer: A

Explanation:

Machine learning is a rapidly evolving field, and its scope indeed changes over time. With advancements in computational power, the introduction of new algorithms, frameworks, and techniques, and the growing availability of data, the capabilities of machine learning have expanded significantly. Initially, machine learning was limited to simpler algorithms like linear regression, decision trees, and k-nearest neighbors. Over time, however, more complex approaches such as deep learning and reinforcement learning have emerged, dramatically increasing the applications and effectiveness of machine learning solutions.

In the Huawei HCIA-AI curriculum, it is emphasized that AI, especially machine learning, has become more powerful due to these continuous developments, allowing it to be applied to broader and more complex problems. The framework and methodologies in machine learning have evolved, making it possible to perform more sophisticated tasks such as real-time decision-making, image recognition, natural language processing, and even autonomous driving.

As technology advances, the scope of machine learning will continue to shift, providing new opportunities for innovation. This is why it is important to stay updated on recent developments to fully leverage machine learning in various AI applications.

Which of the following is the cornerstone of Huawei's full-stack, all-scenario AI solution, providing modules, boards, and servers powered by the Ascend AI processor to meet customer demand for computing power in all scenarios?

A. Atlas
B. CANN
C. MindSpore
D. ModelArts

Suggested answer: A

Explanation:

Atlas is a key part of Huawei's full-stack, all-scenario AI solution. It provides AI hardware resources in the form of modules, boards, edge stations, and servers powered by Huawei's Ascend AI processors. The Atlas series is designed to meet customer demands for AI computing power in a variety of deployment scenarios, including cloud, edge, and device environments.

Huawei's full-stack AI solution aims to deliver comprehensive AI capabilities across different levels. The Atlas series supports a wide range of industries by offering scalable AI computing resources, which are critical for industries dealing with large volumes of data and needing high-performance computing.
