Question 246 - MLS-C01 discussion


A machine learning (ML) specialist is using the Amazon SageMaker DeepAR forecasting algorithm to train a model on CPU-based Amazon EC2 On-Demand instances. The model currently takes multiple hours to train. The ML specialist wants to decrease the training time of the model.

Which approaches will meet this requirement? (Select TWO.)

A. Replace On-Demand Instances with Spot Instances.
B. Configure model auto scaling dynamically to adjust the number of instances automatically.
C. Replace CPU-based EC2 instances with GPU-based EC2 instances.
D. Use multiple training instances.
E. Use a pre-trained version of the model. Run incremental training.
Suggested answer: C, D

Explanation:

The best approaches to decrease the training time are C and D, because they increase the compute power available to the training job and allow the work to be parallelized. These approaches have the following benefits (a combined configuration sketch follows below):

C: Replacing CPU-based EC2 instances with GPU-based EC2 instances can speed up training of the DeepAR algorithm, because GPUs can perform the matrix operations and gradient computations involved in training much faster than CPUs [1][2]. The DeepAR algorithm supports GPU-based EC2 instances such as ml.p2 and ml.p3 [3].

D: Using multiple training instances can also reduce the training time of the DeepAR algorithm, because the workload can be distributed across multiple nodes for data-parallel training [4]. The DeepAR algorithm supports distributed training with multiple CPU-based or GPU-based EC2 instances [3].
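
Below is a minimal sketch (Python, SageMaker Python SDK v2) of how approaches C and D could be combined when launching a DeepAR training job: a GPU instance type plus an instance count greater than one. The IAM role ARN, S3 paths, and hyperparameter values are placeholders for illustration, not values taken from the question.

```python
import sagemaker
from sagemaker import image_uris
from sagemaker.estimator import Estimator

session = sagemaker.Session()
region = session.boto_region_name

# Resolve the built-in DeepAR container image for the current region.
image_uri = image_uris.retrieve(framework="forecasting-deepar", region=region)

estimator = Estimator(
    image_uri=image_uri,
    role="arn:aws:iam::123456789012:role/SageMakerExecutionRole",  # placeholder role ARN
    instance_count=2,               # approach D: distribute training across multiple instances
    instance_type="ml.p3.2xlarge",  # approach C: GPU-based instance instead of CPU-based
    output_path="s3://example-bucket/deepar-output/",  # placeholder S3 location
    sagemaker_session=session,
)

# Placeholder DeepAR hyperparameters; tune these for the actual dataset.
estimator.set_hyperparameters(
    time_freq="D",
    context_length=30,
    prediction_length=30,
    epochs=100,
)

# Placeholder S3 locations for the train/test channels (JSON Lines time series).
estimator.fit({
    "train": "s3://example-bucket/deepar/train/",
    "test": "s3://example-bucket/deepar/test/",
})
```

The GPU instance type and the higher instance count are independent levers; either can be applied on its own, and the combination shown here simply reflects the two suggested answers.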

The other options are not effective or relevant, because they have the following drawbacks:

A: Replacing On-Demand Instances with Spot Instances can reduce the cost of training, but not necessarily the time, because Spot Instances are subject to interruption and limited availability [5]. Moreover, the DeepAR algorithm does not support checkpointing, so training cannot resume from the last saved state if a Spot Instance is terminated [3].

B: Configuring model auto scaling to adjust the number of instances dynamically is not applicable, because this feature is only available for inference endpoints, not for training jobs [6].

E: Using a pre-trained version of the model and running incremental training is not possible, because the DeepAR algorithm does not support incremental training or transfer learning [3]. The DeepAR algorithm requires full retraining whenever new data is added or hyperparameters are changed [7].

References:

[1] GPU vs CPU: What Matters Most for Machine Learning? | Louis (What's AI) Bouchard | Towards Data Science

[2] How GPUs Accelerate Machine Learning Training | NVIDIA Developer Blog

[3] DeepAR Forecasting Algorithm - Amazon SageMaker

[4] Distributed Training - Amazon SageMaker

[5] Managed Spot Training - Amazon SageMaker

[6] Automatic Scaling - Amazon SageMaker

[7] How the DeepAR Algorithm Works - Amazon SageMaker
