Question 49 - Professional Machine Learning Engineer discussion

You are training an LSTM-based model on AI Platform to summarize text using the following job submission script:

You want to ensure that training time is minimized without significantly compromising the accuracy of your model. What should you do?

A. Modify the 'epochs' parameter
B. Modify the 'scale-tier' parameter
C. Modify the 'batch size' parameter
D. Modify the 'learning rate' parameter
Suggested answer: B

Explanation:

The training time of a machine learning model depends on several factors, such as the complexity of the model, the size of the data, the hardware resources, and the hyperparameters. To minimize the training time without significantly compromising the accuracy of the model, one should optimize these factors as much as possible.

One of the factors that can have a significant impact on training time is the scale-tier parameter, which specifies the type and number of machines to use for the training job on AI Platform. The scale-tier parameter can be one of the predefined tiers, such as BASIC, STANDARD_1, PREMIUM_1, or BASIC_GPU, or CUSTOM, which allows you to configure the machine type, the number of workers, and the number of parameter servers.
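Since the original job submission script is not shown, the following is only a sketch of what such a submission might look like with an explicit scale tier; the job name, region, package path, and trainer arguments are all placeholders:

```shell
# Hypothetical AI Platform training job submission; all names and paths are illustrative.
gcloud ai-platform jobs submit training lstm_summarizer_1 \
  --region=us-central1 \
  --module-name=trainer.task \
  --package-path=./trainer \
  --scale-tier=BASIC_GPU \
  -- \
  --epochs=10 \
  --batch-size=128
```

Switching `--scale-tier` from BASIC to BASIC_GPU (or a higher tier) changes the hardware the job runs on without touching the model code or hyperparameters.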

To speed up the training of an LSTM-based model on AI Platform, one should modify the scale-tier parameter to use a higher tier or a custom configuration that provides more computational resources, such as more CPUs, GPUs, or TPUs. This can reduce training time by increasing the parallelism and throughput of the model training. However, one should also consider the trade-off between training time and cost, as higher tiers or custom configurations may incur higher charges.
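For the CUSTOM tier mentioned above, the machine configuration is typically supplied in a config file passed to the job submission. A minimal sketch, with illustrative machine types and counts:

```yaml
# Hypothetical config.yaml for a CUSTOM scale tier; machine types and counts are illustrative.
trainingInput:
  scaleTier: CUSTOM
  masterType: n1-highmem-8
  workerType: n1-highmem-8
  workerCount: 4
  parameterServerType: n1-standard-4
  parameterServerCount: 2
```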

The other options are not as effective or may have adverse effects on model accuracy. Modifying the epochs parameter, which specifies the number of times the model sees the entire dataset, may reduce training time but can also affect the model's convergence and performance. Modifying the batch size parameter, which specifies the number of examples per batch, may affect the model's stability and generalization ability, as well as the memory usage and the gradient update frequency. Modifying the learning rate parameter, which specifies the step size of the gradient descent optimization, may affect the model's convergence and performance, as well as the risk of overshooting or getting stuck in local minima.
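To make the batch-size trade-off concrete, a quick back-of-envelope sketch (the dataset size here is a made-up example):

```python
import math

def gradient_updates(num_examples: int, batch_size: int, epochs: int) -> int:
    """Total gradient updates: steps per epoch times number of epochs."""
    steps_per_epoch = math.ceil(num_examples / batch_size)
    return steps_per_epoch * epochs

# Illustrative numbers: doubling the batch size roughly halves the updates
# per epoch, which can shorten wall-clock time but may hurt generalization.
print(gradient_updates(100_000, 64, 10))   # 15630
print(gradient_updates(100_000, 128, 10))  # 7820
```

This is why changing batch size trades training time against model quality, whereas scaling up hardware via scale-tier speeds up each step without altering the optimization itself.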

asked 18/09/2024 by Kefash White