Question 289 - MLS-C01 discussion


A tourism company uses a machine learning (ML) model to make recommendations to customers. The company uses an Amazon SageMaker environment and has set the hyperparameter tuning completion criterion to MaxNumberOfTrainingJobs.

An ML specialist wants to change the hyperparameter tuning completion criterion. The ML specialist wants tuning to stop immediately after an internal algorithm determines that the tuning job is unlikely to improve by more than 1% over the objective metric from the best training job.

Which completion criterion will meet this requirement?

A. MaxRuntimeInSeconds

B. TargetObjectiveMetricValue

C. CompleteOnConvergence

D. MaxNumberOfTrainingJobsNotImproving

Suggested answer: C

Explanation:

In Amazon SageMaker, hyperparameter tuning jobs optimize model performance by searching over hyperparameter values. SageMaker hyperparameter tuning supports completion criteria that let you manage tuning resources efficiently. In this scenario, the ML specialist needs a completion criterion that terminates the tuning job as soon as SageMaker detects that further improvements in the objective metric are unlikely to exceed 1%.

The CompleteOnConvergence setting is designed for exactly this requirement. With this criterion, the tuning job stops automatically when SageMaker determines that additional hyperparameter evaluations are unlikely to improve the objective metric beyond a certain threshold. The convergence check relies on an internal optimization algorithm that continuously evaluates the objective metric during tuning and stops the job when performance stabilizes without further improvement.

This is supported by AWS documentation, which explains that CompleteOnConvergence is an efficient way to manage tuning by stopping unnecessary evaluations once the model performance stabilizes within the specified threshold.
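As a hedged sketch of where this setting lives: in the CreateHyperParameterTuningJob API, CompleteOnConvergence is a flag inside the ConvergenceDetected block of TuningJobCompletionCriteria. The metric name, hyperparameter range, job name, and resource limits below are illustrative assumptions, not details from the question.

```python
# Sketch of a HyperParameterTuningJobConfig that stops on convergence.
# All concrete names/values here (metric, ranges, limits) are assumed
# placeholders for illustration.
tuning_job_config = {
    "Strategy": "Bayesian",
    "HyperParameterTuningJobObjective": {
        "Type": "Maximize",
        "MetricName": "validation:accuracy",  # assumed metric name
    },
    "ResourceLimits": {
        # Optional safety cap; convergence detection can stop the job
        # well before this many training jobs have run.
        "MaxNumberOfTrainingJobs": 50,
        "MaxParallelTrainingJobs": 2,
    },
    "ParameterRanges": {
        "ContinuousParameterRanges": [
            # Assumed hyperparameter range for illustration.
            {"Name": "eta", "MinValue": "0.01", "MaxValue": "0.3"}
        ]
    },
    "TuningJobCompletionCriteria": {
        # Stop automatically once SageMaker's internal algorithm decides
        # further tuning is unlikely to yield meaningful improvement.
        "ConvergenceDetected": {"CompleteOnConvergence": "Enabled"}
    },
}

# The config would then be passed to boto3, for example:
# sagemaker_client.create_hyper_parameter_tuning_job(
#     HyperParameterTuningJobName="recommender-tuning",  # assumed name
#     HyperParameterTuningJobConfig=tuning_job_config,
#     TrainingJobDefinition=...,  # training job details omitted
# )
print(tuning_job_config["TuningJobCompletionCriteria"])
```

By contrast, TargetObjectiveMetricValue stops at a fixed metric value and MaxNumberOfTrainingJobsNotImproving stops after a fixed count of non-improving jobs; only ConvergenceDetected delegates the stopping decision to SageMaker's internal algorithm.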

asked 31/10/2024
Rahul Manikpuri