Question 253 - Professional Machine Learning Engineer discussion
You created an ML pipeline with multiple input parameters. You want to investigate the tradeoffs between different parameter combinations. The parameter options are:
* Input dataset
* Max tree depth of the boosted tree regressor
* Optimizer learning rate
You need to compare the pipeline performance of the different parameter combinations, measured in F1 score, time to train, and model complexity. You want your approach to be reproducible and to track all pipeline runs on the same platform. What should you do?
A.
1. Use BigQuery ML to create a boosted tree regressor and use the hyperparameter tuning capability.
2. Configure the hyperparameter syntax to select different input datasets, max tree depths, and optimizer learning rates. Choose the grid search option.
B.
1. Create a Vertex AI pipeline with a custom model training job as part of the pipeline. Configure the pipeline's parameters to include those you are investigating.
2. In the custom training step, use the Bayesian optimization method with F1 score as the target to maximize.
C.
1. Create a Vertex AI Workbench notebook for each of the different input datasets.
2. In each notebook, run different local training jobs with different combinations of the max tree depth and optimizer learning rate parameters.
3. After each notebook finishes, append the results to a BigQuery table.
D.
1. Create an experiment in Vertex AI Experiments.
2. Create a Vertex AI pipeline with a custom model training job as part of the pipeline. Configure the pipeline's parameters to include those you are investigating.
3. Submit multiple runs to the same experiment using different values for the parameters.
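The workflow in option D can be sketched with the Vertex AI Python SDK. This is a minimal sketch, not a definitive implementation: the project ID, region, dataset URIs, compiled pipeline spec path, and experiment name below are all hypothetical, and the cloud calls are kept inside a function so the grid-expansion logic can be shown on its own.

```python
from itertools import product

# Hypothetical parameter options mirroring the three in the question.
PARAM_GRID = {
    "input_dataset": ["gs://my-bucket/train_v1.csv", "gs://my-bucket/train_v2.csv"],
    "max_tree_depth": [4, 8],
    "learning_rate": [0.01, 0.1],
}

def param_combinations(grid):
    """Expand a dict of option lists into one parameter dict per pipeline run."""
    keys = list(grid)
    return [dict(zip(keys, values)) for values in product(*(grid[k] for k in keys))]

def submit_runs(combinations):
    """Submit one pipeline run per combination, all tracked in the same experiment.

    Sketch only: requires the google-cloud-aiplatform package, GCP credentials,
    and a pipeline spec already compiled to pipeline.json.
    """
    from google.cloud import aiplatform  # imported lazily; needs a GCP environment

    aiplatform.init(
        project="my-project",                 # hypothetical project ID
        location="us-central1",
        experiment="boosted-tree-tradeoffs",  # all runs land in this experiment
    )
    for i, params in enumerate(combinations):
        job = aiplatform.PipelineJob(
            display_name=f"tradeoff-run-{i}",
            template_path="pipeline.json",    # compiled pipeline spec (assumed)
            parameter_values=params,
        )
        # Associating each job with the experiment records its parameters and
        # logged metrics (e.g. F1, training time) for side-by-side comparison.
        job.submit(experiment="boosted-tree-tradeoffs")

combos = param_combinations(PARAM_GRID)
print(len(combos))  # 2 datasets x 2 depths x 2 learning rates = 8 runs
```

Because every run is submitted to the same named experiment, the parameter values and metrics of all combinations are reproducible and comparable on one platform, which is what distinguishes option D from running ad-hoc local jobs as in option C.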