Question 220 - Professional Machine Learning Engineer discussion
You are developing a training pipeline for a new XGBoost classification model based on tabular data. The data is stored in a BigQuery table. You need to complete the following steps:
1. Randomly split the data into training and evaluation datasets in a 65/35 ratio.
2. Conduct feature engineering.
3. Obtain metrics for the evaluation dataset.
4. Compare models trained in different pipeline executions.
How should you execute these steps?
A.
1. Using Vertex AI Pipelines, add a component to divide the data into training and evaluation sets, and add another component for feature engineering. 2. Enable autologging of metrics in the training component. 3. Compare pipeline runs in Vertex AI Experiments.
B.
1. Using Vertex AI Pipelines, add a component to divide the data into training and evaluation sets, and add another component for feature engineering. 2. Enable autologging of metrics in the training component. 3. Compare models using the artifacts' lineage in Vertex ML Metadata.
C.
1. In BigQuery ML, use the CREATE MODEL statement with BOOSTED_TREE_CLASSIFIER as the model type, and use BigQuery to handle the data splits. 2. Use a SQL view to apply feature engineering, and train the model using the data in that view. 3. Compare the evaluation metrics of the models by using a SQL query with the ML.TRAINING_INFO statement.
D.
1. In BigQuery ML, use the CREATE MODEL statement with BOOSTED_TREE_CLASSIFIER as the model type, and use BigQuery to handle the data splits. 2. Use the TRANSFORM clause to specify the feature engineering transformations, and train the model using the data in the table. 3. Compare the evaluation metrics of the models by using a SQL query with the ML.TRAINING_INFO statement.
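As a rough sketch of the BigQuery ML approach described in option D: a CREATE MODEL statement can combine an inline TRANSFORM clause for feature engineering with a random data split, and ML.TRAINING_INFO then surfaces per-iteration training and evaluation loss for comparing runs. The dataset, table, and column names below are hypothetical, and the transforms are illustrative only.

```sql
-- Train a boosted-tree classifier with an inline TRANSFORM clause
-- for feature engineering and a random 65/35 train/eval split.
-- (my_dataset, customers, and all column names are placeholders.)
CREATE OR REPLACE MODEL `my_dataset.churn_model`
  TRANSFORM(
    ML.STANDARD_SCALER(tenure) OVER () AS tenure_scaled,
    ML.QUANTILE_BUCKETIZE(monthly_charges, 4) OVER () AS charges_bucket,
    label
  )
OPTIONS(
  model_type = 'BOOSTED_TREE_CLASSIFIER',
  input_label_cols = ['label'],
  data_split_method = 'RANDOM',
  data_split_eval_fraction = 0.35
) AS
SELECT tenure, monthly_charges, label
FROM `my_dataset.customers`;

-- Inspect per-iteration training and evaluation metrics.
SELECT * FROM ML.TRAINING_INFO(MODEL `my_dataset.churn_model`);
```

Note that ML.TRAINING_INFO reports loss curves per training run; for full classification metrics (precision, recall, ROC AUC) on the held-out split, ML.EVALUATE would typically be used instead.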