Question 243 - Professional Machine Learning Engineer discussion

You are developing an ML model in a Vertex AI Workbench notebook. You want to track artifacts and compare models during experimentation using different approaches. You need to rapidly and easily transition successful experiments to production as you iterate on your model implementation. What should you do?

A.
1. Initialize the Vertex SDK with the name of your experiment. Log parameters and metrics for each experiment, and attach dataset and model artifacts as inputs and outputs to each execution. 2. After a successful experiment, create a Vertex AI pipeline.
B.
1. Initialize the Vertex SDK with the name of your experiment. Log parameters and metrics for each experiment, save your dataset to a Cloud Storage bucket, and upload the models to Vertex AI Model Registry. 2. After a successful experiment, create a Vertex AI pipeline.
C.
1. Create a Vertex AI pipeline with the parameters you want to track as arguments to your PipelineJob. Use the Metrics, Model, and Dataset artifact types from the Kubeflow Pipelines DSL as the inputs and outputs of the components in your pipeline. 2. Associate the pipeline with your experiment when you submit the job.
D.
1. Create a Vertex AI pipeline. Use the Dataset and Model artifact types from the Kubeflow Pipelines DSL as the inputs and outputs of the components in your pipeline. 2. In your training component, use the Vertex AI SDK to create an experiment run. Configure the log_params and log_metrics functions to track the parameters and metrics of your experiment.
Suggested answer: A

Explanation:

Vertex AI is a unified platform for building and managing machine learning solutions on Google Cloud. It provides services and tools for every stage of the machine learning lifecycle, including data preparation, model training, deployment, monitoring, and experimentation. Vertex AI Workbench is an integrated development environment (IDE) that lets you create and run Jupyter notebooks on Google Cloud, so you can develop your ML model in Python with libraries such as TensorFlow, PyTorch, or scikit-learn.

The Vertex AI SDK for Python lets you track artifacts and compare models during experimentation. You call aiplatform.init with the name of your experiment to initialize the SDK, and aiplatform.start_run and aiplatform.end_run to open and close an experiment run. Within a run, aiplatform.log_params and aiplatform.log_metrics record the parameters and metrics of each experiment, and the SDK's metadata features let you attach dataset and model artifacts as inputs and outputs of each execution. This records the metadata and artifacts of your experiments so you can compare runs in the Vertex AI Experiments UI.

After a successful experiment, you can create a Vertex AI pipeline, which automates and orchestrates your ML workflow. The aiplatform.PipelineJob class creates a pipeline job from a compiled pipeline definition with the components and dependencies of your workflow. Training can run as a pipeline component (for example, via aiplatform.CustomContainerTrainingJob), the trained model can be deployed with aiplatform.Model.deploy, and model monitoring can be configured for the deployed endpoint. Because the pipeline reuses the same SDK constructs you used during experimentation, this approach requires minimal changes to your code and lets you rapidly and easily transition successful experiments to production while reusing and sharing your ML workflows.

Reference: The answer can be verified from the official Google Cloud documentation for Vertex AI, Vertex AI Workbench, the Vertex AI SDK for Python, and Vertex AI Pipelines.
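To make option A's first step concrete, here is a minimal sketch of the experiment-tracking flow with the Vertex AI SDK for Python. The project ID, region, experiment name, run name, and the logged values are all placeholders, and the training step is elided:

    from google.cloud import aiplatform

    # Initialize the SDK with a project, region, and experiment name
    # (all placeholder values).
    aiplatform.init(
        project="my-project",
        location="us-central1",
        experiment="fraud-detection-exp",
    )

    # Open a run for this iteration of the model.
    aiplatform.start_run(run="run-001")

    # Log the hyperparameters used for this run.
    aiplatform.log_params({"learning_rate": 0.01, "batch_size": 64})

    # ... train and evaluate the model here ...

    # Log evaluation metrics so runs can be compared in the Experiments UI.
    aiplatform.log_metrics({"accuracy": 0.94, "auc_roc": 0.97})

    # Close the run.
    aiplatform.end_run()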
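And a minimal sketch of option A's second step: submitting a compiled pipeline as a Vertex AI pipeline job. This assumes the pipeline has already been compiled (for example, with the Kubeflow Pipelines SDK) to a local pipeline.json file; the display name, Cloud Storage bucket, and parameter values are placeholders:

    from google.cloud import aiplatform

    aiplatform.init(project="my-project", location="us-central1")

    # Create a pipeline job from a compiled pipeline definition.
    job = aiplatform.PipelineJob(
        display_name="training-pipeline",
        template_path="pipeline.json",                  # compiled pipeline spec
        pipeline_root="gs://my-bucket/pipeline-root",   # placeholder bucket
        parameter_values={"learning_rate": 0.01},
    )

    # submit() returns immediately; use job.run() instead to block
    # until the pipeline finishes.
    job.submit()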

Vertex AI | Google Cloud

Vertex AI Workbench | Google Cloud

Vertex SDK for Python | Google Cloud

Vertex AI pipelines | Google Cloud

Asked 18/09/2024 by Garvey Butler