Question 269 - Professional Machine Learning Engineer discussion


You are using Kubeflow Pipelines to develop an end-to-end PyTorch-based MLOps pipeline. The pipeline reads data from BigQuery, processes the data, performs feature engineering, model training, and model evaluation, and deploys the model as a binary file to Cloud Storage. You are writing code for several different versions of the feature engineering and model training steps, and running each new version in Vertex AI Pipelines.

Each pipeline run is taking over an hour to complete. You want to speed up the pipeline execution to reduce your development time, and you want to avoid additional costs. What should you do?

A. Delegate feature engineering to BigQuery and remove it from the pipeline.

B. Add a GPU to the model training step.

C. Enable caching in all the steps of the Kubeflow pipeline.

D. Comment out the part of the pipeline that you are not currently updating.

Suggested answer: C

Explanation:

Kubeflow Pipelines supports execution caching, which eliminates redundant executions of pipeline steps. When caching is enabled, Vertex AI Pipelines checks whether an execution of each step already exists with the same component specification and inputs; if one does, it reuses the cached outputs instead of re-running the step. Because you are only changing the feature engineering and model training steps, the unchanged steps (such as reading data from BigQuery and data processing) are served from cache. This can significantly speed up pipeline execution and reduce your development time without incurring additional costs.
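For reference, caching can be controlled both per step and at job submission time. Below is a minimal sketch, assuming the KFP v2 SDK and the `google-cloud-aiplatform` client are installed; the component bodies, project, and bucket names are hypothetical placeholders, not the pipeline from the question.

```python
from kfp import dsl, compiler


@dsl.component
def process_data(source_table: str) -> str:
    # Placeholder for the BigQuery read / data processing step.
    return f"processed:{source_table}"


@dsl.component
def train_model(data: str) -> str:
    # Placeholder for the feature engineering / training steps.
    return f"model-trained-on:{data}"


@dsl.pipeline(name="pytorch-mlops-pipeline")
def pipeline(source_table: str = "project.dataset.table"):
    data_task = process_data(source_table=source_table)
    # Explicitly enable caching on a step; its cached outputs are
    # reused when the component spec and inputs are unchanged.
    data_task.set_caching_options(enable_caching=True)

    train_task = train_model(data=data_task.output)
    train_task.set_caching_options(enable_caching=True)


compiler.Compiler().compile(pipeline, "pipeline.yaml")

# Submitting to Vertex AI Pipelines with caching enabled job-wide
# ("my-bucket" is a hypothetical pipeline root):
from google.cloud import aiplatform

job = aiplatform.PipelineJob(
    display_name="pytorch-mlops-pipeline",
    template_path="pipeline.yaml",
    pipeline_root="gs://my-bucket/pipeline-root",
    enable_caching=True,  # reuse cached step outputs across runs
)
# job.submit()  # requires GCP credentials and a configured project
```

On subsequent runs, only the steps whose code or inputs changed are re-executed; unchanged upstream steps hit the cache, which is why option C shortens iteration time at no extra cost.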

asked 18/09/2024
Amidou Florian TOURE