Question 229 - Professional Machine Learning Engineer discussion

You have developed an application that uses a chain of multiple scikit-learn models to predict the optimal price for your company's products. The workflow logic is shown in the diagram. Members of your team use the individual models in other solution workflows. You want to deploy this workflow while ensuring version control for each individual model and the overall workflow. Your application needs to be able to scale down to zero. You want to minimize the compute resource utilization and the manual effort required to manage this solution. What should you do?

A. Expose each individual model as an endpoint in Vertex AI Endpoints. Create a custom container endpoint to orchestrate the workflow.

B. Create a custom container endpoint for the workflow that loads each model's individual files. Track the versions of each individual model in BigQuery.

C. Expose each individual model as an endpoint in Vertex AI Endpoints. Use Cloud Run to orchestrate the workflow.

D. Load each model's individual files into Cloud Run. Use Cloud Run to orchestrate the workflow. Track the versions of each individual model in BigQuery.
Suggested answer: C

Explanation:

Option C is the most efficient and scalable way to deploy a workflow of multiple models while ensuring version control and minimizing compute resource utilization. Exposing each model as its own endpoint in Vertex AI Endpoints gives you straightforward versioning and management of the individual models, and lets other team workflows keep consuming them independently. Using Cloud Run to orchestrate the workflow allows the application to scale down to zero, so no resources are consumed when it is idle. Cloud Run is a managed service for running stateless containers: the orchestration container can invoke each model's endpoint in turn, pass the intermediate results between them, and expose an HTTP interface for the application. A sketch of such an orchestration service follows the references below.

Reference:

Vertex AI Endpoints documentation

Cloud Run documentation

Preparing for Google Cloud Certification: Machine Learning Engineer Professional Certificate
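
For illustration, here is a minimal sketch of the Cloud Run orchestration service described in option C. It assumes a chain of two scikit-learn models already deployed to Vertex AI Endpoints and uses Flask plus the google-cloud-aiplatform SDK; the project, region, endpoint IDs, route name, and payload shapes are placeholders, not values taken from the question.

# Minimal sketch of a Cloud Run orchestration service for option C.
# Assumes two Vertex AI endpoints; project, IDs, and payload shapes are placeholders.
import os

from flask import Flask, jsonify, request
from google.cloud import aiplatform

PROJECT = os.environ.get("GOOGLE_CLOUD_PROJECT", "my-project")           # placeholder
REGION = os.environ.get("VERTEX_REGION", "us-central1")                  # placeholder
DEMAND_ENDPOINT_ID = os.environ.get("DEMAND_ENDPOINT_ID", "1111111111")  # placeholder
PRICE_ENDPOINT_ID = os.environ.get("PRICE_ENDPOINT_ID", "2222222222")    # placeholder

aiplatform.init(project=PROJECT, location=REGION)

# Each model stays independently deployed and versioned behind its own endpoint.
demand_endpoint = aiplatform.Endpoint(DEMAND_ENDPOINT_ID)
price_endpoint = aiplatform.Endpoint(PRICE_ENDPOINT_ID)

app = Flask(__name__)


@app.route("/predict-price", methods=["POST"])
def predict_price():
    features = request.get_json()["instances"]

    # Step 1: call the first model's endpoint with the raw product features.
    demand = demand_endpoint.predict(instances=features).predictions

    # Step 2: append each intermediate prediction to the original features and
    # send the enriched rows to the second model (shape depends on your models).
    enriched = [row + [d] for row, d in zip(features, demand)]
    prices = price_endpoint.predict(instances=enriched).predictions

    return jsonify({"optimal_prices": prices})


if __name__ == "__main__":
    # Cloud Run injects PORT and scales the service to zero when idle.
    app.run(host="0.0.0.0", port=int(os.environ.get("PORT", 8080)))

Because the container only fans requests out to the Vertex AI endpoints, Cloud Run can scale it to zero between requests, while each model keeps its own version history in Vertex AI.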
