Question 189 - Professional Machine Learning Engineer discussion

Your company manages an ecommerce website. You developed an ML model that recommends additional products to users in near real time based on items currently in the user's cart. The workflow will include the following processes:

1. The website will send a Pub/Sub message with the relevant data and then receive a message with the prediction from Pub/Sub.

2. Predictions will be stored in BigQuery.

3. The model will be stored in a Cloud Storage bucket and will be updated frequently.

You want to minimize prediction latency and the effort required to update the model. How should you reconfigure the architecture?

A. Write a Cloud Function that loads the model into memory for prediction. Configure the function to be triggered when messages are sent to Pub/Sub.

B. Create a pipeline in Vertex AI Pipelines that performs preprocessing, prediction, and postprocessing. Configure the pipeline to be triggered by a Cloud Function when messages are sent to Pub/Sub.

C. Expose the model as a Vertex AI endpoint. Write a custom DoFn in a Dataflow job that calls the endpoint for prediction.

D. Use the RunInference API with WatchFilePattern in a Dataflow job that wraps around the model and serves predictions.
Suggested answer: D

Explanation:

The RunInference API [1] is a feature of Apache Beam that lets you run models as part of your pipeline in a way that is optimized for machine learning inference. It supports batching, caching, and automatic model reloading, and works with frameworks such as TensorFlow, PyTorch, scikit-learn, XGBoost, ONNX, and TensorRT [1]. Dataflow [2] is a fully managed service for running Apache Beam pipelines on Google Cloud; it handles provisioning and management of the compute resources as well as optimization and execution of the pipelines.

Option D is therefore the best way to reconfigure the architecture for this use case. A Dataflow job that wraps the model with RunInference and WatchFilePattern minimizes prediction latency, because inference runs in-process inside the streaming pipeline rather than behind a separate service call, and it minimizes the effort required to update the model, because WatchFilePattern watches the Cloud Storage bucket and RunInference automatically reloads the model whenever the model file changes [1]. The other options are less suitable: a Cloud Function (A) would reload the model on every cold start, Vertex AI Pipelines (B) is designed for orchestrating batch workflows rather than near-real-time serving, and calling a Vertex AI endpoint from a DoFn (C) adds a network hop per prediction and requires redeploying the endpoint whenever the model changes.
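For illustration, the following is a minimal sketch of what option D can look like as an Apache Beam streaming pipeline, assuming Apache Beam 2.46+ with TensorFlow support. The bucket path, Pub/Sub subscription and topic, BigQuery table, and the to_tensor/to_row helpers are hypothetical placeholders, not values given in the question.

```python
# Sketch of option D: RunInference + WatchFilePattern in a streaming Dataflow job.
# Assumes Apache Beam 2.46+ with tensorflow installed; all resource names below
# (bucket, subscription, topic, table) are hypothetical placeholders.
import json

import apache_beam as beam
import tensorflow as tf
from apache_beam.ml.inference.base import PredictionResult, RunInference
from apache_beam.ml.inference.tensorflow_inference import TFModelHandlerTensor
from apache_beam.ml.inference.utils import WatchFilePattern
from apache_beam.options.pipeline_options import PipelineOptions


def to_tensor(message: bytes) -> tf.Tensor:
    # Hypothetical preprocessing: decode the Pub/Sub payload into a model input.
    cart = json.loads(message.decode("utf-8"))
    return tf.constant(cart["item_ids"], dtype=tf.int64)


def to_row(result: PredictionResult) -> dict:
    # Hypothetical postprocessing: turn the inference output into a table row.
    return {"recommendations": [int(x) for x in result.inference.numpy().tolist()]}


options = PipelineOptions(streaming=True)  # runs as a streaming Dataflow job

with beam.Pipeline(options=options) as pipeline:
    # Emits ModelMetadata whenever a new model file appears in the bucket, so
    # RunInference hot-swaps the model without redeploying the pipeline.
    model_updates = pipeline | "WatchModel" >> WatchFilePattern(
        file_pattern="gs://your-bucket/models/*.h5")

    predictions = (
        pipeline
        | "ReadRequests" >> beam.io.ReadFromPubSub(
            subscription="projects/your-project/subscriptions/cart-events")
        | "Preprocess" >> beam.Map(to_tensor)
        | "Predict" >> RunInference(
            model_handler=TFModelHandlerTensor(
                model_uri="gs://your-bucket/models/initial.h5"),
            model_metadata_pcoll=model_updates)  # side input drives reloads
        | "Postprocess" >> beam.Map(to_row))

    # Store predictions in BigQuery and return them to the website via Pub/Sub.
    predictions | "ToBigQuery" >> beam.io.WriteToBigQuery(
        "your-project:ecommerce.predictions",  # hypothetical, assumed to exist
        create_disposition=beam.io.BigQueryDisposition.CREATE_NEVER)
    (predictions
        | "Encode" >> beam.Map(lambda row: json.dumps(row).encode("utf-8"))
        | "ToPubSub" >> beam.io.WriteToPubSub(
            topic="projects/your-project/topics/cart-predictions"))
```

With this setup, updating the model in production is just a matter of uploading a new model file matching the watched pattern to the Cloud Storage bucket; WatchFilePattern emits new model metadata and RunInference swaps the model in without redeploying the job.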

References:

[1] RunInference API
[2] Dataflow

