Question 209 - Professional Machine Learning Engineer discussion

You work at a mobile gaming startup that creates online multiplayer games. Recently, your company observed an increase in players cheating in the games, leading to a loss of revenue and a poor user experience. You built a binary classification model to determine whether a player cheated after a completed game session, and then send a message to other downstream systems to ban the player that cheated. Your model has performed well during testing, and you now need to deploy the model to production. You want your serving solution to provide immediate classifications after a completed game session to avoid further loss of revenue. What should you do?

A.
Import the model into Vertex AI Model Registry. Use the Vertex AI Batch Prediction service to run batch inference jobs.
B.
Save the model files in a Cloud Storage bucket. Create a Cloud Function to read the model files and make online inference requests on the Cloud Function.
C.
Save the model files in a VM. Load the model files each time there is a prediction request and run an inference job on the VM.
D.
Import the model into Vertex AI Model Registry. Create a Vertex AI endpoint that hosts the model and make online inference requests.
Suggested answer: D

Explanation:

Online inference is a process where you send one or a small number of prediction requests to a model and get immediate responses [1]. It is suitable for scenarios that need timely predictions, such as detecting cheating in online games. Online inference requires that the model be deployed to an endpoint, which is a resource that provides a service URL for prediction requests [2].

Vertex AI Model Registry is a central repository where you can manage the lifecycle of your ML models [3]. You can import models from various sources, such as custom models or AutoML models, and assign them versions and aliases [3]. You can also deploy models to endpoints, which are resources that provide a service URL for online prediction [2].
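
As an illustrative sketch (not part of the official answer), importing a model with the Vertex AI SDK for Python could look like the following; the project ID, region, bucket path, display name, and serving container image are placeholder assumptions:

from google.cloud import aiplatform

# Placeholder project and region; adjust to your environment.
aiplatform.init(project="my-project", location="us-central1")

# Upload the trained cheat-detection model to Vertex AI Model Registry.
# artifact_uri points at the exported model files in Cloud Storage, and the
# serving container must match the framework the model was trained with.
model = aiplatform.Model.upload(
    display_name="cheat-detector",
    artifact_uri="gs://my-bucket/models/cheat-detector/",
    serving_container_image_uri="us-docker.pkg.dev/vertex-ai/prediction/sklearn-cpu.1-0:latest",
)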

By importing the model into Vertex AI Model Registry, you can leverage Vertex AI features to monitor and update the model [3]. You can use Vertex AI Experiments to track and compare the metrics of different model versions, such as accuracy, precision, recall, and AUC. You can also use Vertex Explainable AI to generate feature attributions that show how much each input feature contributed to the model's prediction.
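
For illustration only, logging such evaluation metrics to Vertex AI Experiments with the SDK might look like this sketch; the experiment name, run name, and metric values are hypothetical:

from google.cloud import aiplatform

# Hypothetical experiment and run names; the metric values are made up.
aiplatform.init(project="my-project", location="us-central1", experiment="cheat-detector-eval")
aiplatform.start_run("model-v2")
aiplatform.log_metrics({"accuracy": 0.97, "precision": 0.95, "recall": 0.92, "auc": 0.98})
aiplatform.end_run()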

By creating a Vertex AI endpoint that hosts the model, you can use the Vertex AI Prediction service to serve online inference requests [2]. Vertex AI Prediction provides benefits such as scalability, reliability, security, and logging [2]. You can use the Vertex AI API or the Google Cloud console to send online inference requests to the endpoint and get immediate classifications [4].
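
Continuing the sketch above, deploying the registered model to an endpoint and requesting an online classification could look like this; the machine type, autoscaling bounds, and feature vector are placeholder assumptions, and model is the object returned by the earlier upload step:

# Deploy the registered model to a Vertex AI endpoint for online serving.
# Machine type and autoscaling bounds are illustrative choices.
endpoint = model.deploy(
    machine_type="n1-standard-2",
    min_replica_count=1,
    max_replica_count=3,
)

# Send one completed game session's features and get an immediate response.
# The feature vector is a made-up example; its shape must match the model input.
prediction = endpoint.predict(instances=[[0.13, 42, 0.88, 7]])
print(prediction.predictions)  # e.g. [[0.97]] -> predicted probability of cheating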

Therefore, the best option for your scenario is to import the model into Vertex AI Model Registry, create a Vertex AI endpoint that hosts the model, and make online inference requests.

The other options are not suitable for this scenario. Batch prediction (option A) does not provide immediate classifications, and loading the model files on every request (option C) adds latency to each prediction. Serving the model from a Cloud Function (option B) or a VM (option C) also bypasses Vertex AI Prediction, which would require more development and maintenance effort.

References:

1. Online versus batch prediction | Vertex AI | Google Cloud
2. Deploy a model to an endpoint | Vertex AI | Google Cloud
3. Introduction to Vertex AI Model Registry | Google Cloud
4. Get online predictions | Vertex AI | Google Cloud
