Question 164 - Professional Machine Learning Engineer discussion

You work for a social media company. You want to create a no-code image classification model for an iOS mobile application to identify fashion accessories. You have a labeled dataset in Cloud Storage. You need to configure a training workflow that minimizes cost and serves predictions with the lowest possible latency. What should you do?

A.
Train the model by using AutoML, and register the model in Vertex AI Model Registry. Configure your mobile application to send batch requests during prediction.
B.
Train the model by using AutoML Edge and export it as a Core ML model. Configure your mobile application to use the .mlmodel file directly.
C.
Train the model by using AutoML Edge and export the model as a TFLite model. Configure your mobile application to use the .tflite file directly.
D.
Train the model by using AutoML, and expose the model as a Vertex AI endpoint. Configure your mobile application to invoke the endpoint during prediction.
Suggested answer: B

Explanation:

AutoML Edge is a service that allows you to train and deploy custom image classification models for mobile devices. It supports exporting models as Core ML files, which are compatible with iOS applications.

Using a Core ML model directly on the device eliminates the need for network requests and reduces prediction latency. It also minimizes the cost of serving predictions, as there is no need to pay for cloud resources or network bandwidth.
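To illustrate what "using the .mlmodel file directly" looks like on the app side, here is a minimal Swift sketch using Apple's Core ML and Vision frameworks. The model class name `FashionAccessoryClassifier` is hypothetical — Xcode generates a class named after whatever .mlmodel file you add to the project.

```swift
import CoreML
import Vision
import UIKit

// Classify an image entirely on-device with the exported Core ML model.
// `FashionAccessoryClassifier` is an illustrative name for the class that
// Xcode auto-generates from the bundled .mlmodel file.
func classifyAccessory(in image: UIImage,
                       completion: @escaping (String?) -> Void) {
    guard let cgImage = image.cgImage,
          let coreMLModel = try? FashionAccessoryClassifier(
              configuration: MLModelConfiguration()).model,
          let visionModel = try? VNCoreMLModel(for: coreMLModel) else {
        completion(nil)
        return
    }

    // Vision runs the model locally: no network round trip, so prediction
    // latency is just on-device inference time, and serving is free.
    let request = VNCoreMLRequest(model: visionModel) { request, _ in
        let top = (request.results as? [VNClassificationObservation])?.first
        completion(top?.identifier)
    }

    let handler = VNImageRequestHandler(cgImage: cgImage)
    try? handler.perform([request])
}
```

Because inference happens inside the app process, this path also works offline, which a Vertex AI endpoint (option D) cannot offer.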

Option A is incorrect because sending batch requests during prediction does not reduce latency, as the requests still need to be processed by the cloud service. It also incurs more cost than using a local model on the device.

Option C is incorrect in this context because Core ML is the native, optimized model format for iOS. TFLite primarily targets Android and other platforms; shipping a .tflite file to an iOS app would require bundling the TensorFlow Lite runtime rather than using Apple's built-in Core ML support.

Option D is incorrect because exposing the model as a Vertex AI endpoint requires network requests and cloud resources, which increase latency and cost. It also does not leverage the benefits of AutoML Edge, which is optimized for mobile devices.

asked 18/09/2024
Courage Marume