Question 93 - Professional Machine Learning Engineer discussion
You have deployed a model on Vertex AI for real-time inference. During an online prediction request, you get an "Out of Memory" error. What should you do?
A. Use batch prediction mode instead of online mode.
B. Send the request again with a smaller batch of instances.
C. Use base64 to encode your data before using it for prediction.
D. Apply for a quota increase for the number of prediction requests.
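Option B points at a common remedy: when a single online request's payload is too large for the serving container's memory, split the instances into smaller batches and send several requests. A minimal sketch, assuming a hypothetical `predict_fn` callable that wraps the endpoint's online predict call (for example, `Endpoint.predict` in the Vertex AI SDK); the batching logic itself is plain Python:

```python
def chunked(instances, batch_size):
    """Yield successive slices of at most batch_size instances."""
    for i in range(0, len(instances), batch_size):
        yield instances[i:i + batch_size]


def predict_in_batches(predict_fn, instances, batch_size=8):
    """Send instances to predict_fn in small batches and collect results.

    predict_fn is a hypothetical callable taking a list of instances and
    returning a list of predictions; in practice it would wrap the
    deployed endpoint's online prediction request.
    """
    predictions = []
    for batch in chunked(instances, batch_size):
        predictions.extend(predict_fn(batch))
    return predictions
```

With a stub such as `predict_in_batches(lambda b: [x * 2 for x in b], list(range(5)), batch_size=2)`, the helper issues three requests of sizes 2, 2, and 1 instead of one request of 5, which is the idea behind retrying with a smaller batch.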