Question 279 - Professional Machine Learning Engineer discussion
You work as an ML researcher at an investment bank and are experimenting with the Gemini large language model (LLM). You plan to deploy the model for an internal use case and need full control of the model's underlying infrastructure while minimizing inference time. Which serving configuration should you use for this task?
A. Deploy the model on a Vertex AI endpoint using one-click deployment in Model Garden.
B. Deploy the model on a Google Kubernetes Engine (GKE) cluster manually by creating a custom YAML manifest.
C. Deploy the model on a Vertex AI endpoint manually by creating a custom inference container.
D. Deploy the model on a Google Kubernetes Engine (GKE) cluster using the deployment options in Model Garden.
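To make the trade-off in these options concrete: deploying on GKE manually (option B) means authoring the Kubernetes objects yourself, which is what gives full control over machine types, accelerators, scaling, and networking. Below is a minimal sketch of such a manifest, assuming a self-hostable serving image; the image path, GPU type, and resource values are illustrative placeholders, not an actual Gemini serving configuration.

```yaml
# Hypothetical GKE manifest for the kind of manual deployment option B
# describes. Image, names, and resource values are assumptions.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: llm-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: llm-server
  template:
    metadata:
      labels:
        app: llm-server
    spec:
      containers:
      - name: server
        image: us-docker.pkg.dev/example-project/serving/llm-server:latest  # hypothetical image
        ports:
        - containerPort: 8080
        resources:
          limits:
            nvidia.com/gpu: "1"  # dedicate one GPU to keep inference latency low
      nodeSelector:
        cloud.google.com/gke-accelerator: nvidia-l4  # pin pods to GPU nodes
---
# Expose the serving pods inside the cluster for the internal use case.
apiVersion: v1
kind: Service
metadata:
  name: llm-server
spec:
  selector:
    app: llm-server
  ports:
  - port: 80
    targetPort: 8080
```

The Model Garden paths (options A and D), by contrast, broadly automate this kind of setup for you, trading some of that manual control for faster deployment.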