Question 81 - Associate Cloud Engineer discussion


You are operating a Google Kubernetes Engine (GKE) cluster for your company where different teams can run non-production workloads. Your Machine Learning (ML) team needs access to Nvidia Tesla P100 GPUs to train their models. You want to minimize effort and cost. What should you do?

A. Ask your ML team to add the "accelerator: gpu" annotation to their pod specification.

B. Recreate all the nodes of the GKE cluster to enable GPUs on all of them.

C. Create your own Kubernetes cluster on top of Compute Engine with nodes that have GPUs. Dedicate this cluster to your ML team.

D. Add a new, GPU-enabled node pool to the GKE cluster. Ask your ML team to add the cloud.google.com/gke-accelerator: nvidia-tesla-p100 nodeSelector to their pod specification.
Suggested answer: D

Explanation:

Adding a dedicated GPU node pool minimizes both effort and cost: the existing cluster keeps serving the other teams unchanged, only the new pool carries the (more expensive) GPU nodes, and the cloud.google.com/gke-accelerator nodeSelector ensures the ML team's pods are scheduled onto those nodes. Recreating every node with GPUs (B) or running a separate self-managed cluster on Compute Engine (C) adds cost and operational overhead, and the "accelerator: gpu" annotation in (A) is not a real GKE mechanism.

Ref: https://cloud.google.com/kubernetes-engine/pricing
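
A GPU node pool can be added to the existing cluster from the gcloud CLI. The sketch below is illustrative only: the pool name, cluster name, zone, machine type and node counts are placeholders, and the zone must be one where P100s are offered. GKE also needs the NVIDIA drivers installed on the GPU nodes, which Google provides as a driver-installer DaemonSet.

# Add a GPU-enabled node pool to the existing cluster (names, zone and sizes are placeholders).
# Autoscaling down to zero nodes keeps cost low when no training jobs are running.
gcloud container node-pools create gpu-pool \
    --cluster=my-gke-cluster \
    --zone=us-central1-c \
    --machine-type=n1-standard-4 \
    --accelerator=type=nvidia-tesla-p100,count=2 \
    --num-nodes=1 \
    --enable-autoscaling --min-nodes=0 --max-nodes=3

# Install the NVIDIA device drivers on the GPU nodes (Google-provided DaemonSet;
# URL current at the time of writing).
kubectl apply -f https://raw.githubusercontent.com/GoogleCloudPlatform/container-engine-accelerators/master/nvidia-driver-installer/cos/daemonset-preloaded.yaml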

Example:

apiVersion: v1
kind: Pod
metadata:
  name: my-gpu-pod
spec:
  containers:
  - name: my-gpu-container
    image: nvidia/cuda:10.0-runtime-ubuntu18.04
    command: ["/bin/bash"]
    resources:
      limits:
        nvidia.com/gpu: 2
  nodeSelector:
    cloud.google.com/gke-accelerator: nvidia-tesla-p100 # or nvidia-tesla-k80, nvidia-tesla-p4, nvidia-tesla-v100, nvidia-tesla-t4
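
Applying this manifest (for example with kubectl apply -f my-gpu-pod.yaml, where the filename is illustrative) schedules the pod only onto nodes of the GPU node pool, because only those nodes carry the cloud.google.com/gke-accelerator node label that the nodeSelector requires.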
