ExamGecko
Question 168 - Professional Machine Learning Engineer discussion


You developed a Transformer model in TensorFlow to translate text. Your training data includes millions of documents in a Cloud Storage bucket. You plan to use distributed training to reduce training time. You need to configure the training job while minimizing the effort required to modify code and to manage the cluster configuration. What should you do?

A.
Create a Vertex AI custom training job with GPU accelerators for the second worker pool. Use tf.distribute.MultiWorkerMirroredStrategy for distribution.
B.
Create a Vertex AI custom distributed training job with Reduction Server. Use N1 high-memory machine type instances for the first and second worker pools, and use N1 high-CPU machine type instances for the third worker pool.
C.
Create a training job that uses Cloud TPU VMs. Use tf.distribute.TPUStrategy for distribution.
D.
Create a Vertex AI custom training job with a single worker pool of A2 GPU machine type instances. Use tf.distribute.MirroredStrategy for distribution.
Suggested answer: C

Explanation:

According to the official exam guide [1], one of the skills assessed in the exam is to "configure and optimize model training jobs". Cloud TPU VMs [2] are a way to access Cloud TPUs directly on the TPU host machines, offering a simpler and more flexible user experience. Cloud TPU VMs are optimized for ML model training and can reduce training time and cost. You can use Cloud TPU VMs to train Transformer models in TensorFlow with tf.distribute.TPUStrategy [3], which handles the distribution of computations across the TPU cores. The other options are not relevant or optimal for this scenario, because they require more code changes and cluster configuration than the TPU approach.

Reference:

Professional ML Engineer Exam Guide

Cloud TPU VMs

Distributed training with TPUStrategy
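To illustrate option C, here is a minimal sketch of how a TPUStrategy setup might look. The model architecture and the CPU fallback are illustrative assumptions, not part of the question; on a real Cloud TPU VM the resolver would attach to the local TPU, while elsewhere the code falls back to the default strategy so it can still run.

```python
import tensorflow as tf

def build_strategy():
    """Return a TPUStrategy when a TPU is reachable, else the default strategy."""
    try:
        # On a Cloud TPU VM, the TPU is local to the host machine.
        resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu="local")
        tf.config.experimental_connect_to_cluster(resolver)
        tf.tpu.experimental.initialize_tpu_system(resolver)
        return tf.distribute.TPUStrategy(resolver)
    except Exception:
        # No TPU available (e.g. local testing): fall back to the default strategy.
        return tf.distribute.get_strategy()

strategy = build_strategy()

with strategy.scope():
    # Any model built inside the scope is replicated across the TPU cores
    # automatically; the training loop itself needs no distribution-specific code.
    model = tf.keras.Sequential([
        tf.keras.layers.Embedding(10000, 128),   # hypothetical vocab/embedding sizes
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(10),
    ])
    model.compile(
        optimizer="adam",
        loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    )
```

This is the key reason option C minimizes code changes: the existing Keras model and `fit` loop stay as they are, and only the strategy construction and scope are added.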


asked 18/09/2024
darren cain