Question 119 - Professional Machine Learning Engineer discussion
You recently developed a deep learning model using Keras, and now you are experimenting with different training strategies. First, you trained the model using a single GPU, but the training process was too slow. Next, you distributed the training across 4 GPUs using tf.distribute.MirroredStrategy (with no other changes), but you did not observe a decrease in training time. What should you do?
A. Distribute the dataset with tf.distribute.Strategy.experimental_distribute_dataset.
B. Create a custom training loop.
C. Use a TPU with tf.distribute.TPUStrategy.
D. Increase the batch size.
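
Why the 4-GPU run was no faster: tf.distribute.MirroredStrategy splits each global batch across the replicas, so keeping the single-GPU batch size means each of the 4 GPUs processes only a quarter of a batch per step, and the saved compute is eaten by synchronization overhead. Scaling the global batch size with the number of replicas (option D) restores full per-GPU utilization. Options A and B only come into play if you write a custom training loop; model.fit already distributes a tf.data.Dataset automatically. Below is a minimal sketch of the batch-size change, assuming a standard Keras model.fit workflow; the model architecture, synthetic data, and per-replica batch size of 64 are hypothetical placeholders, not part of the question.

```python
import tensorflow as tf

# MirroredStrategy splits each global batch across the replicas, so the
# global batch size should be scaled by the replica count; otherwise each
# of the 4 GPUs does a quarter of the single-GPU work per step and the
# synchronization overhead cancels out any speedup.
strategy = tf.distribute.MirroredStrategy()
print("Replicas in sync:", strategy.num_replicas_in_sync)

PER_REPLICA_BATCH_SIZE = 64  # hypothetical single-GPU batch size
global_batch_size = PER_REPLICA_BATCH_SIZE * strategy.num_replicas_in_sync

# Variables must be created inside the strategy scope so they are
# mirrored across all GPUs.
with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(32,)),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")

# Hypothetical in-memory data, standing in for the real training set.
features = tf.random.normal((8192, 32))
labels = tf.random.normal((8192, 1))
dataset = (
    tf.data.Dataset.from_tensor_slices((features, labels))
    .shuffle(8192)
    .batch(global_batch_size)  # the batch that model.fit splits per replica
    .prefetch(tf.data.AUTOTUNE)
)

model.fit(dataset, epochs=3)
```

One practical caveat: when the global batch size grows, the learning rate usually needs to be scaled up as well (a linear scale with the replica count is a common starting point), or per-epoch convergence can degrade.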