Topic 1 Question 103
You recently developed a deep learning model using Keras, and now you are experimenting with different training strategies. First, you trained the model using a single GPU, but the training process was too slow. Next, you distributed the training across 4 GPUs using tf.distribute.MirroredStrategy (with no other changes), but you did not observe a decrease in training time. What should you do?
A. Distribute the dataset with tf.distribute.Strategy.experimental_distribute_dataset.
B. Create a custom training loop.
C. Use a TPU with tf.distribute.TPUStrategy.
D. Increase the batch size.
User votes
Comments (12)
- Selected answer: D
Ans D: Check this link https://www.tensorflow.org/guide/gpu_performance_analysis for details on how to optimize performance on a multi-GPU single host.
👍 9 | egdiaa | 2022/12/26
- Selected answer: A
I think it's A
👍 4 | MithunDesai | 2022/12/20
- Selected answer: A
👍 3 | mil_spyro | 2022/12/18
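For context, here is a minimal sketch of what answer D implies in practice. With tf.distribute.MirroredStrategy, the batch size passed to model.fit() is the global batch size and gets split across replicas, so reusing the single-GPU batch size means each of the 4 GPUs only processes a quarter of it and per-step overhead dominates. Scaling the batch size by strategy.num_replicas_in_sync keeps each GPU fully utilized. The model, data, and batch sizes below are illustrative placeholders, not part of the question.

```python
import numpy as np
import tensorflow as tf

# Mirror the model across all visible GPUs (falls back to a single device if none).
strategy = tf.distribute.MirroredStrategy()

# Scale the global batch size by the number of replicas so each GPU still
# receives a full per-GPU batch. The per-replica value of 64 is an assumption.
PER_REPLICA_BATCH_SIZE = 64
GLOBAL_BATCH_SIZE = PER_REPLICA_BATCH_SIZE * strategy.num_replicas_in_sync

# Toy synthetic data purely for illustration.
x_train = np.random.rand(10_000, 20).astype("float32")
y_train = np.random.randint(0, 2, size=(10_000,))

# Variables must be created under the strategy scope so they are mirrored.
with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(20,)),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(2),
    ])
    model.compile(
        optimizer="adam",
        loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    )

# fit() splits GLOBAL_BATCH_SIZE across the replicas each step.
model.fit(x_train, y_train, batch_size=GLOBAL_BATCH_SIZE, epochs=2)
```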