Topic 1 Question 28
Choose 2. You are training a ResNet model on AI Platform using TPUs to visually categorize types of defects in automobile engines. You capture the training profile using the Cloud TPU profiler plugin and observe that it is highly input-bound. You want to reduce the bottleneck and speed up your model training process. Which modifications should you make to the tf.data dataset?
A. Use the interleave option for reading data.
B. Reduce the value of the repeat parameter.
C. Increase the buffer size for the shuffle option.
D. Set the prefetch option equal to the training batch size.
E. Decrease the batch size argument in your transformation.
User votes
Comments (12)
AD - please weigh in, guys
👍 34 · ralf_cc · 2021/07/10
A. Use the interleave option for reading data. - Yes, this parallelizes data reading across input files.
B. Reduce the value of the repeat parameter. - No, repeat only controls how many times the dataset is iterated.
C. Increase the buffer size for the shuffle option. - No, shuffle only randomizes element order; a larger buffer does not relieve an input bottleneck.
D. Set the prefetch option equal to the training batch size. - Yes, this pre-loads data so input preparation overlaps with training.
E. Decrease the batch size argument in your transformation. - No, this could be even slower due to more I/O operations.
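The two recommended fixes (A and D) can be sketched with tf.data as below. This is a minimal illustration, not the question's actual pipeline: the `Dataset.range` sources stand in for per-file record readers (a real job would map file paths to `TFRecordDataset`), and `tf.data.AUTOTUNE` is used for the parallelism and prefetch buffer, which is the commonly recommended setting rather than hard-coding the batch size.

```python
import tensorflow as tf

# Stand-in for a dataset of input files (assumption: real code would use
# file paths and tf.data.TFRecordDataset instead of Dataset.range).
files = tf.data.Dataset.range(4)

# A: interleave reads from multiple sources in parallel, so slow reads from
# one source do not stall the whole input pipeline.
ds = files.interleave(
    lambda x: tf.data.Dataset.range(x, x + 2),
    cycle_length=2,
    num_parallel_calls=tf.data.AUTOTUNE,
)

ds = ds.batch(2)

# D: prefetch overlaps input preparation with accelerator compute; AUTOTUNE
# lets the runtime tune the buffer size.
ds = ds.prefetch(tf.data.AUTOTUNE)

# Materialize the pipeline to show it produces all the source elements.
elements = [int(v) for batch in ds for v in batch]
```

With parallel interleave the cross-source ordering may vary, but every element from every source is still produced exactly once; prefetch changes only when elements are prepared, not what is produced.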
👍 23 · danielp14021990 · 2021/11/10
I think it should be DE. I found this article https://towardsdatascience.com/overcoming-data-preprocessing-bottlenecks-with-tensorflow-data-service-nvidia-dali-and-other-d6321917f851
👍 3 · gcp2021go · 2021/07/20
Shuffle mode