Topic 1 Question 96
You are training an object detection machine learning model on a dataset that consists of three million X-ray images, each roughly 2 GB in size. You are using Vertex AI Training to run a custom training application on a Compute Engine instance with 32 cores, 128 GB of RAM, and 1 NVIDIA P100 GPU. You notice that model training is taking a very long time. You want to decrease training time without sacrificing model performance. What should you do?
A. Increase the instance memory to 512 GB and increase the batch size.
B. Replace the NVIDIA P100 GPU with a v3-32 TPU in the training job.
C. Enable early stopping in your Vertex AI Training job.
D. Use the tf.distribute.Strategy API and run a distributed training job.
User votes
Comments (12)
- Selected Answer: C
I would say C.
The question asks about time, so the "early stopping" option looks fine because it will not hurt the existing accuracy (it may even improve it).
According to the TF docs, tf.distribute.Strategy is used when you want to split training across multiple GPUs, but the question says we have a single GPU.
Open to discuss. :)
👍 4 · smarques · 2023/01/18

- Selected Answer: B
We don't have money problems, and we need something that doesn't impair the performance of the model. So I think it's good to change GPU for TPU
👍 4 · enghabeth · 2023/02/08

- Selected Answer: D
D. Use the tf.distribute.Strategy API and run a distributed training job.
Given that the dataset is very large and the current instance is not making the most of its 32 cores, distributed training is needed to parallelize the training process and reduce training time. Using the tf.distribute.Strategy API, one can distribute training across multiple GPUs or TPUs, allowing for faster model training without sacrificing performance. Increasing instance memory and enabling early stopping may not lead to significant improvements in training time, while replacing the GPU with a TPU may require additional changes to the model and code, which can be time-consuming.
👍 3 · shankalman717 · 2023/02/23
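To illustrate the distributed-training approach argued for in answer D, here is a minimal sketch of synchronous data-parallel training with `tf.distribute.MirroredStrategy` (one common `tf.distribute.Strategy` implementation). The model architecture, batch size, and dataset here are placeholders, not anything from the question's actual training application:

```python
import tensorflow as tf

# MirroredStrategy replicates the model on every visible GPU on the machine
# (falling back to CPU when no GPU is present) and keeps replicas in sync.
strategy = tf.distribute.MirroredStrategy()
print("Replicas in sync:", strategy.num_replicas_in_sync)

# With data parallelism, the effective (global) batch size is the
# per-replica batch size multiplied by the number of replicas.
per_replica_batch = 32
global_batch = per_replica_batch * strategy.num_replicas_in_sync

with strategy.scope():
    # Variables created inside the scope are mirrored across replicas.
    model = tf.keras.Sequential([
        tf.keras.layers.Conv2D(16, 3, activation="relu",
                               input_shape=(64, 64, 1)),
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(4),
    ])
    model.compile(
        optimizer="adam",
        loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    )

# Training then proceeds as usual; Keras handles the per-replica
# gradient computation and all-reduce under the hood, e.g.:
# model.fit(train_dataset.batch(global_batch), epochs=10)
```

For a dataset this large, the same code scales further with `tf.distribute.MultiWorkerMirroredStrategy` across several Vertex AI worker VMs, each with multiple GPUs.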