Topic 1 Question 50
Your team is building a convolutional neural network (CNN)-based architecture from scratch. The preliminary experiments running on your on-premises CPU-only infrastructure were encouraging, but have slow convergence. You have been asked to speed up model training to reduce time-to-market. You want to experiment with virtual machines (VMs) on Google Cloud to leverage more powerful hardware. Your code does not include any manual device placement and has not been wrapped in Estimator model-level abstraction. Which environment should you train your model on?
A. A VM on Compute Engine and 1 TPU with all dependencies installed manually.
B. A VM on Compute Engine and 8 GPUs with all dependencies installed manually.
C. A Deep Learning VM with an n1-standard-2 machine and 1 GPU with all libraries pre-installed.
D. A Deep Learning VM with more powerful CPU e2-highcpu-16 machines with all libraries pre-installed.
Comments (15)
celia20200410 (2021/07/20, 👍 12):
ANS: C. To support a CNN you should use a GPU, and for a preliminary experiment the pre-installed packages/libraries are a good choice.
https://cloud.google.com/deep-learning-vm/docs/cli#creating_an_instance_with_one_or_more_gpus
https://cloud.google.com/deep-learning-vm/docs/introduction#pre-installed_packages

Paul_Dirac (2021/07/31, 👍 9):
Code without manual device placement defaults to the CPU if a TPU is present, or to the lowest-order GPU if multiple GPUs are present, so not A or B. D is still CPU-only, and a CNN needs a GPU. Ans: C.

suresh_vn (2022/08/24, 👍 3):
"Has not been wrapped in Estimator model-level abstraction" — how can you use a TPU then? D, in my opinion, is out: the E2 family is meant for high-CPU workloads, and the team is already CPU-bound.
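A quick way to sanity-check the device-placement point discussed above is to run a small snippet on the Deep Learning VM once the GPU is attached. This is a minimal sketch, assuming TensorFlow 2.x from the pre-installed image; the tensor shapes are arbitrary and only serve to trigger a GPU-placed op.

```python
import tensorflow as tf

# List the accelerators TensorFlow can see on this VM.
print("GPUs visible:", tf.config.list_physical_devices("GPU"))

# Log the device each op is placed on, to confirm the GPU is used.
tf.debugging.set_log_device_placement(True)

# With no manual device placement (no tf.device blocks) and no
# distribution strategy, TensorFlow 2.x runs ops on /GPU:0 when a
# single GPU is attached; additional GPUs would sit idle, and a TPU
# would not be used at all without TPU-specific setup.
a = tf.random.normal((2048, 2048))
b = tf.random.normal((2048, 2048))
c = tf.matmul(a, b)
print("matmul ran on:", c.device)
```

With a single attached GPU (option C), this default placement picks up the accelerator without any code changes, which is why no Estimator wrapping or manual device pinning is needed.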