Topic 1 Question 273
You work for a rapidly growing social media company. Your team builds TensorFlow recommender models in an on-premises CPU cluster. The data contains billions of historical user events and 100,000 categorical features. You notice that as the data increases, the model training time increases. You plan to move the models to Google Cloud. You want to use the most scalable approach that also minimizes training time. What should you do?
A. Deploy the training jobs by using TPU VMs with TPUv3 Pod slices, and use the TPUEmbedding API
B. Deploy the training jobs in an autoscaling Google Kubernetes Engine cluster with CPUs
C. Deploy a matrix factorization model training job by using BigQuery ML
D. Deploy the training jobs by using Compute Engine instances with A100 GPUs, and use the tf.nn.embedding_lookup API
User votes
Comments (1)
- Selected Answer: A
TPUs (Tensor Processing Units) are hardware accelerators designed by Google specifically for machine learning workloads, and TPUv3 Pod slices scale distributed training across many interconnected chips. The TPUEmbedding API is optimized for handling large numbers of categorical features because it shards the embedding tables across the Pod slice's high-bandwidth memory, which fits this scenario's 100,000 categorical features. This option is likely to offer the fastest training times, since it pairs specialized hardware with an API purpose-built for large-scale embedding lookups (see the sketch below).
👍 2 · daidai75 · 2024/01/08
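For context, here is a minimal sketch of how the TPUEmbedding mid-level API is typically configured under tf.distribute.TPUStrategy. The TPU address "my-tpu", the feature name "event_id", and the table sizes are illustrative assumptions, not details from the question:

import tensorflow as tf

# Connect to a TPU Pod slice (the resolver address "my-tpu" is hypothetical).
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu="my-tpu")
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)
strategy = tf.distribute.TPUStrategy(resolver)

# One illustrative table; a real recommender would declare a TableConfig /
# FeatureConfig pair for each categorical feature family.
table = tf.tpu.experimental.embedding.TableConfig(
    vocabulary_size=1_000_000,  # illustrative vocabulary size
    dim=64,                     # illustrative embedding dimension
    name="events",
)
feature_config = {
    "event_id": tf.tpu.experimental.embedding.FeatureConfig(table=table),
}

with strategy.scope():
    # TPUEmbedding shards the embedding tables across the HBM of every core
    # in the Pod slice, so lookup capacity grows with the slice size.
    embedding = tf.tpu.experimental.embedding.TPUEmbedding(
        feature_config=feature_config,
        optimizer=tf.tpu.experimental.embedding.SGD(learning_rate=0.1),
    )

By contrast, tf.nn.embedding_lookup on GPUs (option D) keeps each table in a single device's memory, which is why the TPUEmbedding approach scales better as the number of categorical features grows.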