Topic 1 Question 37
You are developing models to classify customer support emails. You created models with TensorFlow Estimators using small datasets on your on-premises system, but you now need to train the models using large datasets to ensure high performance. You will port your models to Google Cloud and want to minimize code refactoring and infrastructure overhead for easier migration from on-prem to cloud. What should you do?
A. Use AI Platform for distributed training.
B. Create a cluster on Dataproc for training.
C. Create a Managed Instance Group with autoscaling.
D. Use Kubeflow Pipelines to train on a Google Kubernetes Engine cluster.
Comments (14)
maartenalexander (2021/06/22) 👍 26
A. AI Platform provides lower infrastructure overhead and doesn't require you to refactor your code much (no containerization and such, as in Kubeflow).

Danny2021 (2021/09/08) 👍 4
A. D involves more infra overhead.

q4exam (2021/09/22) 👍 3
I think the answer is either A or B, but I personally think it is likely B, because Dataproc is a common toolbox on GCP used for ML, while AI Platform might require refactoring. However, I don't really know: A or B.
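One reason option A minimizes refactoring: AI Platform Training describes the cluster to each training process through a TF_CONFIG environment variable, and TensorFlow Estimator's train_and_evaluate reads that variable automatically, so on-prem Estimator code can run distributed without structural changes. Below is a stdlib-only sketch of what such a TF_CONFIG looks like and how a process would learn its role from it; the cluster layout (hosts, ports, task index) is hypothetical, for illustration only.

```python
import json
import os

# Hypothetical TF_CONFIG of the kind AI Platform sets for each replica
# in a distributed training job (values are illustrative).
os.environ["TF_CONFIG"] = json.dumps({
    "cluster": {
        "chief": ["chief-0:2222"],
        "worker": ["worker-0:2222", "worker-1:2222"],
        "ps": ["ps-0:2222"],
    },
    "task": {"type": "worker", "index": 1},
})

# Each process parses TF_CONFIG to discover its own role in the cluster;
# tf.estimator.train_and_evaluate does this internally.
tf_config = json.loads(os.environ["TF_CONFIG"])
task = tf_config["task"]
print(task["type"], task["index"])          # this replica's role and index
print(len(tf_config["cluster"]["worker"]))  # number of worker replicas
```

Because the service, not your code, populates TF_CONFIG, the same Estimator script runs unchanged on a laptop (no TF_CONFIG, single process) and on AI Platform (distributed), which is the "minimal refactoring" the question asks for.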