Topic 1 Question 64
You recently designed and built a custom neural network that uses critical dependencies specific to your organization’s framework. You need to train the model using a managed training service on Google Cloud. However, the ML framework and related dependencies are not supported by AI Platform Training. Also, both your model and your data are too large to fit in memory on a single machine. Your ML framework of choice uses the scheduler, workers, and servers distribution structure. What should you do?
A. Use a built-in model available on AI Platform Training.
B. Build your custom container to run jobs on AI Platform Training.
C. Build your custom containers to run distributed training jobs on AI Platform Training.
D. Reconfigure your code to an ML framework with dependencies that are supported by AI Platform Training.
Comments (6)
- Selected answer: C
Answer C. By running your machine learning (ML) training job in a custom container, you can use ML frameworks, non-ML dependencies, libraries, and binaries that are not otherwise supported on Vertex AI. Because both the model and the data are too large to fit in memory on a single machine, you also need distributed training jobs. https://cloud.google.com/vertex-ai/docs/training/containers-overview
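To make option C concrete: on AI Platform Training, a custom-container distributed job maps the scheduler/workers/servers structure onto the master, worker, and parameter-server replicas of a CUSTOM scale tier, each running its own container image. A rough sketch of the submission command follows; the job name, region, bucket, and image URIs are placeholders, and the exact flag set should be checked against the gcloud reference for your SDK version.

```shell
# Sketch: submit a distributed custom-container training job on AI Platform
# Training. JOB_NAME, REGION, and the gcr.io image URIs are hypothetical
# placeholders, not values from the question.
gcloud ai-platform jobs submit training my_custom_job_001 \
  --region=us-central1 \
  --scale-tier=CUSTOM \
  --master-machine-type=n1-highmem-16 \
  --master-image-uri=gcr.io/my-project/my-trainer:scheduler \
  --worker-machine-type=n1-highmem-16 \
  --worker-count=4 \
  --worker-image-uri=gcr.io/my-project/my-trainer:worker \
  --parameter-server-machine-type=n1-highmem-8 \
  --parameter-server-count=2 \
  --parameter-server-image-uri=gcr.io/my-project/my-trainer:server
```

Each replica receives the cluster layout via the `TF_CONFIG`-style environment that AI Platform Training injects, which the custom framework's entrypoint can read to decide whether it is the scheduler, a worker, or a server.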
👍 5

mil_spyro (2022/12/18):
Will go for 'C' - custom containers can address the environment limitation, and distributed processing will handle the data volume.
👍 1

Vedjha (2022/12/07) - Selected answer: C
I think it's C.
👍 1

JeanEl (2022/12/09)