Topic 1 Question 175
You have recently trained a scikit-learn model that you plan to deploy on Vertex AI. This model will support both online and batch prediction. You need to preprocess input data for model inference. You want to package the model for deployment while minimizing additional code. What should you do?
A.
- Upload your model to the Vertex AI Model Registry by using a prebuilt scikit-learn prediction container.
- Deploy your model to Vertex AI Endpoints, and create a Vertex AI batch prediction job that uses the instanceConfig.instanceType setting to transform your input data.

B.
- Wrap your model in a custom prediction routine (CPR), and build a container image from the CPR local model.
- Upload your scikit-learn model container to Vertex AI Model Registry.
- Deploy your model to Vertex AI Endpoints, and create a Vertex AI batch prediction job.

C.
- Create a custom container for your scikit-learn model.
- Define a custom serving function for your model.
- Upload your model and custom container to Vertex AI Model Registry.
- Deploy your model to Vertex AI Endpoints, and create a Vertex AI batch prediction job.

D.
- Create a custom container for your scikit-learn model.
- Upload your model and custom container to Vertex AI Model Registry.
- Deploy your model to Vertex AI Endpoints, and create a Vertex AI batch prediction job that uses the instanceConfig.instanceType setting to transform your input data.
Comments (2)
- Selected Answer: B
I go with B:
"Custom prediction routines (CPR) lets you build custom containers with pre/post processing code easily, without dealing with the details of setting up an HTTP server or building a container from scratch." (https://cloud.google.com/vertex-ai/docs/predictions/custom-prediction-routines). This alone makes B preferable to C and D, provided the model architecture is not complex.
Regarding A, prebuilt containers only serve predictions; they do not support custom preprocessing of input data (https://cloud.google.com/vertex-ai/docs/predictions/pre-built-containers#use_a_prebuilt_container). B thus remains the most likely option.
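To illustrate why CPR minimizes additional code: a CPR predictor is just a class with `load`/`preprocess`/`predict`/`postprocess` hooks, and Vertex AI handles the HTTP server and container build. The sketch below mimics that interface in plain Python (the real base class is `google.cloud.aiplatform.prediction.predictor.Predictor`, and `load()` would unpickle a scikit-learn model from the artifact URI; the bucket path and the toy linear "model" here are hypothetical stand-ins):

```python
class Predictor:
    """Plain-Python sketch of the four hooks a Vertex AI CPR predictor implements."""

    def load(self, artifacts_uri: str) -> None:
        # A real CPR would unpickle model.pkl from artifacts_uri here.
        # Stand-in "model": y = 2*x + 1.
        self._coef, self._intercept = 2.0, 1.0

    def preprocess(self, prediction_input: dict) -> list:
        # Custom preprocessing lives here, e.g. scaling raw inputs.
        return [x / 10.0 for x in prediction_input["instances"]]

    def predict(self, instances: list) -> list:
        return [self._coef * x + self._intercept for x in instances]

    def postprocess(self, prediction_results: list) -> dict:
        return {"predictions": [round(p, 3) for p in prediction_results]}


predictor = Predictor()
predictor.load("gs://my-bucket/model/")  # hypothetical artifact URI
scaled = predictor.preprocess({"instances": [10.0, 20.0]})
print(predictor.postprocess(predictor.predict(scaled)))  # {'predictions': [3.0, 5.0]}
```

Packaging the real class with `LocalModel.build_cpr_model(...)` then yields a container image that serves both online and batch predictions with the preprocessing baked in.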
👍 1 · b1a8fae · 2024/01/10
- Selected Answer: D
Considering the goal of minimizing additional code and complexity, option D - "Create a custom container for your scikit-learn model, upload your model and custom container to Vertex AI Model Registry, deploy your model to Vertex AI Endpoints, and create a Vertex AI batch prediction job that uses the instanceConfig.instanceType setting to transform your input data" - seems a more straightforward and efficient approach. It customizes the container for the scikit-learn model, leverages the Vertex AI Model Registry, and uses the specified instance type for batch prediction without introducing what the commenter sees as the unnecessary complexity of custom prediction routines.
👍 1 · pikachu007 · 2024/01/11
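For context on the `instanceConfig.instanceType` setting debated in option D: it only reshapes how each input record is passed to the model (e.g. as a JSON object vs. an array), not arbitrary preprocessing. A hedged sketch of a batch prediction job request body using it might look like the following (project, bucket, and field names are hypothetical):

```json
{
  "displayName": "sklearn-batch-job",
  "model": "projects/PROJECT/locations/REGION/models/MODEL_ID",
  "inputConfig": {
    "instancesFormat": "jsonl",
    "gcsSource": { "uris": ["gs://my-bucket/input.jsonl"] }
  },
  "instanceConfig": {
    "instanceType": "array",
    "includedFields": ["feature_a", "feature_b"]
  },
  "outputConfig": {
    "predictionsFormat": "jsonl",
    "gcsDestination": { "outputUriPrefix": "gs://my-bucket/output/" }
  }
}
```

Because this only selects and reorders fields rather than transforming their values, it cannot replace genuine preprocessing logic, which is why comment 1 prefers the CPR approach in B.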