Topic 1 Question 222
You developed a custom model by using Vertex AI to predict your application's user churn rate. You are using Vertex AI Model Monitoring for skew detection. The training data stored in BigQuery contains two sets of features: demographic and behavioral. You later discover that two separate models, each trained on one feature set, perform better than the original model. You need to configure a new model monitoring pipeline that splits traffic between the two models. You want to use the same prediction-sampling-rate and monitoring-frequency for each model. You also want to minimize management effort. What should you do?
A. Keep the training dataset as is. Deploy the models to two separate endpoints, and submit two Vertex AI Model Monitoring jobs with appropriately selected feature-thresholds parameters.
B. Keep the training dataset as is. Deploy both models to the same endpoint and submit a Vertex AI Model Monitoring job with a monitoring-config-from-file parameter that accounts for the model IDs and feature selections.
C. Separate the training dataset into two tables based on demographic and behavioral features. Deploy the models to two separate endpoints, and submit two Vertex AI Model Monitoring jobs.
D. Separate the training dataset into two tables based on demographic and behavioral features. Deploy both models to the same endpoint, and submit a Vertex AI Model Monitoring job with a monitoring-config-from-file parameter that accounts for the model IDs and training datasets.
Comments (1)
- Selected answer: B
- A. Separate endpoints: this approach involves more management overhead and potentially complicates monitoring configurations.
- C. Separate datasets: splitting the dataset into two tables is unnecessary for model monitoring and could introduce data management complexities.
- D. Separate datasets, same endpoint: while feasible, this option lacks the granular per-model feature control that monitoring-config-from-file provides without splitting the training data.
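To make option B concrete, the idea is that a single monitoring job on a shared endpoint can apply different skew objectives to each deployed model via a config file. The sketch below is illustrative only: the deployed model IDs, BigQuery URI, feature names, and thresholds are placeholders, and the exact YAML schema should be checked against the current Vertex AI ModelDeploymentMonitoringJob reference.

```shell
# Hypothetical monitoring config: one objective per deployed model,
# each watching only its own feature set (values are placeholders).
cat > monitoring_config.yaml <<'EOF'
modelDeploymentMonitoringObjectiveConfigs:
  - deployedModelId: "DEMOGRAPHIC_MODEL_ID"        # placeholder
    objectiveConfig:
      trainingDataset:
        bigquerySource:
          inputUri: "bq://my-project.churn.training"  # placeholder
        targetField: "churned"
      trainingPredictionSkewDetectionConfig:
        skewThresholds:
          age:     {value: 0.3}   # demographic features only
          country: {value: 0.3}
  - deployedModelId: "BEHAVIORAL_MODEL_ID"         # placeholder
    objectiveConfig:
      trainingDataset:
        bigquerySource:
          inputUri: "bq://my-project.churn.training"
        targetField: "churned"
      trainingPredictionSkewDetectionConfig:
        skewThresholds:
          session_count: {value: 0.3}  # behavioral features only
          days_active:   {value: 0.3}
EOF

# One job covers both models with a shared sampling rate and frequency.
gcloud ai model-monitoring-jobs create \
  --project=my-project \
  --region=us-central1 \
  --endpoint=SHARED_ENDPOINT_ID \
  --display-name=churn-monitoring \
  --prediction-sampling-rate=0.8 \
  --monitoring-frequency=24 \
  --monitoring-config-from-file=monitoring_config.yaml
```

Because both models sit behind one endpoint and one job, the sampling rate and frequency are defined once, which is why B minimizes management effort relative to A and C.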
👍 1 · pikachu007 · 2024/01/12