Topic 1 Question 185
You have developed a BigQuery ML model that predicts customer churn, and deployed the model to Vertex AI Endpoints. You want to automate the retraining of your model by using minimal additional code when model feature values change. You also want to minimize the number of times that your model is retrained to reduce training costs. What should you do?
A.
- Enable request-response logging on Vertex AI Endpoints
- Schedule a TensorFlow Data Validation job to monitor prediction drift
- Execute model retraining if there is significant distance between the distributions

B.
- Enable request-response logging on Vertex AI Endpoints
- Schedule a TensorFlow Data Validation job to monitor training/serving skew
- Execute model retraining if there is significant distance between the distributions

C.
- Create a Vertex AI Model Monitoring job configured to monitor prediction drift
- Configure alert monitoring to publish a message to a Pub/Sub queue when a monitoring alert is detected
- Use a Cloud Function to monitor the Pub/Sub queue, and trigger retraining in BigQuery

D.
- Create a Vertex AI Model Monitoring job configured to monitor training/serving skew
- Configure alert monitoring to publish a message to a Pub/Sub queue when a monitoring alert is detected
- Use a Cloud Function to monitor the Pub/Sub queue, and trigger retraining in BigQuery
Community vote
Comments (1)
- Selected answer: C
A and B: TensorFlow Data Validation jobs require more setup and maintenance, and they do not integrate as seamlessly with Vertex AI Endpoints for automated retraining. D: Monitoring training/serving skew focuses on differences between the training and serving environments, which does not directly detect changes in feature values over time the way prediction drift monitoring does.
👍 1 · pikachu007 · 2024/01/11
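The Cloud Function step in option C can be sketched as below. This is a minimal illustration, not a definitive implementation: the project, dataset, and table names are placeholders, and the `alertType` field used to filter messages is a hypothetical stand-in for whatever the real Model Monitoring notification payload contains (check the current Vertex AI docs for the actual schema).

```python
import base64
import json

# Retraining statement: CREATE OR REPLACE MODEL re-runs BigQuery ML training
# on the latest feature data. All identifiers here are illustrative.
RETRAIN_SQL = """
CREATE OR REPLACE MODEL `my_project.my_dataset.churn_model`
OPTIONS (model_type = 'logistic_reg', input_label_cols = ['churned']) AS
SELECT * FROM `my_project.my_dataset.customer_features`
"""


def should_retrain(alert: dict) -> bool:
    """Retrain only on drift alerts, so other messages on the same
    Pub/Sub topic do not trigger unnecessary (costly) training runs.
    The 'alertType' key is a hypothetical field name."""
    return "drift" in alert.get("alertType", "").lower()


def retrain_on_alert(event, context):
    """Entry point for a Pub/Sub-triggered Cloud Function.
    `event["data"]` carries the base64-encoded message body."""
    alert = json.loads(base64.b64decode(event["data"]).decode("utf-8"))
    if not should_retrain(alert):
        return  # ignore non-drift notifications
    # Deferred import so the filtering logic is testable without GCP deps.
    from google.cloud import bigquery
    bigquery.Client().query(RETRAIN_SQL).result()
```

Because the drift check lives in a pure helper, the filtering behavior can be exercised locally; only the final `bigquery.Client().query(...)` call needs a GCP environment.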