Topic 1 Question 102
You have successfully deployed to production a large and complex TensorFlow model trained on tabular data. You want to predict the lifetime value (LTV) field for each subscription stored in the BigQuery table subscription.subscriptionPurchase in the project my-fortune500-company-project.
You have organized all your training code, from preprocessing data in the BigQuery table to deploying the validated model to a Vertex AI endpoint, into a TensorFlow Extended (TFX) pipeline. You want to prevent prediction drift, i.e., a situation in which a feature's data distribution in production changes significantly over time. What should you do?
A. Implement continuous retraining of the model daily using Vertex AI Pipelines.
B. Add a model monitoring job where 10% of incoming predictions are sampled every 24 hours.
C. Add a model monitoring job where 90% of incoming predictions are sampled every 24 hours.
D. Add a model monitoring job where 10% of incoming predictions are sampled every hour.
User votes
Comments (8)
- Selected answer: B
  I guess 10% sampled every 24 hours should be good enough?
  👍 3 · mymy9418 · 2022/12/21
- Selected answer: B
  B. I got this from the Machine Learning in the Enterprise course on Google Partner Skills Boost; watch the video "Model management using Vertex AI" carefully. I gather this is the default setting for the typical case.
  👍 3 · John_Pongthorn · 2023/02/07
- Selected answer: B
  👍 2 · hiromi · 2022/12/20
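For reference, a minimal sketch of how the consensus answer (B) maps onto Vertex AI Model Monitoring, assuming the google-cloud-aiplatform Python SDK; the region, endpoint ID, feature name, drift threshold, and alert email below are placeholders, not details from the question:

# Minimal sketch of option B (assumptions: google-cloud-aiplatform SDK;
# placeholder region, endpoint ID, feature name, threshold, and email).
from google.cloud import aiplatform
from google.cloud.aiplatform import model_monitoring

aiplatform.init(
    project="my-fortune500-company-project",  # project from the question
    location="us-central1",                   # assumed region
)

# The already-deployed Vertex AI endpoint (placeholder endpoint ID).
endpoint = aiplatform.Endpoint("1234567890")

# Option B: sample 10% of incoming prediction requests...
sampling = model_monitoring.RandomSampleConfig(sample_rate=0.1)

# ...and run the monitoring analysis every 24 hours.
schedule = model_monitoring.ScheduleConfig(monitor_interval=24)

# Alert when an input feature's serving distribution drifts beyond the
# threshold (feature name and threshold are hypothetical).
objective = model_monitoring.ObjectiveConfig(
    drift_detection_config=model_monitoring.DriftDetectionConfig(
        drift_thresholds={"subscription_plan": 0.05}
    )
)

alerting = model_monitoring.EmailAlertConfig(user_emails=["ml-team@example.com"])

job = aiplatform.ModelDeploymentMonitoringJob.create(
    display_name="ltv-prediction-drift-monitor",
    endpoint=endpoint,
    logging_sampling_strategy=sampling,
    schedule_config=schedule,
    alert_config=alerting,
    objective_configs=objective,
)

Sampling 90% of predictions (option C) or running the analysis hourly (option D) would raise monitoring cost without a clear benefit for detecting gradual drift, which is why the commenters converge on B.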