Topic 1 Question 15
A company has deployed an XGBoost prediction model in production to predict if a customer is likely to cancel a subscription. The company uses Amazon SageMaker Model Monitor to detect deviations in the F1 score. During a baseline analysis of model quality, the company recorded a threshold for the F1 score. After several months of no change, the model's F1 score decreases significantly. What could be the reason for the reduced F1 score?
A. Concept drift occurred in the underlying customer data that was used for predictions.
B. The model was not sufficiently complex to capture all the patterns in the original baseline data.
C. The original baseline data had a data quality issue of missing values.
D. Incorrect ground truth labels were provided to Model Monitor during the calculation of the baseline.
User votes
Comments (4)
- Selected Answer: A
Concept drift refers to a change in the statistical properties of the underlying data distribution over time. The model then performs poorly on new data, so the F1 score decreases.
👍 3
Saransundar, 2024/12/04 - Selected Answer: A
Concept Drift: Occurs when the statistical properties of the data used for predictions change over time, causing the model to underperform on current data.
Why not the other options?
B. If the model complexity was insufficient, the issue would have been detected during the initial evaluation or baseline analysis, not after months of stable performance.
C. A data quality issue would have impacted the model's performance immediately after deployment, not months later.
D. Incorrect labels during baseline calculation could produce an inaccurate baseline F1 score, but they would not explain a significant drop after months of stable performance.
👍 3
motk123, 2024/12/09 - Selected Answer: A
Option A is the only plausible explanation for a drop that appears "after several months" of stable performance.
👍 2
GiorgioGss, 2024/11/27
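The effect the commenters describe can be simulated in a few lines. This is a minimal illustrative sketch, not the company's pipeline: it trains a simple classifier (logistic regression stands in for XGBoost to keep dependencies light), then flips the feature-label relationship to mimic concept drift and shows the F1 score collapsing while the input distribution stays unchanged. The feature and the drift mechanism are invented for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)

def make_data(n, drifted):
    # One illustrative feature (e.g. "months since last login");
    # label = 1 means the customer cancels the subscription.
    X = rng.normal(size=(n, 1))
    y = (X[:, 0] > 0).astype(int)
    if drifted:
        # Concept drift: same feature distribution, but the
        # feature-to-label relationship has reversed over time.
        y = 1 - y
    return X, y

X_train, y_train = make_data(2000, drifted=False)
model = LogisticRegression().fit(X_train, y_train)

X_fresh, y_fresh = make_data(1000, drifted=False)  # same concept as training
X_drift, y_drift = make_data(1000, drifted=True)   # concept has drifted

f1_before = f1_score(y_fresh, model.predict(X_fresh))
f1_after = f1_score(y_drift, model.predict(X_drift))
print(f"F1 before drift: {f1_before:.2f}")
print(f"F1 after drift:  {f1_after:.2f}")
```

Note that the drifted data would pass a data-quality check (same marginal distribution, no missing values); only a model-quality monitor comparing predictions against fresh ground truth, as Model Monitor does with the F1 baseline, catches this failure mode.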