Topic 1 Question 204
A retail company uses a machine learning (ML) model for daily sales forecasting. The model has provided inaccurate results for the past 3 weeks. At the end of each day, an AWS Glue job consolidates the input data that is used for the forecasting with the actual daily sales data and the predictions of the model. The AWS Glue job stores the data in Amazon S3.
The company's ML team determines that the inaccuracies are occurring because of a change in the value distributions of the model features. The ML team must implement a solution that will detect when this type of change occurs in the future.
Which solution will meet these requirements with the LEAST amount of operational overhead?
A. Use Amazon SageMaker Model Monitor to create a data quality baseline. Confirm that the emit_metrics option is set to Enabled in the baseline constraints file. Set up an Amazon CloudWatch alarm for the metric.
B. Use Amazon SageMaker Model Monitor to create a model quality baseline. Confirm that the emit_metrics option is set to Enabled in the baseline constraints file. Set up an Amazon CloudWatch alarm for the metric.
C. Use Amazon SageMaker Debugger to create rules to capture feature values. Set up an Amazon CloudWatch alarm for the rules.
D. Use Amazon CloudWatch to monitor Amazon SageMaker endpoints. Analyze logs in Amazon CloudWatch Logs to check for data drift.
Community votes
Comments (7)
- Selected answer: A
A is correct. "If the statistical nature of the data that your model receives while in production drifts away from the nature of the baseline data it was trained on, the model begins to lose accuracy in its predictions. Amazon SageMaker Model Monitor uses rules to detect data drift and alerts you when it happens." https://docs.aws.amazon.com/sagemaker/latest/dg/model-monitor-data-quality.html
👍 9 · hichemck · 2022/11/29
- Selected answer: A
A is correct.
The best solution is to use Amazon SageMaker Model Monitor to create a data quality baseline. The ML team can use the baseline to detect when the input data to the model has drifted significantly from its historical distribution. When data drift occurs, Model Monitor emits a metric that can trigger an Amazon CloudWatch alarm, which the ML team can use to investigate and take corrective action.
Option B is incorrect because a model quality baseline monitors prediction performance against ground-truth labels, not the distribution of the input features.
Option C is incorrect because Amazon SageMaker Debugger is used to debug ML models and identify problems during model training, not to monitor input data quality.
Option D is incorrect because Amazon CloudWatch alone does not provide a built-in feature to detect drift in the input data used by an ML model.
👍 2 · AjoseO · 2023/02/19
- What is the difference between answers A and B?
👍 1 · tsangckl · 2022/11/26
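To make the data-quality-baseline idea concrete: Model Monitor computes per-feature statistics from a baseline dataset and flags production data whose distribution diverges from them. The sketch below illustrates that kind of per-feature distribution check using a Population Stability Index (PSI). This is not Model Monitor's actual API; the function names, the 0.2 threshold, and the sample data are assumptions for illustration only.

```python
# Illustrative per-feature distribution-drift check, similar in spirit to
# what SageMaker Model Monitor automates. Not Model Monitor's real API;
# names, threshold, and data are hypothetical.
import numpy as np

def psi(baseline, current, bins=10):
    """Population Stability Index between two samples of one feature."""
    edges = np.histogram_bin_edges(baseline, bins=bins)
    b_pct = np.histogram(baseline, bins=edges)[0] / len(baseline)
    c_pct = np.histogram(current, bins=edges)[0] / len(current)
    # Floor proportions at a small epsilon to avoid log(0).
    b_pct = np.clip(b_pct, 1e-6, None)
    c_pct = np.clip(c_pct, 1e-6, None)
    return float(np.sum((c_pct - b_pct) * np.log(c_pct / b_pct)))

def detect_drift(baseline_features, current_features, threshold=0.2):
    """Return the names of features whose PSI exceeds the threshold."""
    return [name for name in baseline_features
            if psi(baseline_features[name], current_features[name]) > threshold]

rng = np.random.default_rng(0)
baseline = {"price": rng.normal(100, 10, 5000),
            "units": rng.normal(50, 5, 5000)}
current  = {"price": rng.normal(130, 10, 5000),  # shifted distribution -> drift
            "units": rng.normal(50, 5, 5000)}    # unchanged -> no drift
print(detect_drift(baseline, current))
```

In the managed setup from option A, this comparison runs as a scheduled monitoring job, and the resulting drift metric (rather than a print statement) feeds the CloudWatch alarm.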