Topic 1 Question 275
You have trained an XGBoost model that you plan to deploy on Vertex AI for online prediction. You are now uploading your model to Vertex AI Model Registry, and you need to configure the explanation method so that online prediction requests are returned with minimal latency. You also want to be alerted when the model's feature attributions change meaningfully over time. What should you do?
A.
- Specify sampled Shapley as the explanation method with a path count of 5.
- Deploy the model to Vertex AI Endpoints.
- Create a Model Monitoring job that uses prediction drift as the monitoring objective.

B.
- Specify Integrated Gradients as the explanation method with a path count of 5.
- Deploy the model to Vertex AI Endpoints.
- Create a Model Monitoring job that uses prediction drift as the monitoring objective.

C.
- Specify sampled Shapley as the explanation method with a path count of 50.
- Deploy the model to Vertex AI Endpoints.
- Create a Model Monitoring job that uses training-serving skew as the monitoring objective.

D.
- Specify Integrated Gradients as the explanation method with a path count of 50.
- Deploy the model to Vertex AI Endpoints.
- Create a Model Monitoring job that uses training-serving skew as the monitoring objective.
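The configuration in the first option (sampled Shapley with a low path count) can be sketched with the `google-cloud-aiplatform` Python SDK. The project, artifact path, display name, and container image below are placeholder assumptions, and the cloud calls are left commented out because they require GCP credentials; only the shape of the explanation parameters is shown concretely.

```python
# Sketch: upload an XGBoost model to Vertex AI Model Registry with a
# sampled-Shapley explanation spec. All resource names are placeholders.

# Shape of the ExplanationParameters payload accepted by Vertex AI:
# sampled Shapley with a low path count keeps online-prediction latency down.
explanation_parameters = {
    "sampled_shapley_attribution": {"path_count": 5}
}

# from google.cloud import aiplatform
# aiplatform.init(project="my-project", location="us-central1")  # placeholders
# model = aiplatform.Model.upload(
#     display_name="xgb-model",                      # placeholder
#     artifact_uri="gs://my-bucket/model/",          # placeholder path
#     serving_container_image_uri=(
#         "us-docker.pkg.dev/vertex-ai/prediction/xgboost-cpu.1-7:latest"
#     ),
#     explanation_parameters=aiplatform.explain.ExplanationParameters(
#         explanation_parameters
#     ),
# )
# endpoint = model.deploy(machine_type="n1-standard-2")
```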
Comments (4)
Selected answer: A. Sampled Shapley is a fast and scalable approximation of the Shapley value, a game-theoretic measure of each feature's contribution to the model prediction. It is suitable for online prediction requests because it can return feature attributions with minimal latency. The path count parameter controls the number of samples used to estimate the Shapley values, and a lower value means faster computation. Integrated Gradients is another explanation method, which computes the average gradient along the path from a baseline input to the actual input. It is more accurate than sampled Shapley, but also more computationally intensive.
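To make the comment above concrete, here is a minimal local sketch of the sampled-Shapley idea (an illustration, not the Vertex AI implementation): each "path" is a random feature ordering, features are switched from a baseline value to the instance value one at a time, and the path count is the number of orderings averaged over, so fewer paths means less compute per request.

```python
import random

def sampled_shapley(predict, instance, baseline, path_count=5, seed=0):
    """Monte Carlo approximation of Shapley values.

    For each sampled random feature ordering ("path"), features are
    switched from the baseline value to the instance value one at a
    time; each feature's attribution is the average change in model
    output at the step where that feature is switched in.
    """
    rng = random.Random(seed)
    n = len(instance)
    attributions = [0.0] * n
    for _ in range(path_count):
        order = rng.sample(range(n), n)        # random feature ordering
        current = list(baseline)
        prev_out = predict(current)
        for i in order:
            current[i] = instance[i]           # switch feature i on
            out = predict(current)
            attributions[i] += out - prev_out  # marginal contribution
            prev_out = out
    return [a / path_count for a in attributions]

# Toy linear model: attributions recover each weight's contribution exactly.
weights = [2.0, -1.0, 0.5]
predict = lambda x: sum(w * v for w, v in zip(weights, x))
attr = sampled_shapley(predict, instance=[1.0, 1.0, 1.0],
                       baseline=[0.0, 0.0, 0.0], path_count=50)
```

For a linear model the estimate is exact regardless of ordering, and the attributions sum to the difference between the prediction and the baseline prediction (the efficiency property of Shapley values).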
👍 4 · 36bdc1e · 2024/07/14 · Selected answer: A
Explanation method:
- Sampled Shapley: provides high-fidelity feature attributions while remaining computationally efficient, making it ideal for low-latency online predictions.
- Integrated Gradients: also accurate, but generally more computationally intensive than sampled Shapley, potentially introducing latency.

Path count:
- A lower path count (5) further decreases computation time, optimizing for faster prediction responses.

Monitoring objective:
- Prediction drift: detects changes in feature importance over time, aligning with the goal of tracking feature-attribution shifts.
- Training-serving skew: monitors discrepancies between training and serving data distributions, which isn't directly related to feature attributions.
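As an illustration of how a drift-style monitor can flag attribution shifts, here is a toy local sketch using Jensen-Shannon divergence, the distance measure Vertex AI Model Monitoring uses for numerical features. The data, bin edges, and the 0.3 alert threshold are made-up placeholders for the sketch, not the service's implementation.

```python
import math

def js_divergence(p, q):
    """Jensen-Shannon divergence (in bits) between two discrete distributions."""
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
    def kl(a, b):
        return sum(ai * math.log2(ai / bi) for ai, bi in zip(a, b) if ai > 0)
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def histogram(values, bins, lo, hi):
    """Normalized histogram over fixed, equal-width bins."""
    counts = [0] * bins
    for v in values:
        idx = min(int((v - lo) / (hi - lo) * bins), bins - 1)
        counts[idx] += 1
    return [c / len(values) for c in counts]

# Stand-ins for attribution scores at baseline vs. in production:
baseline = [0.1 * i for i in range(100)]
production = [0.1 * i + 10.0 for i in range(100)]  # shifted: attributions drifted

p = histogram(baseline, bins=10, lo=0.0, hi=20.0)
q = histogram(production, bins=10, lo=0.0, hi=20.0)
drift = js_divergence(p, q)
alert = drift > 0.3  # placeholder alert threshold
```

With fully disjoint distributions, as here, the divergence reaches its maximum of 1 bit and the monitor would alert.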
👍 3 · pikachu007 · 2024/07/13 · Selected answer: A
Not B: Integrated Gradients is supported only for custom-trained TensorFlow models that use a TensorFlow prebuilt container to serve predictions, and for AutoML image models.
👍 3 · shadz10 · 2024/07/15