Topic 1 Question 235
You are using Vertex AI and TensorFlow to develop a custom image classification model. You need the model’s decisions and the rationale to be understandable to your company’s stakeholders. You also want to explore the results to identify any issues or potential biases. What should you do?
A. Use TensorFlow to generate and visualize features and statistics. Analyze the results together with the standard model evaluation metrics.
B. Use TensorFlow Profiler to visualize the model execution. Analyze the relationship between incorrect predictions and execution bottlenecks.
C. Use Vertex Explainable AI to generate example-based explanations. Visualize the results of sample inputs from the entire dataset together with the standard model evaluation metrics.
D. Use Vertex Explainable AI to generate feature attributions. Aggregate feature attributions over the entire dataset. Analyze the aggregation result together with the standard model evaluation metrics.
Comments (6)
Selected Answer: D
My Answer: D
This approach leverages Vertex Explainable AI to provide feature attributions, which helps in understanding the rationale behind the model's decisions. By aggregating these feature attributions over the entire dataset, you can gain insights into potential biases or areas of concern. Analyzing these results alongside standard model evaluation metrics allows for a comprehensive understanding of the model's performance and its interpretability.
Option C is better for understanding specific cases, but it does not show overall feature contributions.
👍 7

guilhermebutzke · 2024/08/18 · Selected Answer: D
If you inspect specific instances, and also aggregate feature attributions across your training dataset, you can get deeper insight into how your model works. Consider the following advantages:
Debugging models: Feature attributions can help detect issues in the data that standard model evaluation techniques would usually miss. https://cloud.google.com/vertex-ai/docs/explainable-ai/overview
👍 4

shadz10 · 2024/07/17 · Selected Answer: D
- Feature-level insights: feature attributions pinpoint which image regions contribute most to predictions, offering a granular understanding of the model's reasoning.
- Bias detection: aggregating feature attributions over the entire dataset can reveal systematic biases or patterns of model behavior, helping identify potential fairness issues.
- Complementary to evaluation metrics: combining attributions with standard metrics (e.g., accuracy, precision, recall) provides a comprehensive view of model performance and fairness.
👍 3

pikachu007 · 2024/07/12
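The aggregation step that option D describes can be sketched independently of Vertex AI: once you have collected one attribution vector per input (e.g., from Vertex Explainable AI responses), take the mean absolute attribution per feature to get a dataset-wide importance ranking. This is a minimal illustrative sketch, the function name and sample data are made up for this example, not part of the Vertex AI SDK:

```python
import numpy as np

def aggregate_attributions(attributions: np.ndarray) -> np.ndarray:
    """Aggregate per-example feature attributions into global importances.

    attributions: array of shape (n_examples, n_features), one attribution
    vector per input. Returns the mean absolute attribution per feature;
    larger values indicate features that influence predictions more
    across the whole dataset.
    """
    return np.abs(attributions).mean(axis=0)

# Illustrative attribution data: 4 examples, 3 features.
attrs = np.array([
    [0.5, -0.1, 0.0],
    [0.7,  0.2, -0.1],
    [-0.6, 0.1, 0.0],
    [0.4, -0.2, 0.1],
])

global_importance = aggregate_attributions(attrs)  # → [0.55, 0.15, 0.05]
# Rank features from most to least influential overall.
ranking = np.argsort(global_importance)[::-1]      # → [0, 1, 2]
```

Plotting `global_importance` alongside standard evaluation metrics (accuracy, precision, recall) is what makes the bias review comprehensive: a feature that dominates attributions but should be irrelevant (e.g., image background) is a red flag that per-example inspection alone would miss.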