Topic 1 Question 229
You work for a manufacturing company. You need to train a custom image classification model to detect product defects at the end of an assembly line. Although your model is performing well, some images in your holdout set are consistently mislabeled with high confidence. You want to use Vertex AI to understand your model’s results. What should you do?
A. Configure feature-based explanations by using Integrated Gradients. Set the visualization type to PIXELS, and set clip_percent_upperbound to 95.
B. Create an index by using Vertex AI Matching Engine. Query the index with your mislabeled images.
C. Configure feature-based explanations by using XRAI. Set the visualization type to OUTLINES, and set the polarity to positive.
D. Configure example-based explanations. Specify the embedding output layer to be used for the latent space representation.
Comments (7)
My Answer: A
According to this documentation:
https://cloud.google.com/vertex-ai/docs/explainable-ai/visualization-settings
Option A uses Integrated Gradients, a feature-based explanation method. Setting the visualization type to PIXELS produces per-pixel attributions, which show the specific regions of the image that influenced the model's decision. Setting clip_percent_upperbound to 95 filters out noise so the visualization focuses on the areas of strongest attribution, which is useful when investigating images that are mislabeled with high confidence.
Option C suggests using XRAI for feature-based explanations, with the visualization type set to OUTLINES and the polarity set to positive. However, according to the documentation above, XRAI visualizations are recommended to use PIXELS, not OUTLINES.
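For reference, the settings debated in options A and C map onto the `visualization` block of the explanation metadata in the linked visualization-settings doc. A minimal sketch in Python, building the config as a plain dictionary (field names are taken from that doc; this is illustrative, not a full model-upload call):

```python
# Sketch of the visualization settings for feature-based explanations,
# as discussed for options A and C. Values for "integrated-gradients"
# match option A; values for "xrai" reproduce option C as written.

def make_visualization(method: str) -> dict:
    """Return a visualization config dict for the given attribution method."""
    if method == "integrated-gradients":
        # Option A: per-pixel attributions, clipping the top 5% of
        # attribution values to reduce noise in the highlighted regions.
        return {
            "type": "PIXELS",               # per-pixel attribution
            "clip_percent_upperbound": 95,  # ignore the noisiest 5%
        }
    elif method == "xrai":
        # Option C's settings, shown for comparison. Note the docs
        # recommend PIXELS rather than OUTLINES for XRAI.
        return {
            "type": "OUTLINES",
            "polarity": "positive",
        }
    raise ValueError(f"unknown method: {method}")

print(make_visualization("integrated-gradients")["type"])  # prints PIXELS
```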
👍 4

guilhermebutzke 2024/02/15 - Selected answer: D
Improve your data or model: One of the core use cases for example-based explanations is helping you understand why your model made certain mistakes in its predictions, and using those insights to improve your data or model.
https://cloud.google.com/vertex-ai/docs/explainable-ai/overview
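Option D's example-based configuration roughly corresponds to an `examples` block in the explanation spec that points at the model's embedding output layer. A hedged sketch, expressed as the JSON-style dictionary shape used by the Vertex AI API (the layer name "embedding", the GCS path, and the neighbor count are hypothetical placeholders, and field names are my best reading of the API, so verify them against the overview doc above):

```python
# Sketch of an example-based explanation config (option D). The model's
# embedding layer defines the latent space; at prediction time, Vertex AI
# returns the training examples nearest to the input in that space, which
# helps diagnose consistently mislabeled images.
explanation_spec = {
    "metadata": {
        "outputs": {
            # Point the explanation at the embedding output layer; its
            # activations are the latent space representation used to
            # retrieve similar training examples.
            "embedding": {"output_tensor_name": "embedding"},
        },
    },
    "parameters": {
        "examples": {
            # Number of similar training examples to return per prediction.
            "neighbor_count": 10,
            # Hypothetical location of the training examples to index.
            "example_gcs_source": {
                "gcs_source": {"uris": ["gs://my-bucket/training-images/"]},
            },
        },
    },
}
```

Querying the nearest neighbors of a confidently mislabeled image can then reveal whether similar-looking training examples carry the wrong label, pointing at a data problem rather than a model problem.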
👍 3

VipinSingla 2024/03/14 - Selected answer: A
Going with A, not C. For XRAI, PIXELS is the default setting and shows areas of attribution; OUTLINES is not recommended for XRAI. https://cloud.google.com/ai-platform/prediction/docs/ai-explanations/visualizing-explanations
👍 2

shadz10 2024/01/16