Topic 1 Question 123
Which technique can a company use to lower bias and toxicity in generative AI applications during the post-processing stage of the ML lifecycle?
A. Human-in-the-loop
B. Data augmentation
C. Feature engineering
D. Adversarial training
Comments (5)
- Selected answer: A
The question specifies reducing bias and toxicity during post-processing of generated content.
A. Human-in-the-loop: This is the correct answer. Human review of generated outputs allows biased or toxic content to be filtered or modified after generation (a minimal sketch follows this list).
B. Data augmentation: This occurs during training, modifying the training data itself, not the generated outputs.
C. Feature engineering: Also a training-phase activity, focused on input features, not generated content.
D. Adversarial training: Used during training to improve robustness, not to filter post-generation content.
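Here is a minimal sketch of how such a post-processing gate could work, assuming a hypothetical `toxicity_score` classifier and `ReviewQueue`; none of these names refer to a real library API:

```python
from dataclasses import dataclass, field
from typing import List, Optional

TOXICITY_THRESHOLD = 0.5  # hypothetical cutoff, tuned per application

@dataclass
class ReviewQueue:
    """Holds generated outputs awaiting human review."""
    pending: List[str] = field(default_factory=list)

def toxicity_score(text: str) -> float:
    """Placeholder scorer; in practice this would call a moderation model."""
    flagged = {"hate", "slur"}  # illustrative word list only
    words = text.lower().split()
    return sum(w in flagged for w in words) / max(len(words), 1)

def postprocess(output: str, queue: ReviewQueue) -> Optional[str]:
    """Automatic gate: low-risk outputs pass through; risky ones are
    routed to a human reviewer instead of the end user."""
    if toxicity_score(output) >= TOXICITY_THRESHOLD:
        queue.pending.append(output)  # the human-in-the-loop step
        return None  # withhold until a reviewer approves or edits it
    return output
```

The point is that the model itself is untouched; only its outputs are inspected and, where needed, held back for human judgment. That is what distinguishes this from the training-time techniques in options B, C, and D.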
👍 2 · Moon · 2024/12/31
- Selected answer: A
Human-in-the-loop (HITL) involves incorporating human reviewers into the model’s post-processing workflow to evaluate and refine outputs generated by the AI.
This approach helps identify and reduce bias or toxic content by leveraging human judgment to assess and correct inappropriate or inaccurate results.
HITL is particularly useful in generative AI applications where outputs can be subjective and require nuanced review to align with ethical and business standards.
👍 1 · ap6491 · 2024/12/27
- Selected answer: A
A. Human-in-the-loop
Explanation:
Human-in-the-loop (HITL) is a technique used in the post-processing stage of the machine learning lifecycle to improve model performance, including reducing bias and toxicity. In HITL, human evaluators intervene to assess and refine model outputs. This feedback loop helps to identify and correct biases, toxic language, or other undesirable outputs before they are presented to end-users. It ensures that the AI system adheres to ethical guidelines and improves the quality of generated content.
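As a hedged illustration of that feedback loop (all names here are hypothetical, not part of any real API), a reviewer's decision can be recorded so that corrections accumulate into data usable for later rule updates or fine-tuning:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class ReviewDecision:
    original: str
    approved: bool
    revised: Optional[str] = None  # reviewer's corrected text, if any

def apply_review(output: str, approved: bool, revised: Optional[str],
                 log: List[ReviewDecision]) -> Optional[str]:
    """Record the human judgment and return the text that is safe to show.
    The accumulated log can later be exported as preference data."""
    log.append(ReviewDecision(output, approved, revised))
    if approved:
        return revised or output
    return None  # blocked: nothing is shown to the end user
```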
👍 1 · aws_Tamilan · 2024/12/27