Topic 1 Question 159
Your organization manages an online message board. A few months ago, you discovered an increase in toxic language and bullying on the message board. You deployed an automated text classifier that flags certain comments as toxic or harmful. Now some users are reporting that benign comments referencing their religion are being misclassified as abusive. Upon further inspection, you find that your classifier's false positive rate is higher for comments that reference certain underrepresented religious groups. Your team has a limited budget and is already overextended. What should you do?
A. Add synthetic training data where those phrases are used in non-toxic ways.
B. Remove the model and replace it with human moderation.
C. Replace your model with a different text classifier.
D. Raise the threshold for comments to be considered toxic or harmful.
Comments (4)

Selected Answer: D
By raising the threshold for comments to be considered toxic or harmful, you will decrease the number of false positives (see the sketch below).
B is wrong because we are taking a Google MLE exam :) A and C are wrong because both involve a good amount of additional work, either for extending the dataset or for training and experimenting with a new model. Considering your team has a limited budget and too many tasks on its plate (overextended), those two options are not available to you.
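As a minimal sketch (not part of the exam material, with hypothetical scores), assuming the classifier returns a toxicity probability per comment, raising the decision threshold directly reduces false positives:

```python
# Hypothetical toxicity scores in [0, 1]; only comments above the threshold are flagged.
def flag_toxic(scores, threshold):
    return [score >= threshold for score in scores]

# Benign comments mentioning a religion that the model over-scores.
benign_scores = [0.55, 0.62, 0.48]
truly_toxic_scores = [0.91, 0.97]

# At the original threshold, all three benign comments are false positives.
print(flag_toxic(benign_scores, threshold=0.5))       # [True, True, True]
# Raising the threshold removes those false positives while clearly toxic
# comments are still flagged.
print(flag_toxic(benign_scores, threshold=0.7))       # [False, False, False]
print(flag_toxic(truly_toxic_scores, threshold=0.7))  # [True, True]
```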
👍 2 · [Removed] · 2023/07/22

Selected Answer: A
A. Add synthetic training data where those phrases are used in non-toxic ways.
In this situation, where your automated text classifier misclassifies benign comments referencing certain underrepresented religious groups as toxic or harmful, adding synthetic training data in which those phrases are used in non-toxic ways can be a cost-effective way to improve the model's performance (see the sketch below).
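As a rough illustration only (the terms, templates, and data structure are hypothetical, not from the exam), synthetic non-toxic examples could be generated from templates and appended to the training set before retraining:

```python
# Placeholder terms for the affected religious groups.
religion_terms = ["<religion A>", "<religion B>"]
benign_templates = [
    "I celebrated a {religion} holiday with my family.",
    "As a {religion} person, I really enjoyed this thread.",
    "There is a {religion} community center near my house.",
]

# Build synthetic comments that mention the terms in clearly non-toxic contexts.
synthetic_examples = [
    {"text": template.format(religion=term), "label": "non_toxic"}
    for term in religion_terms
    for template in benign_templates
]

# training_data is assumed to be a list of {"text": ..., "label": ...} records.
training_data = []
training_data.extend(synthetic_examples)
print(len(synthetic_examples))  # 6 synthetic non-toxic examples added
```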
👍 1 · PST21 · 2023/07/20

Selected Answer: D
"Your team has a limited budget and is already overextended"
👍 1 · powerby35 · 2023/07/25