Topic 1 Question 124
A bank has fine-tuned a large language model (LLM) to expedite the loan approval process. During an external audit of the model, the bank discovered that the model was approving loans at a faster pace for a specific demographic than for other demographics.
How should the bank fix this issue MOST cost-effectively?
A. Include more diverse training data. Fine-tune the model again by using the new data.
B. Use Retrieval Augmented Generation (RAG) with the fine-tuned model.
C. Use AWS Trusted Advisor checks to eliminate bias.
D. Pre-train a new LLM with more diverse training data.
User votes
Comments (3)
- Selected Answer: A
A. Include more diverse training data. Fine-tune the model again by using the new data.
Explanation:
The bias in the loan approval model likely arises from training data that does not sufficiently represent all demographics. To address this, the bank should augment the training dataset with more diverse data so the model can learn to make fair and equitable decisions across demographics. After incorporating the more diverse training data, the bank can fine-tune the model again to adjust its behavior and reduce the biases identified during the audit.
👍 1 · aws_Tamilan · 2024/12/27

- Selected Answer: A
The correct answer is A. Fine-tuning with more diverse data is the most cost-effective bias mitigation approach.
👍 1 · may2021_r · 2024/12/28

- Selected Answer: A
The model's bias likely stems from unrepresentative training data. Adding more diverse data and fine-tuning the model is the most cost-effective solution to address bias.
👍 1 · Jessiii · 2025/02/11
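To make the audit finding above concrete: whether a model favors one demographic can be checked by comparing per-group approval rates. The sketch below is a minimal illustration in plain Python; the group names, sample data, and the four-fifths-rule threshold are illustrative assumptions, not part of the question or of any AWS service.

```python
# Hypothetical bias check: compare approval rates across demographic groups.
# Group names and decisions are made-up sample data for illustration.

def approval_rates(decisions):
    """decisions: list of (group, approved) pairs -> approval rate per group."""
    totals, approved = {}, {}
    for group, ok in decisions:
        totals[group] = totals.get(group, 0) + 1
        approved[group] = approved.get(group, 0) + (1 if ok else 0)
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact(rates):
    """Ratio of the lowest to the highest group approval rate.
    Values below ~0.8 (the 'four-fifths rule') suggest possible bias."""
    return min(rates.values()) / max(rates.values())

decisions = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]
rates = approval_rates(decisions)
print(rates)                    # group_a: 0.75, group_b: 0.25
print(disparate_impact(rates))  # ~0.33, well below 0.8 -> flags potential bias
```

A check like this run before and after re-fine-tuning with more diverse data (option A) would show whether the mitigation actually narrowed the gap between groups.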