Topic 1 Question 139
A company’s large language model (LLM) is experiencing hallucinations.
How can the company decrease hallucinations?
A. Set up Agents for Amazon Bedrock to supervise the model training.
B. Use data pre-processing and remove any data that causes hallucinations.
C. Decrease the temperature inference parameter for the model.
D. Use a foundation model (FM) that is trained to not hallucinate.
Comments (5)
- Selected Answer: C
  Decreasing the temperature reduces the variety of answers, forcing the model to focus on the most probable patterns.
  👍 1 · chris_spencer · 2025/02/04

- Selected Answer: C
  The temperature parameter controls the randomness of the model's output. Lowering the temperature makes the model's responses more deterministic and focused, reducing the likelihood of generating incorrect or nonsensical information (hallucinations).
  👍 1 · Jessiii · 2025/02/11

- C is the correct answer. Here's why: decreasing the temperature parameter makes the model's outputs more deterministic and conservative, reducing the likelihood of hallucinations.
  👍 1 · kopper2019 · 2025/02/12
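To illustrate answer C concretely, below is a minimal sketch of how a low temperature could be set when invoking a model on Amazon Bedrock. The payload shape follows the Anthropic Claude Messages format used with Bedrock's `InvokeModel` API; the model ID, prompt, and `build_request_body` helper are assumptions for illustration, not part of the question.

```python
import json

def build_request_body(prompt: str, temperature: float = 0.2) -> str:
    """Build an InvokeModel request body with a low temperature.

    A temperature closer to 0 makes token sampling more deterministic,
    which reduces the chance of hallucinated output (answer C).
    """
    body = {
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 512,
        "temperature": temperature,  # low value -> less random sampling
        "messages": [{"role": "user", "content": prompt}],
    }
    return json.dumps(body)

# Example usage (the actual call requires AWS credentials and boto3):
# import boto3
# client = boto3.client("bedrock-runtime")
# response = client.invoke_model(
#     modelId="anthropic.claude-3-sonnet-20240229-v1:0",
#     body=build_request_body("Summarize our return policy."),
# )

print(json.loads(build_request_body("test"))["temperature"])
```

Note that temperature only controls sampling randomness; it cannot remove factual errors already encoded in the model, which is why options A, B, and D do not directly address inference-time hallucinations.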