Topic 1 Question 77
An AI practitioner is using a large language model (LLM) to create content for marketing campaigns. The generated content sounds plausible and factual but is incorrect. Which problem is the LLM having?
A. Data leakage
B. Hallucination
C. Overfitting
D. Underfitting
Comments (4)
B. Hallucination is the right answer.
👍 2

L1234567890 · 2024/11/22 · Selected Answer: B
Hallucination
👍 2

L1234567890 · 2024/11/23 · Selected Answer: B
Hallucination is a phenomenon in which an LLM generates text that sounds plausible and factual but is actually incorrect or nonsensical. It occurs because the model produces fluent, confident-sounding text from statistical patterns learned during training rather than from verified facts.
👍 2

AzureDP900 · 2025/01/25