Topic 1 Question 65
A company wants to use a large language model (LLM) on Amazon Bedrock for sentiment analysis. The company needs the LLM to produce more consistent responses to the same input prompt. Which adjustment to an inference parameter should the company make to meet these requirements?
A. Decrease the temperature value.
B. Increase the temperature value.
C. Decrease the length of output tokens.
D. Increase the maximum generation length.
Suggested answer: A. Decrease the temperature value.

Lowering the temperature value reduces the randomness of a large language model's (LLM's) token sampling and makes the output more deterministic and consistent. This is ideal for producing repeatable responses to the same input prompt during sentiment analysis.
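For concreteness, here is a minimal sketch of setting the temperature inference parameter through the Bedrock Converse API with boto3. The region and model ID are assumptions for illustration; substitute any text model enabled in your account.

```python
import boto3

# Bedrock Runtime client; the region is an assumption for this sketch.
client = boto3.client("bedrock-runtime", region_name="us-east-1")

response = client.converse(
    # Assumed model ID; use any text model enabled in your account.
    modelId="anthropic.claude-3-haiku-20240307-v1:0",
    messages=[
        {
            "role": "user",
            "content": [{"text": "Classify the sentiment of this review "
                                 "as POSITIVE, NEGATIVE, or NEUTRAL: "
                                 "'The checkout flow was painless.'"}],
        }
    ],
    inferenceConfig={
        "temperature": 0.0,  # low temperature -> near-deterministic output
        "maxTokens": 50,     # a sentiment label needs only a few tokens
    },
)

print(response["output"]["message"]["content"][0]["text"])
```

With the temperature at or near 0, the model almost always selects the highest-probability token at each step, so repeated calls with the same prompt return the same label.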
Comments (3)

dehkon | 2024/11/07 | Selected answer: A | 👍 3
Lowering the temperature value in an LLM controls the randomness of the model's output. A lower temperature (close to 0) makes the model's predictions more deterministic and consistent, leading to similar outputs for identical prompts. This is particularly beneficial in tasks like sentiment analysis, where consistency and reliability in responses are crucial.
Blair77 | 2024/11/12 | Selected answer: A | 👍 2
A. Decrease the temperature value: The temperature parameter controls the randomness of the model’s output. Lower temperatures make the model more deterministic and lead to more consistent and focused responses, while higher temperatures introduce more randomness and variety. For sentiment analysis, where you want consistent outputs for the same input, decreasing the temperature will help achieve more predictable and reliable results.
Jessiii | 2025/02/11 | 👍 1