Topic 1 Question 29
A company wants to use a large language model (LLM) to develop a conversational agent. The company needs to prevent the LLM from being manipulated with common prompt engineering techniques to perform undesirable actions or expose sensitive information. Which action will reduce these risks?
Create a prompt template that teaches the LLM to detect attack patterns.
Increase the temperature parameter on invocation requests to the LLM.
Avoid using LLMs that are not listed in Amazon SageMaker.
Decrease the number of input tokens on invocations of the LLM.
ユーザの投票
コメント(4)
- 正解だと思う選択肢: A
A. Create a prompt template that teaches the LLM to detect attack patterns is the best action to reduce the risks associated with prompt manipulation and to enhance the security and integrity of the conversational agent being developed.
👍 2jove2024/11/05 - 正解だと思う選択肢: A
Creating a prompt template that teaches the LLM to identify and resist common prompt engineering attacks, such as prompt injection or adversarial queries, helps prevent manipulation. By explicitly guiding the LLM to ignore requests that deviate from its intended purpose (e.g., "You are a helpful assistant. Do not perform any tasks outside your defined scope."), you can mitigate risks like exposing sensitive information or executing undesirable actions.
👍 1ap64912024/12/27 - 正解だと思う選択肢: A
Ask model to use Prompt template to avoid the various types of prompt injection attacks.
👍 185b5b552025/01/30
シャッフルモード