Topic 1 Question 98
A company wants to implement a large language model (LLM)-based chatbot that gives customer service agents real-time, contextual responses to customers' inquiries. The company will use its internal policies as the knowledge base.
Which solution will meet these requirements MOST cost-effectively?
A. Retrain the LLM on the company policy data.
B. Fine-tune the LLM on the company policy data.
C. Implement Retrieval Augmented Generation (RAG) for in-context responses.
D. Use pre-training and data augmentation on the company policy data.
Comments (4)
Selected answer: C
RAG (Option C) is the most cost-effective choice because it allows the LLM to dynamically retrieve relevant information from a predefined knowledge base (the company policies) at inference time, without extensive fine-tuning or retraining of the model. This reduces the need for costly computational resources while still providing accurate, contextual responses.
👍 1 - aws_Tamilan, 2024/12/27

Selected answer: C
The correct answer is C. RAG allows direct use of the policy documents without expensive model training.
👍 1 - may2021_r, 2024/12/28

Selected answer: C
RAG provides the most cost-effective solution.
👍 1 - 85b5b55, 2025/02/01
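The retrieve-then-generate flow the commenters describe can be sketched in a few lines. This is a minimal illustration, not a production setup: the word-overlap scorer stands in for a real vector store, the `POLICY_DOCS` snippets are invented examples, and the final prompt would be sent to an LLM API (e.g. Amazon Bedrock) rather than printed.

```python
import re

# Hypothetical policy knowledge base (stands in for the company's documents).
POLICY_DOCS = [
    "Refund policy: customers may request a full refund within 30 days of purchase.",
    "Shipping policy: standard shipping takes 5 to 7 business days.",
    "Privacy policy: customer data is never shared with third parties.",
]

def tokens(text: str) -> set[str]:
    """Lowercase word tokens, punctuation stripped."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    """Return the k documents sharing the most words with the query.

    A real RAG system would use embedding similarity against a vector
    index; word overlap is just a self-contained stand-in.
    """
    q = tokens(query)
    scored = sorted(docs, key=lambda d: len(q & tokens(d)), reverse=True)
    return scored[:k]

def build_prompt(query: str) -> str:
    """Inject the retrieved policy text into the prompt (in-context),
    so no model retraining or fine-tuning is needed."""
    context = "\n".join(retrieve(query, POLICY_DOCS))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

prompt = build_prompt("How many days do I have to get a refund?")
print(prompt)  # The refund policy snippet is pulled into the context
```

Because only the retrieval index changes when policies change, the knowledge base can be updated without touching the model, which is the cost advantage over options A, B, and D.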