Topic 1 Question 81
Which option is a benefit of ongoing pre-training when fine-tuning a foundation model (FM)?
A. Helps decrease the model's complexity
B. Improves model performance over time
C. Decreases the training time requirement
D. Optimizes model inference time
Comments (3)
- may2021_r · 2024/12/29 · 👍 2 · Selected answer: B
  The correct answer is B. Ongoing pre-training improves model performance over time by allowing the model to adapt to new data and tasks.
- Amitst · 2024/12/05 · 👍 1 · Selected answer: B
  Ongoing pre-training helps the model continuously learn and improve its performance over time. This is the whole point of fine-tuning a foundation model.
- Jessiii · 2025/02/11 · 👍 1 · Selected answer: B
  Ongoing pre-training when fine-tuning a foundation model (FM) allows the model to keep learning and adapting to new data and evolving contexts. As new data becomes available, the model can be pre-trained on it, improving its ability to handle specific tasks and making it more effective and accurate over time.
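The point the comments make, that repeated rounds of training on newly available data improve the model over time, can be illustrated with a toy sketch. This is plain Python with a one-parameter model, not a real foundation model; every name in it is illustrative:

```python
# Toy sketch of "ongoing pre-training": as new data arrives, keep training,
# and the model's loss on fresh data improves over time.
# Illustrative one-parameter model, NOT an actual foundation model.
import random

random.seed(0)

w = 0.0    # model "weights": a single parameter learning y = 3 * x
lr = 0.1   # learning rate

def mse(batch, w):
    # Mean squared error of the current model on a batch of (x, y) pairs.
    return sum((w * x - y) ** 2 for x, y in batch) / len(batch)

def sgd_step(batch, w, lr):
    # One gradient-descent update; d(MSE)/dw = 2 * mean(x * (w*x - y)).
    grad = 2 * sum(x * (w * x - y) for x, y in batch) / len(batch)
    return w - lr * grad

losses = []
for _round in range(5):
    # New data "becomes available" each round.
    batch = [(x, 3.0 * x) for x in (random.uniform(-1, 1) for _ in range(32))]
    losses.append(mse(batch, w))  # evaluate before training on this round's data
    for _ in range(20):
        w = sgd_step(batch, w, lr)

# Each round starts from a better model, so the measured loss trends downward.
print(losses[0], losses[-1])
```

The measured loss at the start of the last round is far lower than at the start of the first, which is the "improves over time" benefit in miniature: each new batch of data makes the model more accurate, rather than reducing its complexity, training time, or inference time.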