Topic 1 Question 74
You have deployed a model on Vertex AI for real-time inference. During an online prediction request, you get an “Out of Memory” error. What should you do?
A. Use batch prediction mode instead of online mode.
B. Send the request again with a smaller batch of instances.
C. Use base64 to encode your data before using it for prediction.
D. Apply for a quota increase for the number of prediction requests.
User votes
Comments (8)
- Selected Answer: B
  B is the answer. The error corresponds to HTTP 429 - Out of Memory: https://cloud.google.com/ai-platform/training/docs/troubleshooting
  👍 10 · hiromi · 2022/12/18
- Selected Answer: B
  👍 2 · koakande · 2022/12/29
- Selected Answer: B
  👍 1 · Sivaram06 · 2022/12/11
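The fix the voters agree on (answer B) can be sketched in code: rather than sending all instances in one online prediction request, split them into smaller batches and call the endpoint once per batch. This is a minimal sketch; the `chunked` and `predict_in_batches` helpers are illustrative names, and the commented-out endpoint setup assumes hypothetical `PROJECT`, `REGION`, and `ENDPOINT_ID` values.

```python
# Sketch: avoid "Out of Memory" errors on Vertex AI online prediction
# by splitting a large request into smaller batches of instances.
from typing import Iterator, List


def chunked(instances: List[dict], size: int) -> Iterator[List[dict]]:
    """Yield successive batches of at most `size` instances."""
    for start in range(0, len(instances), size):
        yield instances[start:start + size]


def predict_in_batches(endpoint, instances: List[dict],
                       batch_size: int = 10) -> List:
    """Call endpoint.predict() once per small batch and collect results."""
    predictions = []
    for batch in chunked(instances, batch_size):
        response = endpoint.predict(instances=batch)
        predictions.extend(response.predictions)
    return predictions


# Usage (requires google-cloud-aiplatform and a deployed endpoint;
# PROJECT, REGION, and ENDPOINT_ID are placeholders):
# from google.cloud import aiplatform
# endpoint = aiplatform.Endpoint(
#     f"projects/PROJECT/locations/REGION/endpoints/ENDPOINT_ID")
# preds = predict_in_batches(endpoint, my_instances, batch_size=10)
```

Tuning `batch_size` down trades more round trips for a smaller per-request memory footprint on the serving container, which is usually the right trade when a single large request exceeds the model server's memory.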