Topic 1 Question 57
A company has an ML model that needs to run one time each night to predict stock values. The model input is 3 MB of data that is collected during the current day. The model produces the predictions for the next day. The prediction process takes less than 1 minute to finish running. How should the company deploy the model on Amazon SageMaker to meet these requirements?
A. Use a multi-model serverless endpoint. Enable caching.
B. Use an asynchronous inference endpoint. Set the InitialInstanceCount parameter to 0.
C. Use a real-time endpoint. Configure an auto scaling policy to scale the model to 0 when the model is not in use.
D. Use a serverless inference endpoint. Set the MaxConcurrency parameter to 1.
Comments (2)
- GiorgioGss (2024/11/27) - Selected answer: D
  "The prediction process takes less than 1 minute to finish running," so why provision anything in the first place? Go serverless.
  👍 2
- Saransundar (2024/12/04) - Selected answer: D
  ServerlessConfig: set MemorySizeInMB to 2048 MB (allowed range: 1024-6144 MB) and MaxConcurrency to 1 (the minimum, which is enough for a single nightly prediction). Efficient and cost-effective for one-time nightly use.
  👍 1
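
For reference, a minimal sketch of what option D looks like in practice, using boto3's create_endpoint_config with a ServerlessConfig block. The model, endpoint, and config names are illustrative, and the model is assumed to have already been registered with CreateModel.

```python
import boto3

sm = boto3.client("sagemaker")

# Endpoint config backed by serverless compute: no instances are provisioned
# and nothing is billed while the endpoint sits idle between nightly runs.
sm.create_endpoint_config(
    EndpointConfigName="stock-predictor-serverless-config",  # hypothetical name
    ProductionVariants=[
        {
            "VariantName": "AllTraffic",
            "ModelName": "stock-predictor",    # hypothetical, assumed already created
            "ServerlessConfig": {
                "MemorySizeInMB": 2048,        # valid values: 1024-6144 MB, in 1 GB steps
                "MaxConcurrency": 1,           # one nightly invocation at a time
            },
        }
    ],
)

sm.create_endpoint(
    EndpointName="stock-predictor-serverless",
    EndpointConfigName="stock-predictor-serverless-config",
)
```

The nightly job would then call invoke_endpoint on the sagemaker-runtime client with the day's 3 MB payload, which fits within the 4 MB payload limit for serverless inference endpoints.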