Topic 1 Question 27
A company has developed an ML model for image classification. The company wants to deploy the model to production so that a web application can use the model. The company needs to implement a solution to host the model and serve predictions without managing any of the underlying infrastructure. Which solution will meet these requirements?
A. Use Amazon SageMaker Serverless Inference to deploy the model.
B. Use Amazon CloudFront to deploy the model.
C. Use Amazon API Gateway to host the model and serve predictions.
D. Use AWS Batch to host the model and serve predictions.
Comments (6)
- Selected Answer: A
Amazon SageMaker Serverless Inference hosts the model and serves predictions without any infrastructure provisioning or configuration to manage.
👍 2 · 85b5b55 · 2025/01/28

- Selected Answer: A
Amazon SageMaker Serverless Inference lets you deploy machine learning models in a fully managed, serverless environment. You do not need to manage the underlying infrastructure (such as EC2 instances) that handles predictions, and capacity scales automatically with traffic. This is ideal for scenarios like this one, where the model is consumed by a web application and infrastructure management should be abstracted away.
👍 2 · Jessiii · 2025/02/11

- Selected Answer: A
With Serverless Inference, there is no infrastructure to manage.
👍 1 · minime · 2024/11/10
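As a concrete illustration of answer A (not part of the original discussion), the sketch below builds the boto3 `create_endpoint_config` request for a serverless endpoint. The endpoint config name and model name are placeholders; the actual AWS calls are shown only in comments, since they require an account, an IAM role, and a registered model. The `ServerlessConfig` shape (`MemorySizeInMB` between 1024 and 6144 in 1 GB steps, `MaxConcurrency` for concurrent invocations) is the real SageMaker API parameter.

```python
import json

# Serverless config: SageMaker provisions capacity on demand per request,
# so there are no instances to size or manage.
serverless_config = {"MemorySizeInMB": 2048, "MaxConcurrency": 5}

endpoint_config_request = {
    "EndpointConfigName": "image-classifier-serverless",  # placeholder name
    "ProductionVariants": [
        {
            "VariantName": "AllTraffic",
            "ModelName": "image-classifier-model",  # placeholder: a model
            "ServerlessConfig": serverless_config,  # already created in SageMaker
        }
    ],
}

# In a real account you would then call:
#   boto3.client("sagemaker").create_endpoint_config(**endpoint_config_request)
#   boto3.client("sagemaker").create_endpoint(
#       EndpointName="image-classifier", 
#       EndpointConfigName="image-classifier-serverless")
# and the web application would invoke it with
#   boto3.client("sagemaker-runtime").invoke_endpoint(...)

print(json.dumps(endpoint_config_request, indent=2))
```

Compared with options B-D: CloudFront is a CDN, API Gateway only fronts an HTTP backend, and AWS Batch runs batch jobs rather than serving real-time predictions.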