Topic 1 Question 67
A company has an ML model that generates text descriptions based on images that customers upload to the company's website. The images can be up to 50 MB in total size. An ML engineer decides to store the images in an Amazon S3 bucket. The ML engineer must implement a processing solution that can scale to accommodate changes in demand. Which solution will meet these requirements with the LEAST operational overhead?
A. Create an Amazon SageMaker batch transform job to process all the images in the S3 bucket.
B. Create an Amazon SageMaker Asynchronous Inference endpoint and a scaling policy. Run a script to make an inference request for each image.
C. Create an Amazon Elastic Kubernetes Service (Amazon EKS) cluster that uses Karpenter for auto scaling. Host the model on the EKS cluster. Run a script to make an inference request for each image.
D. Create an AWS Batch job that uses an Amazon Elastic Container Service (Amazon ECS) cluster. Specify a list of images to process for each AWS Batch job.
Comments (2)

- GiorgioGss (2024/11/27) — Selected answer: B
  LEAST effort = B
  👍 1

- Saransundar (2024/12/04) — Selected answer: B
  https://docs.aws.amazon.com/sagemaker/latest/dg/async-inference-autoscale.html To autoscale an asynchronous endpoint: register the model as a scalable target, then define and apply a scaling policy. The other options are more complex to implement.
  👍 1
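The autoscaling steps mentioned in the comment above (register the endpoint variant as a scalable target, then apply a target-tracking policy on the async queue backlog) can be sketched as the parameter payloads you would pass to Application Auto Scaling. This is a minimal sketch: the endpoint and policy names are hypothetical, and in practice each dict would be passed to boto3's `application-autoscaling` client via `register_scalable_target(**scalable_target)` and `put_scaling_policy(**scaling_policy)`.

```python
import json

ENDPOINT_NAME = "image-captioning-async"  # hypothetical endpoint name
VARIANT_NAME = "AllTraffic"               # SageMaker's default variant name

resource_id = f"endpoint/{ENDPOINT_NAME}/variant/{VARIANT_NAME}"

# Step 1: register the endpoint variant as a scalable target.
# Asynchronous endpoints can scale in to zero instances when the backlog is empty.
scalable_target = {
    "ServiceNamespace": "sagemaker",
    "ResourceId": resource_id,
    "ScalableDimension": "sagemaker:variant:DesiredInstanceCount",
    "MinCapacity": 0,
    "MaxCapacity": 4,
}

# Step 2: define a target-tracking policy on queued requests per instance,
# using the ApproximateBacklogSizePerInstance CloudWatch metric that
# SageMaker emits for asynchronous endpoints.
scaling_policy = {
    "PolicyName": "async-backlog-target-tracking",  # hypothetical policy name
    "ServiceNamespace": "sagemaker",
    "ResourceId": resource_id,
    "ScalableDimension": "sagemaker:variant:DesiredInstanceCount",
    "PolicyType": "TargetTrackingScaling",
    "TargetTrackingScalingPolicyConfiguration": {
        "TargetValue": 5.0,  # desired backlog size per instance (tunable)
        "CustomizedMetricSpecification": {
            "MetricName": "ApproximateBacklogSizePerInstance",
            "Namespace": "AWS/SageMaker",
            "Dimensions": [{"Name": "EndpointName", "Value": ENDPOINT_NAME}],
            "Statistic": "Average",
        },
        "ScaleInCooldown": 300,
        "ScaleOutCooldown": 60,
    },
}

print(json.dumps(scaling_policy["TargetTrackingScalingPolicyConfiguration"],
                 indent=2))
```

The ability to set `MinCapacity` to 0 is what distinguishes option B from a real-time endpoint: with no pending requests the endpoint runs no instances, which keeps operational overhead and cost low while still scaling out automatically under load.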