Topic 1 Question 954
A company is developing machine learning (ML) models on AWS. The company is developing the ML models as independent microservices. The microservices fetch approximately 1 GB of model data from Amazon S3 at startup and load the data into memory. Users access the ML models through an asynchronous API. Users can send a request or a batch of requests.
The company provides the ML models to hundreds of users. The usage patterns for the models are irregular. Some models are not used for days or weeks. Other models receive batches of thousands of requests at a time.
Which solution will meet these requirements?
A. Direct the requests from the API to a Network Load Balancer (NLB). Deploy the ML models as AWS Lambda functions that the NLB will invoke. Use auto scaling to scale the Lambda functions based on the traffic that the NLB receives.
B. Direct the requests from the API to an Application Load Balancer (ALB). Deploy the ML models as Amazon Elastic Container Service (Amazon ECS) services that the ALB will invoke. Use auto scaling to scale the ECS cluster instances based on the traffic that the ALB receives.
C. Direct the requests from the API into an Amazon Simple Queue Service (Amazon SQS) queue. Deploy the ML models as AWS Lambda functions that SQS events will invoke. Use auto scaling to increase the number of vCPUs for the Lambda functions based on the size of the SQS queue.
D. Direct the requests from the API into an Amazon Simple Queue Service (Amazon SQS) queue. Deploy the ML models as Amazon Elastic Container Service (Amazon ECS) services that read from the queue. Use auto scaling for Amazon ECS to scale both the cluster capacity and number of the services based on the size of the SQS queue.
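The scaling approach in option D — sizing an ECS service from the depth of its SQS queue — is commonly implemented as a "backlog per task" calculation. The sketch below is a minimal, hypothetical illustration of that sizing logic only; the function name, thresholds, and defaults are assumptions for illustration, not AWS APIs (in practice the queue depth would come from the SQS `ApproximateNumberOfMessagesVisible` metric and the result would feed an Application Auto Scaling policy).

```python
import math

def desired_task_count(queue_depth: int,
                       messages_per_task: int = 100,
                       min_tasks: int = 0,
                       max_tasks: int = 50) -> int:
    """Return how many ECS tasks should run for a given SQS backlog.

    queue_depth       -- current visible messages in the SQS queue
    messages_per_task -- backlog one task can work through per scaling period
                         (an assumed tuning value)
    min_tasks         -- floor; 0 lets models unused for days scale to zero
    max_tasks         -- ceiling to cap cost during bursts of thousands
                         of requests
    """
    if queue_depth <= 0:
        return min_tasks
    needed = math.ceil(queue_depth / messages_per_task)
    return max(min_tasks, min(needed, max_tasks))
```

With these assumed defaults, an idle model scales to zero tasks, a backlog of 250 messages yields 3 tasks, and a burst of thousands of requests is capped at the ceiling — matching the irregular usage pattern described in the question.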
Comments (5)
- Selected answer: D
D is the answer. SQS queue: directing API requests to SQS decouples the API from ML processing, handles high traffic efficiently, and ensures reliable request processing without overloading the ML models.
Amazon ECS services: running the ML models on ECS provides effective management of containerized applications, well suited to ML workloads.
Auto scaling: ECS auto scales based on SQS queue size, adjusting both the number of containers and the cluster capacity to match demand, ensuring efficient handling of varying workloads.
👍 5 [Removed] 2024/08/18

- Selected answer: B
Why should I use SQS in option D? Wouldn't ALB be enough?
👍 2 kbgsgsgs 2024/10/03

- Selected answer: C
Since AWS Lambda supports 1 GB of memory and can scale seamlessly, the answer should be C. The question says many of the models are used infrequently, so keeping them running on ECS would result in higher costs. Also, these are asynchronous operations, meaning the response is not time bound, so we can tolerate Lambda startup time and scale up based on the size of the SQS queue.
👍 2 RamanAgarwal 2024/12/15