Topic 1 Question 456
A company needs to run large batch-processing jobs on data that is stored in an Amazon S3 bucket. The jobs perform simulations. The results of the jobs are not time sensitive, and the process can withstand interruptions.
Each job must process 15-20 GB of data that is stored in the S3 bucket. The company will store the output from the jobs in a different Amazon S3 bucket for further analysis.
Which solution will meet these requirements MOST cost-effectively?
A. Create a serverless data pipeline. Use AWS Step Functions for orchestration. Use AWS Lambda functions with provisioned capacity to process the data.
B. Create an AWS Batch compute environment that includes Amazon EC2 Spot Instances. Specify the SPOT_CAPACITY_OPTIMIZED allocation strategy.
C. Create an AWS Batch compute environment that includes Amazon EC2 On-Demand Instances and Spot Instances. Specify the SPOT_CAPACITY_OPTIMIZED allocation strategy for the Spot Instances.
D. Use Amazon Elastic Kubernetes Service (Amazon EKS) to run the processing jobs. Use managed node groups that contain a combination of Amazon EC2 On-Demand Instances and Spot Instances.
Comments (7)
- Selected answer: B (VerRi, 2024/03/24, 👍 7)
  "large batch-processing jobs" -> AWS Batch; "not time sensitive, and the process can withstand interruptions" -> Spot Instances.
- Selected answer: B (CMMC, 2024/03/19, 👍 1)
  AWS Batch with Spot Instances, given the jobs are not time sensitive.
- Selected answer: B (Dgix, 2024/03/20, 👍 1)
  The correct answer is B.
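As a minimal sketch of what option B looks like in practice, the snippet below builds the request body for AWS Batch's CreateComputeEnvironment API: a managed, Spot-only compute environment using the SPOT_CAPACITY_OPTIMIZED allocation strategy. The environment name, subnet ID, security group, and IAM role ARNs are placeholders, not values from the question.

```python
# Sketch of option B: a managed AWS Batch compute environment backed
# entirely by EC2 Spot Instances. All names, network IDs, and role ARNs
# here are hypothetical placeholders.
request = {
    "computeEnvironmentName": "simulation-batch-spot",  # placeholder name
    "type": "MANAGED",  # AWS Batch provisions and scales the instances
    "computeResources": {
        "type": "SPOT",  # Spot-only, since the jobs tolerate interruption
        "allocationStrategy": "SPOT_CAPACITY_OPTIMIZED",  # prefer pools with spare capacity
        "minvCpus": 0,    # scale to zero between batch runs to avoid idle cost
        "maxvCpus": 256,
        "instanceTypes": ["optimal"],  # let Batch pick instance sizes for the jobs
        "subnets": ["subnet-aaaa1111"],            # placeholder subnet ID
        "securityGroupIds": ["sg-bbbb2222"],       # placeholder security group
        "instanceRole": "arn:aws:iam::123456789012:instance-profile/ecsInstanceRole",
    },
    "serviceRole": "arn:aws:iam::123456789012:role/AWSBatchServiceRole",
}

# With real credentials, this request would be submitted as:
#   import boto3
#   boto3.client("batch").create_compute_environment(**request)
```

Setting `minvCpus` to 0 matters for cost here: because the results are not time sensitive, the environment can sit at zero capacity and only launch Spot Instances when jobs are queued.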