Topic 1 Question 23
A company is running a data-intensive application on AWS. The application runs on a cluster of hundreds of Amazon EC2 instances. A shared file system also runs on several EC2 instances that store 200 TB of data. The application reads and modifies the data on the shared file system and generates a report. The job runs once monthly, reads a subset of the files from the shared file system, and takes about 72 hours to complete. The compute instances scale in an Auto Scaling group, but the instances that host the shared file system run continuously. The compute and storage instances are all in the same AWS Region. A solutions architect needs to reduce costs by replacing the shared file system instances. The file system must provide high-performance access to the needed data for the duration of the 72-hour run.

Which solution will provide the LARGEST overall cost reduction while meeting these requirements?
A. Migrate the data from the existing shared file system to an Amazon S3 bucket that uses the S3 Intelligent-Tiering storage class. Before the job runs each month, use Amazon FSx for Lustre to create a new file system with the data from Amazon S3 by using lazy loading. Use the new file system as the shared storage for the duration of the job. Delete the file system when the job is complete.
B. Migrate the data from the existing shared file system to a large Amazon Elastic Block Store (Amazon EBS) volume with Multi-Attach enabled. Attach the EBS volume to each of the instances by using a user data script in the Auto Scaling group launch template. Use the EBS volume as the shared storage for the duration of the job. Detach the EBS volume when the job is complete.
C. Migrate the data from the existing shared file system to an Amazon S3 bucket that uses the S3 Standard storage class. Before the job runs each month, use Amazon FSx for Lustre to create a new file system with the data from Amazon S3 by using batch loading. Use the new file system as the shared storage for the duration of the job. Delete the file system when the job is complete.
D. Migrate the data from the existing shared file system to an Amazon S3 bucket. Before the job runs each month, use AWS Storage Gateway to create a file gateway with the data from Amazon S3. Use the file gateway as the shared storage for the job. Delete the file gateway when the job is complete.
User votes
Comments (16)
- masetromain (2022/12/13) · Selected answer: A · 👍 4
- sambb (2023/02/27) · Selected answer: A · 👍 4
  A: Lazy loading is cost-effective because each job reads only a subset of the files.
  B: Not feasible: the cluster has hundreds of EC2 instances, and one Multi-Attach EBS volume can be attached to at most 16 Nitro-based instances.
  C: Batch loading would load far more data than the job needs.
  D: Storage Gateway is meant for on-premises access to data in AWS. I don't know if you can run a file gateway inside AWS, but Amazon would never advise it here.
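As a rough illustration of the lazy-loading setup described above, here is a minimal boto3 sketch of creating an S3-linked scratch FSx for Lustre file system before the monthly job. The region, bucket name, subnet, security group, and storage capacity are placeholders, not values from the question:

```python
import boto3

# Placeholder region; substitute the Region the compute fleet runs in.
fsx = boto3.client("fsx", region_name="us-east-1")

# Create a scratch FSx for Lustre file system linked to the S3 bucket.
# Linking imports only file metadata up front; file contents are
# lazy-loaded from S3 the first time the job reads them.
response = fsx.create_file_system(
    FileSystemType="LUSTRE",
    StorageCapacity=1200,  # GiB, sized for the monthly working subset
    SubnetIds=["subnet-0123456789abcdef0"],      # placeholder
    SecurityGroupIds=["sg-0123456789abcdef0"],   # placeholder
    LustreConfiguration={
        "DeploymentType": "SCRATCH_2",
        "ImportPath": "s3://example-report-data",  # hypothetical bucket
        "ExportPath": "s3://example-report-data",  # write modified files back
    },
)
print(response["FileSystem"]["FileSystemId"])
```

Because only the metadata is imported at creation time, the file system can be sized for the working subset rather than the full 200 TB, which is where much of the saving comes from.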
- masetromain (2023/01/13) · Selected answer: A · 👍 3
  This solution provides the largest overall cost reduction while meeting the requirements. Migrating the data to an S3 bucket with the S3 Intelligent-Tiering storage class gives the company automatic cost optimization, which can yield significant savings. Using Amazon FSx for Lustre to create a new file system from the data in Amazon S3 then gives high-performance access to the needed data for the duration of the 72-hour run, and deleting the file system when the job is complete reduces costs further.
  Options B, C, and D may provide some savings over the current solution, but less than option A: between the storage cost, the data transfer cost, and the cost of the storage gateway, the S3 plus FSx for Lustre approach is the most cost-effective while meeting the requirements.
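To round out the create-use-delete lifecycle this comment outlines, here is a hedged sketch of the monthly run. The file system ID is a placeholder from the create call above, and the polling loop is hand-rolled because boto3 ships no built-in waiter for FSx:

```python
import time

import boto3

fsx = boto3.client("fsx")

def wait_until_available(file_system_id: str, delay_seconds: int = 60) -> None:
    """Poll until the file system is usable (boto3 has no FSx waiter)."""
    while True:
        fs = fsx.describe_file_systems(
            FileSystemIds=[file_system_id]
        )["FileSystems"][0]
        if fs["Lifecycle"] == "AVAILABLE":
            return
        if fs["Lifecycle"] == "FAILED":
            raise RuntimeError(f"{file_system_id} failed to create")
        time.sleep(delay_seconds)

file_system_id = "fs-0123456789abcdef0"  # placeholder
wait_until_available(file_system_id)

# ... mount the file system on the compute fleet and run the 72-hour job ...

# Delete afterwards so no FSx cost accrues between monthly runs.
fsx.delete_file_system(FileSystemId=file_system_id)
```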