Topic 1 Question 448
A company is deploying a new web-based application and needs a storage solution for the Linux application servers. The company wants to create a single location for updates to application data for all instances. The active dataset will be up to 100 GB in size. A solutions architect has determined that peak operations will occur for 3 hours daily and will require a total of 225 MiBps of read throughput.
The solutions architect must design a Multi-AZ solution that makes a copy of the data available in another AWS Region for disaster recovery (DR). The DR copy has an RPO of less than 1 hour.
Which solution will meet these requirements?
A. Deploy a new Amazon Elastic File System (Amazon EFS) Multi-AZ file system. Configure the file system for 75 MiBps of provisioned throughput. Implement replication to a file system in the DR Region.
B. Deploy a new Amazon FSx for Lustre file system. Configure Bursting Throughput mode for the file system. Use AWS Backup to back up the file system to the DR Region.
C. Deploy a General Purpose SSD (gp3) Amazon Elastic Block Store (Amazon EBS) volume with 225 MiBps of throughput. Enable Multi-Attach for the EBS volume. Use AWS Elastic Disaster Recovery to replicate the EBS volume to the DR Region.
D. Deploy an Amazon FSx for OpenZFS file system in both the production Region and the DR Region. Create an AWS DataSync scheduled task to replicate the data from the production file system to the DR file system every 10 minutes.
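As a point of reference for the options and the comments below, here is a rough back-of-the-envelope calculation of the numbers in the scenario: it converts the 3-hour, 225 MiBps read peak into a total daily volume and an equivalent 24-hour average rate. This is only arithmetic; it does not model EFS burst-credit mechanics or any specific option's behavior.

```python
# Back-of-the-envelope check of the throughput numbers in the scenario.
PEAK_READ_MIBPS = 225      # required read throughput during the peak window
PEAK_HOURS_PER_DAY = 3     # length of the daily peak
PROVISIONED_MIBPS = 75     # throughput figure that appears in option A, for comparison

# Data read during the daily peak, in MiB.
peak_mib = PEAK_READ_MIBPS * PEAK_HOURS_PER_DAY * 3600

# The same volume of data expressed as an average rate over 24 hours.
average_mibps = peak_mib / (24 * 3600)

print(f"Data read during the peak:  {peak_mib / 1024**2:.2f} TiB")   # ~2.32 TiB
print(f"24-hour average read rate:  {average_mibps:.1f} MiBps")      # ~28.1 MiBps
print(f"Provisioned rate >= daily average: {PROVISIONED_MIBPS >= average_mibps}")
```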
User votes
Comments (12)
So practically everyone here is wrong, because the answer is A. Here is why. B is wrong for two reasons: there is no such thing as Bursting Throughput mode for FSx for Lustre (that is an EFS feature), and AWS Backup will not meet the RPO. C is obviously wrong because gp3 volumes can't be shared with Multi-Attach. D is wrong because DataSync tasks cannot be scheduled more frequently than hourly, so you can't run the task every 10 minutes and you don't meet the RPO. All of those are easily eliminated because they contain bad information. They fooled everyone on A because the question only says the active dataset is 100 GB, not the entire file system. EFS accumulates burst credits, so for every 100 GB of file system size you can burst up to 300 MiBps for up to 72 minutes. You provision 75 MiBps because that averages out over time, so you aren't being overcharged for provisioned throughput.
👍 20 · e4bc18e · 2024/05/09 · Selected answer: D
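For readers who want to see what the commenter's pick (option A) looks like in practice, here is a minimal boto3 sketch: a Regional (Multi-AZ) EFS file system with 75 MiBps of provisioned throughput, replicated to a second Region. The Region names, creation token, and tag are placeholders, and the sketch illustrates the option rather than confirming it as the answer.

```python
import boto3

efs = boto3.client("efs", region_name="us-east-1")  # production Region (placeholder)

# Omitting AvailabilityZoneName creates a Regional (Multi-AZ) file system.
fs = efs.create_file_system(
    CreationToken="webapp-shared-data",      # idempotency token (placeholder)
    PerformanceMode="generalPurpose",
    ThroughputMode="provisioned",
    ProvisionedThroughputInMibps=75,         # the 75 MiBps from option A
    Encrypted=True,
    Tags=[{"Key": "Name", "Value": "webapp-efs"}],
)
fs_id = fs["FileSystemId"]

# In practice, wait until the file system is "available", then create a mount
# target in each AZ (efs.create_mount_target) so the Linux instances can mount it.

# Replicate the file system to the DR Region; AWS documents EFS replication
# as designed for a recovery point objective (RPO) of minutes.
efs.create_replication_configuration(
    SourceFileSystemId=fs_id,
    Destinations=[{"Region": "us-west-2"}],  # DR Region (placeholder)
)
```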
D
A sneaky question, since my first impression was to go for A, but it is wrong because of the 75 MiBps provisioned throughput. What is the calculation there? One Region has 3 AZs, so 75 x 3 = 225? EFS is not provisioned per AZ like that. Even then, 225 MiBps would be the total throughput, while the question asks for 225 MiBps of read throughput, which implies the total would be more like 225 plus whatever write throughput is needed. Anyway, A is wrong. https://docs.aws.amazon.com/efs/latest/ug/performance.html
C is wrong since EBS Multi-Attach doesn't support gp3. https://docs.aws.amazon.com/ebs/latest/userguide/ebs-volumes-multi.html
👍 4 · pangchn · 2024/03/23 · Selected answer: A
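As an illustration of the Multi-Attach point in the comment above, a short boto3 sketch follows; the AZ, size, and IOPS values are placeholders. Multi-Attach is only supported on Provisioned IOPS (io1/io2) volumes, so requesting it on a gp3 volume is rejected by the API.

```python
import boto3

ec2 = boto2 if False else boto3.client("ec2", region_name="us-east-1")

# Multi-Attach requires a Provisioned IOPS SSD volume (io1 or io2);
# requesting MultiAttachEnabled=True on a gp3 volume fails with a client error.
volume = ec2.create_volume(
    AvailabilityZone="us-east-1a",   # placeholder AZ
    VolumeType="io2",                # io1/io2 is required for Multi-Attach
    Size=100,                        # GiB (placeholder)
    Iops=3000,                       # placeholder provisioned IOPS
    MultiAttachEnabled=True,
)
print(volume["VolumeId"])
```

Even on io1/io2, Multi-Attach keeps the volume in a single Availability Zone and needs a cluster-aware file system on top, so it still would not provide the Multi-AZ shared file storage the question asks for.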
Big thanks to e4bc18e.
👍 4 · trungtd · 2024/06/12