Topic 1 Question 690
A company regularly uploads GB-sized files to Amazon S3. After the files are uploaded, the company uses a fleet of Amazon EC2 Spot Instances to transcode the file format. The company needs to scale throughput when it uploads data from the on-premises data center to Amazon S3 and when it downloads data from Amazon S3 to the EC2 instances.
Which solutions will meet these requirements? (Choose two.)
A. Use an S3 bucket access point instead of accessing the S3 bucket directly.
B. Upload the files into multiple S3 buckets.
C. Use S3 multipart uploads.
D. Fetch multiple byte-ranges of an object in parallel.
E. Add a random prefix to each object when uploading the files.
Comments (9)
CD. C increases the file upload throughput; D increases the file download throughput.
👍 10 betttty 2024/02/05 - Selected answer: CD
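To make option C concrete, here is a minimal sketch of a multipart upload using boto3's transfer manager. The bucket name, key, file path, and part sizing are illustrative placeholders, not from the question.

```python
import boto3
from boto3.s3.transfer import TransferConfig

# Placeholder names: bucket, key, and local path are illustrative only.
s3 = boto3.client("s3")

# Above the threshold, upload_file switches to multipart and pushes
# parts over concurrent connections, raising aggregate upload throughput.
config = TransferConfig(
    multipart_threshold=64 * 1024 * 1024,  # switch to multipart at 64 MB
    multipart_chunksize=64 * 1024 * 1024,  # 64 MB parts
    max_concurrency=10,                    # parallel part uploads
)

s3.upload_file("video.mov", "example-bucket", "input/video.mov", Config=config)
```

The parallel part uploads are what lift aggregate throughput for GB-sized files, which is the requirement the question targets.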
C: Upload: multipart is the clear fit. D: Download: you can fetch a byte-range from an object, transferring only the specified portion, and you can use concurrent connections to Amazon S3 to fetch different byte ranges from within the same object. This helps you achieve higher aggregate throughput versus a single whole-object request.
A: S3 Access Points can be easily scaled, but they are typically used to simplify data access for AWS services or customer applications that store data in S3, not to raise throughput. E: Prefixes: you can increase your read or write performance by using parallelization. For example, if you create 10 prefixes in an Amazon S3 bucket to parallelize reads, you could scale your read performance to 55,000 read requests per second. But the wording of this option is strange...
👍 5 sandordini 2024/04/24 - Selected answer: CD
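And a companion sketch for option D: downloading one large object with parallel byte-range GETs, as the comment above describes. Again, the bucket, key, output path, and part size are assumed placeholders.

```python
import boto3
from concurrent.futures import ThreadPoolExecutor

# Placeholder names: bucket, key, output path, and part size are illustrative.
s3 = boto3.client("s3")
BUCKET, KEY, OUT = "example-bucket", "input/video.mov", "video.mov"
PART_SIZE = 64 * 1024 * 1024  # 64 MB per range request

# Get the object size with a HEAD request, then pre-size the local file.
size = s3.head_object(Bucket=BUCKET, Key=KEY)["ContentLength"]
with open(OUT, "wb") as f:
    f.truncate(size)

def fetch(offset):
    # Each worker fetches one byte range on its own connection and
    # writes it at the matching offset in the local file.
    end = min(offset + PART_SIZE, size) - 1
    resp = s3.get_object(Bucket=BUCKET, Key=KEY, Range=f"bytes={offset}-{end}")
    with open(OUT, "r+b") as f:
        f.seek(offset)
        f.write(resp["Body"].read())

with ThreadPoolExecutor(max_workers=10) as pool:
    list(pool.map(fetch, range(0, size, PART_SIZE)))
```

Each range request runs over its own connection, so aggregate download throughput scales with the worker count instead of being limited by a single GET.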
CD are the correct options
👍 3 Darshan07 2024/02/12