Topic 1 Question 460
A company is running a web-crawling process on a list of target URLs to obtain training documents for machine learning training algorithms. A fleet of Amazon EC2 t2.micro instances pulls the target URLs from an Amazon Simple Queue Service (Amazon SQS) queue. The instances then write the result of the crawling algorithm as a .csv file to an Amazon Elastic File System (Amazon EFS) volume. The EFS volume is mounted on all instances of the fleet.
A separate system adds the URLs to the SQS queue at infrequent rates. The instances crawl each URL in 10 seconds or less.
Metrics indicate that some instances are idle when no URLs are in the SQS queue. A solutions architect needs to redesign the architecture to optimize costs.
Which combination of steps will meet these requirements MOST cost-effectively? (Select TWO.)
A. Use m5.8xlarge instances instead of t2.micro instances for the web-crawling process. Reduce the number of instances in the fleet by 50%.
B. Convert the web-crawling process into an AWS Lambda function. Configure the Lambda function to pull URLs from the SQS queue.
C. Modify the web-crawling process to store results in Amazon Neptune.
D. Modify the web-crawling process to store results in an Amazon Aurora Serverless MySQL instance.
E. Modify the web-crawling process to store results in Amazon S3.
Comments (4)
- Selected answer: BE
A is utter rubbish: scaling out is not what we need. B is optimal in terms of cost. C and D involve fairly expensive databases that are not suitable for this use case; moreover, Neptune must run in a VPC. E is optimal in terms of accessibility and cost.
👍 3 · Dgix · 2024/03/20

- Selected answer: BE
BE. Lambda + S3; the process doesn't need a database.
👍 3 · pangchn · 2024/03/24

- Selected answer: BE
Use Lambda instead of a fleet of EC2 instances, and store the results in cost-effective S3.
👍 1 · CMMC · 2024/03/19
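As the comments above argue, the B + E combination replaces the always-on EC2 fleet with an SQS-triggered AWS Lambda function that writes each crawl result as a .csv object to Amazon S3, so compute is billed only while a URL is actually being crawled (each crawl finishes in 10 seconds or less, well within Lambda's limits). Below is a minimal, hypothetical sketch of such a handler; the bucket name, environment variable, object key scheme, and the crawl logic itself are illustrative assumptions, not part of the question.

```python
# Hypothetical sketch of the B + E design: an SQS-triggered Lambda crawls each URL
# and stores the result as a .csv object in S3. Names and crawl logic are assumptions.
import csv
import io
import os
import urllib.request

import boto3

s3 = boto3.client("s3")
RESULTS_BUCKET = os.environ.get("RESULTS_BUCKET", "example-crawl-results")  # assumed bucket name


def crawl(url: str):
    """Placeholder crawl: fetch the page and record URL, HTTP status, and byte count."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        body = resp.read()
        status = resp.status
    return [["url", "status", "bytes"], [url, str(status), str(len(body))]]


def handler(event, context):
    # With an SQS event source mapping, Lambda receives a batch of messages;
    # each message body is assumed to be a single target URL.
    records = event.get("Records", [])
    for record in records:
        url = record["body"].strip()
        rows = crawl(url)

        buf = io.StringIO()
        csv.writer(buf).writerows(rows)

        # One .csv object per crawled URL, keyed by the SQS message ID.
        key = f"crawl-results/{record['messageId']}.csv"
        s3.put_object(Bucket=RESULTS_BUCKET, Key=key, Body=buf.getvalue().encode("utf-8"))

    return {"processed": len(records)}
```

Because Lambda bills per request and per millisecond of execution, the idle periods the original metrics revealed no longer cost anything, and S3 charges only for the objects actually stored.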