Topic 1 Question 73
A company receives a daily file that contains customer data in .xls format. The company stores the file in Amazon S3. The daily file is approximately 2 GB in size. A data engineer concatenates the column in the file that contains customer first names and the column that contains customer last names. The data engineer needs to determine the number of distinct customers in the file.

Which solution will meet this requirement with the LEAST operational effort?
A. Create and run an Apache Spark job in an AWS Glue notebook. Configure the job to read the S3 file and calculate the number of distinct customers.
B. Create an AWS Glue crawler to create an AWS Glue Data Catalog of the S3 file. Run SQL queries from Amazon Athena to calculate the number of distinct customers.
C. Create and run an Apache Spark job in Amazon EMR Serverless to calculate the number of distinct customers.
D. Use AWS Glue DataBrew to create a recipe that uses the COUNT_DISTINCT aggregate function to calculate the number of distinct customers.
Comments (4)
rralucard_ · 2024/02/02 · 👍 9
Selected Answer: D
AWS Glue DataBrew is a visual data-preparation tool that lets data engineers and data analysts clean and normalize data without writing code. Using DataBrew, a data engineer could create a recipe that concatenates the customer first- and last-name columns and then applies the COUNT_DISTINCT aggregate function. This requires no code and can be done entirely through the DataBrew user interface, so it represents the lowest operational effort.

lucas_rfsb · 2024/04/02 · 👍 2
Selected Answer: D
Since it's the least operational effort, I would go with D.

Ousseyni · 2024/04/17 · 👍 2
Selected Answer: D
Go with D.
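Whichever option is chosen, the underlying computation is the same: concatenate the two name columns, then count the distinct results. The following is a minimal plain-Python sketch of that logic on a small inline sample; the column names `first_name` and `last_name` and the sample rows are assumptions for illustration, not taken from the question's actual file.

```python
import csv
import io

# Hypothetical sample standing in for the daily customer file;
# real column names and data are not given in the question.
SAMPLE = """first_name,last_name
Ana,Silva
Bo,Chen
Ana,Silva
Ana,Silvan
"""

def count_distinct_customers(text: str) -> int:
    """Concatenate first and last names, then count distinct full names."""
    reader = csv.DictReader(io.StringIO(text))
    full_names = {row["first_name"] + " " + row["last_name"] for row in reader}
    return len(full_names)

print(count_distinct_customers(SAMPLE))  # → 3 (the duplicate "Ana Silva" row counts once)
```

DataBrew's COUNT_DISTINCT aggregate, an Athena `COUNT(DISTINCT ...)` query, and Spark's `distinct().count()` all perform this same set-based deduplication; the options differ only in how much infrastructure and code the engineer must manage.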