Topic 1 Question 70
You are designing storage for very large text files for a data pipeline on Google Cloud. You want to support ANSI SQL queries. You also want to support compression and parallel load from the input locations using Google recommended practices. What should you do?
A. Transform text files to compressed Avro using Cloud Dataflow. Use BigQuery for storage and query.
B. Transform text files to compressed Avro using Cloud Dataflow. Use Cloud Storage and BigQuery permanent linked tables for query.
C. Compress text files to gzip using the Grid Computing Tools. Use BigQuery for storage and query.
D. Compress text files to gzip using the Grid Computing Tools. Use Cloud Storage, and then import into Cloud Bigtable for query.
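Options A and B both begin by converting the raw text to compressed Avro with Cloud Dataflow. As a point of reference, here is a minimal Apache Beam (Python) sketch of that conversion step, assuming hypothetical bucket paths and a trivial one-field record schema (recent Beam releases accept a plain dict schema via fastavro):

```python
import apache_beam as beam
from apache_beam.io import ReadFromText
from apache_beam.io.avroio import WriteToAvro

# Hypothetical schema: one record per input line. A real pipeline
# would parse each line into typed fields instead.
schema = {
    "type": "record",
    "name": "Line",
    "fields": [{"name": "text", "type": "string"}],
}

with beam.Pipeline() as pipeline:  # pass DataflowRunner options to run on Dataflow
    (
        pipeline
        | "Read text" >> ReadFromText("gs://my-bucket/input/*.txt")
        | "To records" >> beam.Map(lambda line: {"text": line})
        | "Write Avro" >> WriteToAvro(
            "gs://my-bucket/avro/part",
            schema=schema,
            file_name_suffix=".avro",
            codec="deflate",  # compressed data blocks remain splittable
        )
    )
```

Avro's block-level compression is what lets BigQuery read the files in parallel even though they are compressed; gzip-compressed text (options C and D) does not allow this, since a gzip file must be read serially by a single worker.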
Comments (17)
B. The question is focused on designing storage for very large files, with support for compression, ANSI SQL queries, and parallel loading from the input locations. These requirements can be met by using Cloud Storage (GCS) for storage and BigQuery permanent tables with an external data source in GCS.
👍 49 | Ganshank | 2020/04/11
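To make the option-B arrangement from the comment above concrete, here is a hedged sketch using the google-cloud-bigquery Python client to create a permanent table backed by Avro files in Cloud Storage; the project, dataset, table, and bucket names are all hypothetical:

```python
from google.cloud import bigquery

client = bigquery.Client(project="my-project")  # hypothetical project ID

# Point a permanent external table at compressed Avro files in GCS.
external_config = bigquery.ExternalConfig("AVRO")
external_config.source_uris = ["gs://my-bucket/avro/*.avro"]

table = bigquery.Table("my-project.my_dataset.large_text_events")
table.external_data_configuration = external_config
client.create_table(table)  # the data itself stays in Cloud Storage

# The linked table is queryable with standard ANSI SQL.
query = "SELECT COUNT(*) AS n FROM `my-project.my_dataset.large_text_events`"
for row in client.query(query).result():
    print(row.n)
```

Because the table is permanent rather than a temporary per-query external table, it can be shared and queried like any native table while the data remains in GCS.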
Should be A.
👍 13 | [Removed] | 2020/03/21

Selected answer: B
https://cloud.google.com/bigquery/docs/loading-data-cloud-storage-avro: "The Avro binary format is faster to load. The data can be read in parallel, even if the data blocks are compressed."
👍 4 | Lui1979 | 2022/05/08
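The parallel-read property quoted above is also what makes the load path in option A fast: a single load job over a wildcard URI fans out across the compressed Avro files. A sketch with the same hypothetical names as above:

```python
from google.cloud import bigquery

client = bigquery.Client(project="my-project")  # hypothetical project ID

# Load compressed Avro from GCS into native BigQuery storage (option A).
job_config = bigquery.LoadJobConfig(source_format=bigquery.SourceFormat.AVRO)
load_job = client.load_table_from_uri(
    "gs://my-bucket/avro/*.avro",           # wildcard enables parallel reads
    "my-project.my_dataset.events_native",  # hypothetical destination table
    job_config=job_config,
)
load_job.result()  # block until the load job completes
```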