Topic 1 Question 297
You migrated your on-premises Apache Hadoop Distributed File System (HDFS) data lake to Cloud Storage. The data scientist team needs to process the data by using Apache Spark and SQL. Security policies need to be enforced at the column level. You need a cost-effective solution that can scale into a data mesh. What should you do?
A.
- Deploy a long-lived Dataproc cluster with Apache Hive and Ranger enabled.
- Configure Ranger for column-level security.
- Process with Dataproc Spark or Hive SQL.

B.
- Define a BigLake table.
- Create a taxonomy of policy tags in Data Catalog.
- Add policy tags to columns.
- Process with the Spark-BigQuery connector or BigQuery SQL.

C.
- Load the data to BigQuery tables.
- Create a taxonomy of policy tags in Data Catalog.
- Add policy tags to columns.
- Process with the Spark-BigQuery connector or BigQuery SQL.

D.
- Apply an Identity and Access Management (IAM) policy at the file level in Cloud Storage.
- Define a BigQuery external table for SQL processing.
- Use Dataproc Spark to process the Cloud Storage files.
Comments (3)
- Selected answer: B
BigLake queries the data in place on Cloud Storage, eliminating the need for a dedicated long-lived Dataproc cluster and significantly reducing costs.
👍 2 · Jordan18 · 2024/01/07
- Selected answer: B
- BigLake Integration: BigLake allows you to define tables on top of data in Cloud Storage, providing a bridge between data lake storage and BigQuery's powerful analytics capabilities. This approach is cost-effective and scalable.
- Data Catalog for Governance: Creating a taxonomy of policy tags in Google Cloud's Data Catalog and applying these tags to specific columns in your BigLake tables enables fine-grained, column-level access control (a minimal sketch follows this list).
- Processing with Spark and SQL: The Spark-BigQuery connector allows data scientists to process data using Apache Spark directly against BigQuery (and BigLake tables). This supports both Spark and SQL processing needs.
- Scalability into a Data Mesh: BigLake and Data Catalog are designed to scale and support the data mesh architecture, which involves decentralized data ownership and governance.
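
To make the policy-tag step concrete, a minimal sketch with the google-cloud-bigquery Python client might look like this. It assumes a taxonomy and policy tag were already created in Data Catalog; the project, table, column, and taxonomy/tag IDs are placeholders, not values from the question.

```python
# Minimal sketch: attach an existing Data Catalog policy tag to one column
# of a BigQuery/BigLake table. All resource names are placeholders.
from google.cloud import bigquery
from google.cloud.bigquery import PolicyTagList, SchemaField

client = bigquery.Client(project="my-project")      # placeholder project
table = client.get_table("my-project.lake.events")  # placeholder table

# Fully qualified name of a policy tag created beforehand in a
# Data Catalog taxonomy (placeholder IDs).
PII_TAG = (
    "projects/my-project/locations/us"
    "/taxonomies/1234567890/policyTags/9876543210"
)

# Rebuild the schema, tagging only the sensitive column.
new_schema = []
for field in table.schema:
    if field.name == "email":  # placeholder sensitive column
        field = SchemaField(
            field.name,
            field.field_type,
            mode=field.mode,
            policy_tags=PolicyTagList(names=[PII_TAG]),
        )
    new_schema.append(field)

table.schema = new_schema
client.update_table(table, ["schema"])  # patch only the schema
```

Note that the tag only restricts access once access control is enforced on the taxonomy and readers are granted the Fine-Grained Reader role on the tag. A companion sketch of the Spark-BigQuery connector read step appears after the last comment.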
👍 2 · raaad · 2024/01/11
- Selected answer: C
C.
- Load the data to BigQuery tables.
- Create a taxonomy of policy tags in Data Catalog.
- Add policy tags to columns.
- Process with the Spark-BigQuery connector or BigQuery SQL.
👍 1 · scaenruy · 2024/01/04
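
For the processing step shared by options B and C, a minimal PySpark sketch of reading the governed table through the Spark-BigQuery connector could look like the following. It assumes the connector jar (for example, spark-bigquery-with-dependencies) is already on the cluster classpath, and it reuses the placeholder table and column names from the sketch above.

```python
# Minimal sketch: read a policy-tagged BigLake/BigQuery table from Spark
# via the Spark-BigQuery connector. Table and column names are placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("column-security-demo").getOrCreate()

df = (
    spark.read.format("bigquery")
    .option("table", "my-project.lake.events")  # placeholder table
    .load()
    # Column-level security is enforced server-side: selecting a column
    # protected by a policy tag fails unless the caller holds the
    # Fine-Grained Reader role on that tag.
    .select("event_id", "event_ts")  # placeholder non-restricted columns
)

df.show(5)
```

Because enforcement happens in BigQuery rather than on the cluster, the same column-level policy applies whether the data scientists use Spark or BigQuery SQL, which is what lets this design scale into a data mesh without per-cluster security configuration of the kind option A's Ranger setup would require.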