Topic 1 Question 269
Your organization's data assets are stored in BigQuery, Pub/Sub, and a PostgreSQL instance running on Compute Engine. Because there are multiple domains and diverse teams using the data, teams in your organization are unable to discover existing data assets. You need to design a solution to improve data discoverability while keeping development and configuration efforts to a minimum. What should you do?
A. Use Data Catalog to automatically catalog BigQuery datasets. Use Data Catalog APIs to manually catalog Pub/Sub topics and PostgreSQL tables.
B. Use Data Catalog to automatically catalog BigQuery datasets and Pub/Sub topics. Use Data Catalog APIs to manually catalog PostgreSQL tables.
C. Use Data Catalog to automatically catalog BigQuery datasets and Pub/Sub topics. Use custom connectors to manually catalog PostgreSQL tables.
D. Use custom connectors to manually catalog BigQuery datasets, Pub/Sub topics, and PostgreSQL tables.
Comments (3)
- Selected Answer: B
  It uses Data Catalog's native support for both BigQuery datasets and Pub/Sub topics. For PostgreSQL tables running on a Compute Engine instance, you would use the Data Catalog APIs to create custom entries, as Data Catalog does not automatically discover external databases such as PostgreSQL (see the sketch after the comments).
  👍 1 (raaad, 2024/01/05)
- B looks like the better option, since it requires little development effort. C does not look right, as custom connectors would need a lot of development effort.
  👍 1 (GCP001, 2024/01/08)
- Selected Answer: B
  Option B: Data Catalog automatically maps out GCP resources, and development effort is minimized by leveraging the Data Catalog API to do the same for the PostgreSQL database.
  👍 1 (Matt_108, 2024/01/13)
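To make the recommended approach (option B) concrete, here is a minimal sketch of registering a PostgreSQL table as a Data Catalog custom entry with the Python client library google-cloud-datacatalog. The project, location, entry group, table, and column names are illustrative placeholders, not values from the question; BigQuery datasets and Pub/Sub topics do not need this step because Data Catalog indexes them automatically.

```python
# Minimal sketch: catalog one PostgreSQL table (hosted on Compute Engine) as a
# Data Catalog custom entry. Assumes `pip install google-cloud-datacatalog`;
# project, location, entry group, and table/column names are placeholders.
from google.cloud import datacatalog_v1

project_id = "my-project"   # placeholder
location = "us-central1"    # placeholder

client = datacatalog_v1.DataCatalogClient()

# 1. Create an entry group to hold the manually cataloged PostgreSQL entries.
entry_group = client.create_entry_group(
    parent=datacatalog_v1.DataCatalogClient.common_location_path(project_id, location),
    entry_group_id="postgresql_on_gce",
    entry_group=datacatalog_v1.types.EntryGroup(
        display_name="PostgreSQL on Compute Engine",
        description="Tables from the self-managed PostgreSQL instance.",
    ),
)

# 2. Describe one PostgreSQL table as a custom (user-specified) entry.
entry = datacatalog_v1.types.Entry(
    display_name="orders",
    description="Orders table in the PostgreSQL database on Compute Engine.",
    user_specified_system="postgresql",  # marks a source outside Google Cloud
    user_specified_type="table",
    linked_resource="//example-gce-instance/databases/sales/tables/orders",
)
entry.schema.columns.append(
    datacatalog_v1.types.ColumnSchema(
        column="order_id", type_="INT64", description="Primary key"
    )
)
entry.schema.columns.append(
    datacatalog_v1.types.ColumnSchema(
        column="created_at", type_="TIMESTAMP", description="Order creation time"
    )
)

# 3. Create the entry inside the entry group so it becomes searchable.
created = client.create_entry(
    parent=entry_group.name, entry_id="orders", entry=entry
)
print(f"Created entry: {created.name}")
```

A short script like this (for example, one that reads the PostgreSQL information_schema and loops over tables) keeps the manual cataloging effort low compared with building and operating full custom connectors.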