Topic 1 Question 146
Your company wants to migrate their 10-TB on-premises database export into Cloud Storage. You want to minimize the time it takes to complete this activity, the overall cost, and database load. The bandwidth between the on-premises environment and Google Cloud is 1 Gbps. You want to follow Google-recommended practices. What should you do?
A. Develop a Dataflow job to read data directly from the database and write it into Cloud Storage.
B. Use the Transfer Appliance to perform an offline migration.
C. Use a commercial partner ETL solution to extract the data from the on-premises database and upload it into Cloud Storage.
D. Compress the data and upload it with gsutil -m to enable multi-threaded copy.
Comments (17)
This is pretty simple. Time to transfer using the Transfer Appliance: 1-3 weeks (I've used it twice and had a 2-3 week turnaround total). Time to transfer over 1 Gbps: ~30 hours (https://cloud.google.com/architecture/migration-to-google-cloud-transferring-your-large-datasets).
Answer is D, using gsutil.
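The transfer-time math in the comment above can be sketched as a quick back-of-envelope calculation. This is a minimal illustration, not an official tool; the `efficiency` factor is an assumption added here to show how link overhead pushes the ideal ~22 hours toward the ~30-hour figure the cited Google chart gives:

```python
def transfer_hours(size_tb: float, bandwidth_gbps: float, efficiency: float = 1.0) -> float:
    """Estimate wire time to move size_tb (decimal terabytes) over a link
    of bandwidth_gbps, scaled by an assumed effective-utilization factor."""
    bits = size_tb * 1e12 * 8                      # terabytes -> bits
    seconds = bits / (bandwidth_gbps * 1e9 * efficiency)
    return seconds / 3600

# 10 TB over a fully utilized 1 Gbps link: ~22.2 hours
ideal = transfer_hours(10, 1.0)

# With an assumed ~75% effective utilization, closer to the ~30 hours cited
realistic = transfer_hours(10, 1.0, efficiency=0.75)

print(round(ideal, 1), round(realistic, 1))  # prints: 22.2 29.6
```

Either way, the online transfer finishes in roughly a day, versus weeks for a shipped appliance, which is the crux of the D-vs-B debate below.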
👍 65
pr2web · 2021/09/08
No perfect answer, as B and D both have flaws. B has time latency, since the Transfer Appliance usually takes weeks; D, gsutil, is recommended only for transfers of less than 1 TB. The best answer would be the Storage Transfer Service for on-premises data, which is not available here.
If I have to choose one, I go for B.
👍 17
gingerbeer · 2021/09/27
Answer is B. Although the transfer time for 10 TB at 1 Gbps is about 30 hours, Google recommends gsutil only for data transfers of less than 1 TB. Perhaps the Storage Transfer Service would have been a better choice, but it is not listed. Ref: https://cloud.google.com/architecture/migration-to-google-cloud-transferring-your-large-datasets#transfer-options
👍 3
cert2020 · 2023/03/13