Topic 1 Question 101
Your company is building a new architecture to support its data-centric business focus. You are responsible for setting up the network. Your company's mobile and web-facing applications will be deployed on-premises, and all data analysis will be conducted in GCP. The plan is to process and load 7 years of archived .csv files totaling 900 TB of data and then continue loading 10 TB of data daily. You currently have an existing 100-MB internet connection. What actions will meet your company's needs?
A. Compress and upload both archived files and files uploaded daily using the gsutil -m option.
B. Lease a Transfer Appliance, upload archived files to it, and send it to Google to transfer archived data to Cloud Storage. Establish a connection with Google using a Dedicated Interconnect or Direct Peering connection and use it to upload files daily.
C. Lease a Transfer Appliance, upload archived files to it, and send it to Google to transfer archived data to Cloud Storage. Establish one Cloud VPN Tunnel to VPC networks over the public internet, and compress and upload files daily using the gsutil -m option.
D. Lease a Transfer Appliance, upload archived files to it, and send it to Google to transfer archived data to Cloud Storage. Establish a Cloud VPN Tunnel to VPC networks over the public internet, and compress and upload files daily.
User votes
Comments (17)
With option A, the daily data alone would take about 27 hours to transfer. My answer is B. What do you think?
👍 48 · KouShikyou · 2019/10/08
Agree, B. A 100-Mbps connection takes far too long to transfer 10 TB of data daily.
https://cloud.google.com/solutions/transferring-big-data-sets-to-gcp#close
👍 20 · wk · 2019/10/19
Hmm, not easy. Everybody tends toward B, but I think that can't be correct, for a simple reason...
A is obviously not an option because of the initial amount of data. Compression could reduce the volume dramatically (I have seen examples down to 5% of the original size), but that assumes the data compresses well; if its entropy is high, compression is not necessarily the decisive factor.
B: The answer says nothing about the reachability of the Interconnect/Peering (i.e., the IXPs), so one has to assume they CAN'T be reached. That means only C or D can be correct.
I would tend toward C, because Google would naturally recommend its own tools (gsutil). The hope, then, is that compression plus the VPN's bandwidth is enough to accomplish the job.
👍 3 · cpi_web · 2022/05/13
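The transfer-time arithmetic the commenters are arguing over can be sketched quickly. A minimal back-of-the-envelope in Python, assuming the question's "100-MB" link means 100 Mbps and using decimal units (1 TB = 10^12 bytes):

```python
def transfer_days(data_tb: float, link_mbps: float) -> float:
    """Days needed to move data_tb terabytes over a link of link_mbps megabits/s."""
    bits = data_tb * 1e12 * 8           # terabytes -> bits (decimal units)
    seconds = bits / (link_mbps * 1e6)  # divide by link speed in bits/s
    return seconds / 86_400             # seconds -> days

# 900 TB of archives over the existing 100-Mbps link:
print(f"archive: {transfer_days(900, 100):.0f} days")  # -> 833 days
# 10 TB of new data over the same link:
print(f"daily:   {transfer_days(10, 100):.1f} days")   # -> 9.3 days
```

At 100 Mbps, each day's 10 TB would itself take over nine days to upload, so the link can never keep up; the first commenter's "27 hours" figure only works out if one reads the link as 100 MB/s (bytes, not bits). Either way, the daily load needs a much faster pipe (Interconnect/Peering), and the 900-TB backlog needs a Transfer Appliance.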