Topic 1 Question 119
You need to migrate Hadoop jobs for your company's Data Science team without modifying the underlying infrastructure. You want to minimize costs and infrastructure management effort. What should you do?
A. Create a Dataproc cluster using standard worker instances.
B. Create a Dataproc cluster using preemptible worker instances.
C. Manually deploy a Hadoop cluster on Compute Engine using standard instances.
D. Manually deploy a Hadoop cluster on Compute Engine using preemptible instances.
Comments (17)
TotoroChina (2021/06/30, 👍 56):
Should be B; you want to minimize costs. https://cloud.google.com/dataproc/docs/concepts/compute/secondary-vms#preemptible_and_non-preemptible_secondary_workers

firecloud (2021/07/28, 👍 30):
It's A: primary workers can only be standard instances; only secondary workers can be preemptible. From the docs: "In addition to using standard Compute Engine VMs as Dataproc workers (called 'primary' workers), Dataproc clusters can use 'secondary' workers. There are two types of secondary workers: preemptible and non-preemptible. All secondary workers in your cluster must be of the same type, either preemptible or non-preemptible. The default is preemptible."

windsor_43 (2022/12/31, 👍 4):
The answer is B. Just had my exam today and passed; this question was on it (31/12/22). Thanks to this site, it was by far my most valuable resource.
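For reference, the split described in the comments (standard primary workers plus a cheaper preemptible secondary pool) can be sketched as a single gcloud command. This is a hedged illustration, not part of the question: the cluster name, project, region, and worker counts are all placeholder assumptions.

```shell
# Minimal sketch of a Dataproc cluster combining standard primary workers
# with preemptible secondary workers. All names and counts are placeholders.
#
# --num-workers:            primary workers, which are always standard VMs.
# --num-secondary-workers:  the preemptible pool that keeps costs down.
# --secondary-worker-type:  "preemptible" is also the default secondary type.
gcloud dataproc clusters create my-cluster \
    --project=my-project \
    --region=us-central1 \
    --num-workers=2 \
    --num-secondary-workers=4 \
    --secondary-worker-type=preemptible
```

Because preemptible VMs can be reclaimed at any time, they are only used as secondary workers; HDFS and cluster control stay on the standard primary workers.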