Topic 1 Question 106
You are managing a Cloud Dataproc cluster. You need to make a job run faster while minimizing costs, without losing work in progress on your clusters. What should you do?
A. Increase the cluster size with more non-preemptible workers.
B. Increase the cluster size with preemptible worker nodes, and configure them to forcefully decommission.
C. Increase the cluster size with preemptible worker nodes, and use Cloud Stackdriver to trigger a script to preserve work.
D. Increase the cluster size with preemptible worker nodes, and configure them to use graceful decommissioning.
Comments (7)
- 👍 5 · AzureDP900 · 2022/12/30
- Selected answer: D
  From https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/scaling-clusters, "Why scale a Dataproc cluster?":
  - to increase the number of workers to make a job run faster
  - to decrease the number of workers to save money (see Graceful Decommissioning as an option to use when downsizing a cluster to avoid losing work in progress)
  - to increase the number of nodes to expand available Hadoop Distributed Filesystem (HDFS) storage
- 👍 3 · John_Pongthorn · 2022/09/14
  A. "Graceful decommissioning" is not a configuration value but a parameter passed with the scale-down action. From the same doc: "to decrease the number of workers to save money (see Graceful Decommissioning as an option to use when downsizing a cluster to avoid losing work in progress)." (A code sketch of passing this parameter follows the comments.)
- 👍 2 · skp57 · 2022/11/12
  Shuffle mode
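To make the scaling flow behind answer D concrete, here is a minimal sketch using the google-cloud-dataproc Python client: it adds preemptible (secondary) workers to make the job run faster, then later shrinks that group while passing a graceful decommission timeout so work in progress on the removed nodes can finish. The project, region, cluster name, worker counts, timeout value, and the helper function name are hypothetical placeholders, not values from the question.

```python
from google.cloud import dataproc_v1
from google.protobuf import duration_pb2, field_mask_pb2

# Hypothetical placeholders; replace with real values.
project_id = "my-project"
region = "us-central1"
cluster_name = "my-cluster"

# Cluster operations must go through the regional Dataproc endpoint.
client = dataproc_v1.ClusterControllerClient(
    client_options={"api_endpoint": f"{region}-dataproc.googleapis.com:443"}
)

def set_secondary_workers(num_instances, graceful_timeout_s=None):
    """Resize the preemptible (secondary) worker group of the cluster."""
    request = {
        "project_id": project_id,
        "region": region,
        "cluster_name": cluster_name,
        "cluster": {
            "cluster_name": cluster_name,
            "config": {
                "secondary_worker_config": {"num_instances": num_instances}
            },
        },
        "update_mask": field_mask_pb2.FieldMask(
            paths=["config.secondary_worker_config.num_instances"]
        ),
    }
    if graceful_timeout_s is not None:
        # Graceful decommissioning: workers being removed are allowed to
        # finish in-progress work (up to the timeout) before shutdown.
        request["graceful_decommission_timeout"] = duration_pb2.Duration(
            seconds=graceful_timeout_s
        )
    operation = client.update_cluster(request=request)
    return operation.result()  # Block until the resize completes.

# Scale up with preemptible workers to speed up the running job.
set_secondary_workers(10)

# Later, scale back down to save money, with a one-hour graceful
# decommission timeout so in-progress tasks are not lost.
set_secondary_workers(2, graceful_timeout_s=3600)
```

The equivalent gcloud invocation would be along the lines of `gcloud dataproc clusters update CLUSTER --region=REGION --num-secondary-workers=N --graceful-decommission-timeout=1h`. Note that the timeout only takes effect when workers are removed, which is consistent with the comment above: graceful decommissioning is a parameter of the scale-down request, not a standing cluster setting.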
