Topic 1 Question 52
You are implementing security best practices on your data pipeline. Currently, you are manually executing jobs as the Project Owner. You want to automate these jobs by taking nightly batch files containing non-public information from Google Cloud Storage, processing them with a Spark Scala job on a Google Cloud Dataproc cluster, and depositing the results into Google BigQuery. How should you securely run this workload?
A. Restrict the Google Cloud Storage bucket so only you can see the files
B. Grant the Project Owner role to a service account, and run the job with it
C. Use a service account with the ability to read the batch files and to write to BigQuery
D. Use a user account with the Project Viewer role on the Cloud Dataproc cluster to read the batch files and write to BigQuery
Comments (17)
A is wrong: if only I can see the bucket, no automation is possible, and the workload also needs to launch the Dataproc job. B is too broad and does not follow security best practices. C is missing one point: you still need permission to submit Dataproc jobs. In D, the Viewer role will not be able to submit Dataproc jobs; the rest is OK.
Thus the only one that would fully work is B, BUT that service account has far too many permissions. It should instead have Dataproc Editor, BigQuery write access, and read access to the bucket.
👍 30 · digvijay · 2020/03/24 · Should be C
👍 29 · rickywck · 2020/03/16 · Why are there so many wrong answers? ExamTopics.com, are you enjoying paid subscriptions by giving random answers from people? Answer: C
👍 5 · JG123 · 2021/11/26
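The least-privilege setup the comments describe (read the batch files, submit/run Dataproc work, write to BigQuery) could be sketched with `gcloud` roughly as below. This is only a sketch: the project, bucket, and service-account names are hypothetical placeholders, and exact role choices may vary by organization.

```shell
# Hypothetical names -- substitute your own project, bucket, and region.
PROJECT=my-project
SA=dataproc-batch@${PROJECT}.iam.gserviceaccount.com

# Create a dedicated service account for the nightly batch job.
gcloud iam service-accounts create dataproc-batch \
    --project="${PROJECT}" \
    --display-name="Nightly Dataproc batch job"

# Read-only access to the batch files in Cloud Storage.
gsutil iam ch "serviceAccount:${SA}:roles/storage.objectViewer" \
    gs://my-batch-bucket

# Write access to BigQuery for depositing results.
gcloud projects add-iam-policy-binding "${PROJECT}" \
    --member="serviceAccount:${SA}" \
    --role="roles/bigquery.dataEditor"

# Let the account act as a Dataproc worker, then run the cluster as it,
# so jobs on the cluster inherit only these scoped permissions.
gcloud projects add-iam-policy-binding "${PROJECT}" \
    --member="serviceAccount:${SA}" \
    --role="roles/dataproc.worker"

gcloud dataproc clusters create nightly-cluster \
    --region=us-central1 \
    --service-account="${SA}"
```

With this in place, the Spark Scala job submitted to `nightly-cluster` runs under the scoped service account rather than the Project Owner's user credentials, which is the intent behind answer C.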