Topic 1 Question 142
You have built a model that is trained on data stored in Parquet files. You access the data through a Hive table hosted on Google Cloud. You preprocessed this data with PySpark and exported it as a CSV file into Cloud Storage. After preprocessing, you execute additional steps to train and evaluate your model. You want to parametrize this model training in Kubeflow Pipelines. What should you do?
A. Remove the data transformation step from your pipeline.
B. Containerize the PySpark transformation step, and add it to your pipeline.
C. Add a ContainerOp to your pipeline that spins up a Dataproc cluster, runs a transformation, and then saves the transformed data in Cloud Storage.
D. Deploy Apache Spark in a separate node pool in a Google Kubernetes Engine cluster. Add a ContainerOp to your pipeline that invokes a corresponding transformation job for this Spark instance.
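For context, here is a minimal sketch (KFP v1 SDK) of what option C could look like: a single ContainerOp that creates a Dataproc cluster, submits the PySpark transformation, deletes the cluster, and then hands the Cloud Storage output to a parametrized training step. The project ID, image URIs, bucket paths, and script names below are placeholders, not part of the original question.

```python
# Sketch only: KFP v1 pipeline with a Dataproc preprocessing ContainerOp (option C)
# followed by a parametrized training step. All names and paths are hypothetical.
from kfp import dsl


@dsl.pipeline(
    name="parametrized-training",
    description="Dataproc preprocessing followed by model training",
)
def training_pipeline(
    project_id: str = "my-project",                         # hypothetical project
    region: str = "us-central1",
    cluster_name: str = "preprocess-cluster",
    pyspark_script: str = "gs://my-bucket/preprocess.py",   # hypothetical script
    output_path: str = "gs://my-bucket/preprocessed.csv",   # hypothetical output
    learning_rate: float = 0.01,
):
    # Spin up Dataproc, run the PySpark transformation, save the result to
    # Cloud Storage, then delete the cluster. The gcloud image is illustrative.
    preprocess = dsl.ContainerOp(
        name="dataproc-preprocess",
        image="google/cloud-sdk:slim",
        command=["bash", "-c"],
        arguments=[
            f"gcloud dataproc clusters create {cluster_name} "
            f"--project={project_id} --region={region} && "
            f"gcloud dataproc jobs submit pyspark {pyspark_script} "
            f"--cluster={cluster_name} --project={project_id} --region={region} "
            f"-- --output={output_path} && "
            f"gcloud dataproc clusters delete {cluster_name} "
            f"--project={project_id} --region={region} --quiet"
        ],
    )

    # Training step reads the transformed CSV from Cloud Storage.
    train = dsl.ContainerOp(
        name="train-model",
        image="gcr.io/my-project/trainer:latest",           # hypothetical trainer image
        arguments=["--data", output_path, "--learning-rate", learning_rate],
    )
    train.after(preprocess)
```

Because the cluster name, script path, output path, and hyperparameters are pipeline parameters, the same pipeline can be rerun against different datasets without editing the preprocessing code.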
Comments (5)
- Selected answer: C
  This will allow you to reuse the same pipeline for different datasets without the need to manually preprocess and transform the data each time.
  👍 5 (mil_spyro, 2022/12/13)
- Selected answer: B
  All the wrong answers on this site really baffle me... correct answer is B... you must containerize your component for Kubeflow to run it.
  👍 4 (chidstar, 2023/02/26)
- Selected answer: C
  Answer C
  👍 2 (TNT87, 2022/12/27)
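As the comment arguing for B points out, every Kubeflow Pipelines step ultimately runs as a container. A hedged sketch of that reading, the PySpark transformation packaged as a reusable containerized component and loaded from a component spec, is below; the image URI, script path, and argument names are assumptions for illustration. Under either reading, the transformation step stays inside the pipeline rather than being run manually beforehand.

```python
# Sketch only: the PySpark transformation packaged as a reusable KFP v1 component
# (option B's reading). Image, script path, and argument names are hypothetical.
from kfp import components, dsl

preprocess_op = components.load_component_from_text("""
name: pyspark-preprocess
description: Run the PySpark transformation and write a CSV to Cloud Storage
inputs:
- {name: input_table, type: String}
- {name: output_csv, type: String}
implementation:
  container:
    image: gcr.io/my-project/pyspark-preprocess:latest
    command: [python, /app/preprocess.py]
    args:
    - --input-table
    - {inputValue: input_table}
    - --output-csv
    - {inputValue: output_csv}
""")


@dsl.pipeline(name="pipeline-with-containerized-preprocessing")
def pipeline(input_table: str = "hive_db.events",
             output_csv: str = "gs://my-bucket/preprocessed.csv"):
    # The containerized transformation becomes an ordinary, parametrized pipeline step.
    preprocess = preprocess_op(input_table=input_table, output_csv=output_csv)
```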