Topic 1 Question 171
A machine learning (ML) specialist wants to create a data preparation job that uses a PySpark script with complex window aggregation operations to create data for training and testing. The ML specialist needs to evaluate the impact of the number of features and the sample count on model performance. Which approach should the ML specialist use to determine the ideal data transformations for the model?
A. Add an Amazon SageMaker Debugger hook to the script to capture key metrics. Run the script as an AWS Glue job.
B. Add an Amazon SageMaker Experiments tracker to the script to capture key metrics. Run the script as an AWS Glue job.
C. Add an Amazon SageMaker Debugger hook to the script to capture key parameters. Run the script as a SageMaker processing job.
D. Add an Amazon SageMaker Experiments tracker to the script to capture key parameters. Run the script as a SageMaker processing job.
User votes
Comments (9)
- Selected answer: D
While I agree that SageMaker Experiments is the way to go, it only supports training, processing, and transform jobs, so the right approach is to run the script as a SageMaker processing job; hence D, not B.
👍 11 — dolorez, 2022/05/23 · 👍 7 — bluer1, 2022/04/28
- Selected answer: B
👍 5 — ovokpus, 2022/06/24
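For reference, here is a minimal sketch of what answer D could look like in practice: a PySpark script that logs key parameters with a SageMaker Experiments tracker, submitted as a SageMaker processing job. All file names, S3 paths, column names, experiment/trial names, and logged values below are hypothetical, and the sketch assumes the `sagemaker` and `smexperiments` Python packages are available.

```python
# preprocess.py -- PySpark script run inside the SageMaker processing job.
# Hypothetical sketch: paths, column names, and parameter values are illustrative.
from pyspark.sql import SparkSession, Window, functions as F
from smexperiments.tracker import Tracker

spark = SparkSession.builder.appName("data-prep").getOrCreate()
df = spark.read.parquet("s3://my-bucket/raw/")  # hypothetical input location

# Example window aggregation feature (a stand-in for the "complex" operations).
w = Window.partitionBy("customer_id").orderBy("event_time").rowsBetween(-6, 0)
df = df.withColumn("rolling_spend", F.sum("amount").over(w))

# Inside a SageMaker job, Tracker.load() attaches to the trial component
# SageMaker created for the job, so the parameters appear in Experiments.
with Tracker.load() as tracker:
    tracker.log_parameters({
        "num_features": len(df.columns) - 1,  # assuming one label column
        "sample_count": df.count(),
    })

df.write.mode("overwrite").parquet("s3://my-bucket/prepared/train")  # hypothetical output
```

```python
# launch.py -- submits the script above as a SageMaker processing job.
from sagemaker.spark.processing import PySparkProcessor

processor = PySparkProcessor(
    base_job_name="feature-prep",  # hypothetical
    framework_version="3.1",
    role="arn:aws:iam::123456789012:role/SageMakerRole",  # hypothetical role ARN
    instance_count=2,
    instance_type="ml.m5.xlarge",
)
processor.run(
    submit_app="preprocess.py",
    experiment_config={  # ties this run to a SageMaker Experiments trial
        "ExperimentName": "data-prep-study",      # hypothetical
        "TrialName": "features-50-samples-100k",  # hypothetical
    },
)
```

Because the processing job is launched with an `experiment_config`, the tracker loaded inside the script attaches to that job's trial component, so the logged feature and sample counts can be compared across runs to find the best-performing data transformations.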