Topic 1 Question 172
You created an ML pipeline with multiple input parameters. You want to investigate the tradeoffs between different parameter combinations. The parameter options are:
- Input dataset
- Max tree depth of the boosted tree regressor
- Optimizer learning rate

You need to compare the pipeline performance of the different parameter combinations, measured in F1 score, time to train, and model complexity. You want your approach to be reproducible and to track all pipeline runs on the same platform. What should you do?
- A. Use BigQuery ML to create a boosted tree regressor, and use the hyperparameter tuning capability. Configure the hyperparameter syntax to select different input datasets, max tree depths, and optimizer learning rates. Choose the grid search option.
- B. Create a Vertex AI pipeline with a custom model training job as part of the pipeline. Configure the pipeline's parameters to include those you are investigating. In the custom training step, use the Bayesian optimization method with F1 score as the target to maximize.
- C. Create a Vertex AI Workbench notebook for each of the different input datasets. In each notebook, run different local training jobs with different combinations of the max tree depth and optimizer learning rate parameters. After each notebook finishes, append the results to a BigQuery table.
- D. Create an experiment in Vertex AI Experiments. Create a Vertex AI pipeline with a custom model training job as part of the pipeline. Configure the pipeline's parameters to include those you are investigating. Submit multiple runs to the same experiment, using different values for the parameters.
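Option D can be sketched with a small script: enumerate every combination of the three parameters, then submit each one as a run in a single Vertex AI experiment. The dataset URIs, parameter values, project ID, and experiment name below are hypothetical placeholders, and the Vertex AI SDK calls are shown as comments because they require a GCP project; this is a sketch, not a definitive implementation.

```python
import itertools

# Hypothetical parameter grid for the three pipeline inputs under investigation.
datasets = ["gs://example-bucket/data_v1.csv", "gs://example-bucket/data_v2.csv"]
max_depths = [4, 8]
learning_rates = [0.01, 0.1]

# Every combination becomes one tracked run (2 * 2 * 2 = 8 here).
combos = list(itertools.product(datasets, max_depths, learning_rates))
print(len(combos))  # → 8

# Sketch of submitting each combination to one Vertex AI experiment
# (requires google-cloud-aiplatform and GCP credentials; not executed here):
#
# from google.cloud import aiplatform
# aiplatform.init(project="my-project", location="us-central1",
#                 experiment="tree-param-tradeoffs")  # names are assumptions
# for i, (dataset, depth, lr) in enumerate(combos):
#     aiplatform.start_run(f"run-{i}")
#     aiplatform.log_params({"dataset": dataset, "max_depth": depth,
#                            "learning_rate": lr})
#     ...  # launch the pipeline / training job with these parameter values
#     aiplatform.log_metrics({"f1_score": ..., "train_time_s": ...})
#     aiplatform.end_run()
```

Because every run lands in the same experiment with its parameters and metrics logged, the runs can be compared side by side in the Vertex AI console, which is what makes this approach reproducible and trackable on one platform.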
Comments (3)
- pikachu007 (2024/01/10, 1 upvote) — Selected answer: D
  Given the objective of investigating parameter tradeoffs while ensuring reproducibility and tracking, option D ("Create an experiment in Vertex AI Experiments and submit multiple runs to the same experiment, using different values for the parameters") seems the most suitable. This approach provides a structured, trackable environment within Vertex AI Experiments, allowing multiple runs with varied parameters to be monitored for F1 score, training time, and potentially model complexity, enabling a comprehensive analysis of the tradeoffs between parameter combinations.
- BlehMaks (2024/01/12, 1 upvote) — Selected answer: D
  Vertex AI Experiments was created to compare runs. A is incorrect because you can't create a boosted tree using BigQuery ML: https://cloud.google.com/bigquery/docs/bqml-introduction#supported_models
- 36bdc1e (2024/01/13, 1 upvote) — Selected answer: D
  The best option for investigating the tradeoffs between different parameter combinations is to create an experiment in Vertex AI Experiments.