Topic 1 Question 133
You have recently created a proof-of-concept (POC) deep learning model. You are satisfied with the overall architecture, but you need to determine the values for a couple of hyperparameters. You want to perform hyperparameter tuning on Vertex AI to determine both the appropriate embedding dimension for a categorical feature used by your model and the optimal learning rate. You configure the following settings:
• For the embedding dimension, you set the type to INTEGER with a minValue of 16 and a maxValue of 64.
• For the learning rate, you set the type to DOUBLE with a minValue of 10e-05 and a maxValue of 10e-02.
You are using the default Bayesian optimization tuning algorithm, and you want to maximize model accuracy. Training time is not a concern. How should you set the hyperparameter scaling for each hyperparameter and the maxParallelTrials?
A. Use UNIT_LINEAR_SCALE for the embedding dimension, UNIT_LOG_SCALE for the learning rate, and a large number of parallel trials.
B. Use UNIT_LINEAR_SCALE for the embedding dimension, UNIT_LOG_SCALE for the learning rate, and a small number of parallel trials.
C. Use UNIT_LOG_SCALE for the embedding dimension, UNIT_LINEAR_SCALE for the learning rate, and a large number of parallel trials.
D. Use UNIT_LOG_SCALE for the embedding dimension, UNIT_LINEAR_SCALE for the learning rate, and a small number of parallel trials.
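For reference, the configuration described in option B would correspond to a Vertex AI HyperparameterTuningJob study spec roughly like the sketch below. Field names follow the Vertex AI REST API; the parameter IDs, metric ID, and trial counts are illustrative assumptions, not values from the question:

```json
{
  "studySpec": {
    "metrics": [
      {"metricId": "accuracy", "goal": "MAXIMIZE"}
    ],
    "parameters": [
      {
        "parameterId": "embedding_dim",
        "integerValueSpec": {"minValue": "16", "maxValue": "64"},
        "scaleType": "UNIT_LINEAR_SCALE"
      },
      {
        "parameterId": "learning_rate",
        "doubleValueSpec": {"minValue": 10e-05, "maxValue": 10e-02},
        "scaleType": "UNIT_LOG_SCALE"
      }
    ],
    "algorithm": "ALGORITHM_UNSPECIFIED"
  },
  "maxTrialCount": 32,
  "parallelTrialCount": 2
}
```

Leaving `algorithm` unspecified selects the default Bayesian optimization. A small `parallelTrialCount` matters because Bayesian optimization conditions each new trial on the results of completed trials; since training time is not a concern here, running fewer trials in parallel (and more sequentially) lets the algorithm exploit more feedback per trial.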
User votes
Comments (10)
YangG (2022/12/13) — Selected answer: B
Vote B.
👍 10

mil_spyro (2022/12/13) — Selected answer: D
Vote D. This can help the tuning algorithm explore a wider range of values for the learning rate, while also focusing on a smaller range of values for the embedding dimension.
👍 2

John_Pongthorn (2023/01/23) — Selected answer: B
Learning rate is subtle and takes time to tune, so it uses log scale.
👍 2