Topic 1 Question 40
You have a functioning end-to-end ML pipeline that involves tuning the hyperparameters of your ML model using AI Platform, and then using the best-tuned parameters for training. Hypertuning is taking longer than expected and is delaying the downstream processes. You want to speed up the tuning job without significantly compromising its effectiveness. Which actions should you take? (Choose two.)
A. Decrease the number of parallel trials.
B. Decrease the range of floating-point values.
C. Set the early stopping parameter to TRUE.
D. Change the search algorithm from Bayesian search to random search.
E. Decrease the maximum number of trials during subsequent training phases.
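For context, all five options map to fields of the hyperparameter spec inside an AI Platform Training job request. The Python sketch below (the dict form of the REST trainingInput) is illustrative only; the parameter names (learning_rate, batch_size), ranges, metric tag, bucket path, and module name are assumptions, not details given in the question.

    # Illustrative AI Platform Training spec, focused on the tuning fields the
    # answer options refer to. Names, ranges, and paths are made-up examples.
    hyperparameter_spec = {
        "goal": "MAXIMIZE",
        "hyperparameterMetricTag": "accuracy",   # metric reported by the trainer (assumed name)
        "maxTrials": 30,                         # option E concerns this cap on total trials
        "maxParallelTrials": 5,                  # option A concerns how many trials run concurrently
        "enableTrialEarlyStopping": True,        # option C: stop clearly unpromising trials early
        "algorithm": "ALGORITHM_UNSPECIFIED",    # default Bayesian optimization; option D would set "RANDOM_SEARCH"
        "params": [
            {
                "parameterName": "learning_rate",  # assumed hyperparameter
                "type": "DOUBLE",
                "minValue": 1e-4,                  # option B: narrowing this floating-point range shrinks the search space
                "maxValue": 1e-1,
                "scaleType": "UNIT_LOG_SCALE",
            },
            {
                "parameterName": "batch_size",     # assumed hyperparameter
                "type": "DISCRETE",
                "discreteValues": [32, 64, 128],
            },
        ],
    }

    training_input = {
        "scaleTier": "STANDARD_1",
        "packageUris": ["gs://your-bucket/trainer-0.1.tar.gz"],  # placeholder URI
        "pythonModule": "trainer.task",                          # placeholder module
        "region": "us-central1",
        "hyperparameters": hyperparameter_spec,
    }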
Comments (17)
I think it should be C & E. I can't find any reference showing that B can reduce tuning time.
👍 16gcp2021go2021/08/01Answer: B & C (Ref: https://cloud.google.com/ai-platform/training/docs/using-hyperparameter-tuning) (A) Decreasing the number of parallel trials will increase tuning time. (D) Bayesian search works better and faster than random search since it's selective in points to evaluate and uses knowledge of previouls evaluated points. (E) maxTrials should be larger than 10the number of hyperparameters used. And spanning the whole minimum space (10num_hyperparams) already takes some time. So, lowering maxTrials has little effect on reducing tuning time.
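A quick illustration of the maxTrials point above, assuming the "10 × number of hyperparameters" rule of thumb the comment cites and the two example hyperparameters from the earlier sketch:

    # Rough floor on maxTrials under the rule of thumb cited above:
    # at least 10 x the number of hyperparameters being tuned.
    num_hyperparams = 2                        # e.g. learning_rate and batch_size (assumed)
    min_recommended_trials = 10 * num_hyperparams
    print(min_recommended_trials)              # prints 20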
👍 14 · Paul_Dirac · 2021/06/26
I guess it's A & C:
A: "However, running in parallel can reduce the effectiveness of the tuning job overall. That is because hyperparameter tuning uses the results of previous trials" -> See https://cloud.google.com/ai-platform/training/docs/using-hyperparameter-tuning#running_parallel_trials
C: "Training must automatically stop a trial that has become clearly unpromising. This saves you the cost of continuing a trial that is unlikely to be useful" -> See https://cloud.google.com/ai-platform/training/docs/using-hyperparameter-tuning#early-stopping
👍 4 · majejim435 · 2021/10/26
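For reference, a minimal sketch of submitting such a tuning job with the google-api-python-client library, showing where the early-stopping and parallel-trial settings discussed in the comments above are passed; the project ID, job ID, package URI, and metric tag are placeholders, not values from the question.

    # Minimal job-submission sketch using the AI Platform Training REST API via
    # google-api-python-client (runs under Application Default Credentials).
    from googleapiclient import discovery

    training_input = {                                            # trimmed; see the fuller spec above
        "scaleTier": "STANDARD_1",
        "packageUris": ["gs://your-bucket/trainer-0.1.tar.gz"],   # placeholder URI
        "pythonModule": "trainer.task",                           # placeholder module
        "region": "us-central1",
        "hyperparameters": {
            "goal": "MAXIMIZE",
            "hyperparameterMetricTag": "accuracy",                # assumed metric name
            "maxTrials": 30,
            "maxParallelTrials": 5,                               # fewer parallel trials: slower but can improve tuning quality
            "enableTrialEarlyStopping": True,                     # stop clearly unpromising trials early
        },
    }

    project_id = "your-project-id"                                # placeholder
    job_spec = {
        "jobId": "hp_tuning_job_001",                             # placeholder
        "trainingInput": training_input,
    }

    ml = discovery.build("ml", "v1")
    request = ml.projects().jobs().create(
        parent="projects/{}".format(project_id), body=job_spec
    )
    response = request.execute()
    print(response["state"])                                      # e.g. QUEUED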