Topic 1 Question 16
During mini-batch training of a neural network for a classification problem, a Data Scientist notices that training accuracy oscillates. What is the MOST likely cause of this issue?
A. The class distribution in the dataset is imbalanced.
B. Dataset shuffling is disabled.
C. The batch size is too big.
D. The learning rate is very high.
Explanation
Answer is D. To decide whether a weight should be increased or decreased so that the error shrinks, you need to know how the error changes with respect to that weight. So you differentiate, check whether the slope of the tangent is positive or negative, and update the weight in the direction that reduces the error. Repeating this operation moves the weights toward the optimal solution. The size of each update step matters here, and it is determined by the learning rate: if the learning rate is too high, each step overshoots the minimum, so the loss and accuracy oscillate instead of converging.
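As a toy illustration of the point above (not from the original discussion; the quadratic f(w) = w² and the specific learning rates are arbitrary choices for demonstration), a minimal 1-D gradient-descent sketch shows smooth convergence with a small learning rate and oscillating divergence with a large one:

```python
# Minimize f(w) = w**2, whose gradient is 2*w, by plain gradient descent.
# A small learning rate shrinks w smoothly toward 0; a too-large one makes
# each step overshoot the minimum, so w flips sign and grows — oscillation.

def gradient_descent(lr, steps=8, w=1.0):
    history = [w]
    for _ in range(steps):
        grad = 2 * w          # derivative of f(w) = w**2
        w = w - lr * grad     # gradient-descent update
        history.append(w)
    return history

print("lr=0.10:", [round(w, 4) for w in gradient_descent(0.10)])
# -> 1.0, 0.8, 0.64, ... monotone convergence toward the minimum

print("lr=1.05:", [round(w, 4) for w in gradient_descent(1.05)])
# -> 1.0, -1.1, 1.21, -1.331, ... sign flips each step and |w| grows
```

Each update multiplies w by (1 - 2·lr), so any lr above 1 makes that factor smaller than -1: the iterate overshoots to the other side of the minimum by more than it started with, which is exactly the oscillating behavior described in the question.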
Comments (15)

👍 16 · gaku1016 · 2021/09/29: maybe D?
👍 8 · ozan11 · 2021/09/27: If the learning rate is too small, it will take a very long time to reach the bottom. If the learning rate is too big, the updates can oscillate around or away from the bottom. If you are training a neural net and you find that the loss is racing toward infinity, the learning rate is too high.
👍 3 · emailtorajivk · 2021/09/30