Topic 1 Question 84
A Machine Learning Specialist is applying a linear least squares regression model to a dataset with 1,000 records and 50 features. Prior to training, the ML Specialist notices that two features are perfectly linearly dependent. Why could this be an issue for the linear least squares regression model?
A. It could cause the backpropagation algorithm to fail during training
B. It could create a singular matrix during optimization, which fails to define a unique solution
C. It could modify the loss function during optimization, causing it to fail during training
D. It could introduce non-linear dependencies within the data, which could invalidate the linear assumptions of the model
Comments (8)
B is the correct answer.

Paul_NoName (2021/10/05, 👍 20): B: If two features in the dataset are perfectly linearly dependent, one feature can be expressed as a linear combination of the other. This creates a singular matrix during optimization, because the model is trying to fit a linear equation to data in which one variable is fully determined by the other. The optimization problem becomes ill-posed: there is no unique solution that minimizes the sum of squared residuals, so the model cannot settle on a single set of parameter values.

Sneep (2023/01/09, 👍 5): B. See the multicollinearity problem on Wikipedia: https://en.wikipedia.org/wiki/Multicollinearity (second paragraph).
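To see the singular-matrix problem concretely, here is a minimal NumPy sketch (the toy data, the x2 = 2*x1 dependency, and all variable names are illustrative assumptions, not part of the question). With a perfectly dependent column, X^T X is rank-deficient, so the closed-form solution beta = (X^T X)^{-1} X^T y is undefined; an SVD-based solver still returns one minimizer, but infinitely many coefficient vectors fit equally well:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup mirroring the question: 1,000 records, where the second
# feature is a perfect linear function of the first (x2 = 2 * x1),
# so the columns of X are linearly dependent.
n = 1000
x1 = rng.normal(size=n)
X = np.column_stack([x1, 2.0 * x1])
y = 3.0 * x1 + rng.normal(scale=0.1, size=n)

# The normal-equations matrix X^T X has rank 1 instead of 2, so its
# inverse does not exist and beta = (X^T X)^{-1} X^T y is undefined.
XtX = X.T @ X
print("rank of X^T X:", np.linalg.matrix_rank(XtX))  # prints 1

try:
    beta = np.linalg.solve(XtX, X.T @ y)
except np.linalg.LinAlgError as err:
    print("solve failed:", err)  # "Singular matrix"

# An SVD-based solver still returns *a* solution (the minimum-norm one),
# but any (b1, b2) with b1 + 2*b2 = 3 fits the data equally well,
# i.e. there is no unique set of coefficients.
beta, residuals, rank, _ = np.linalg.lstsq(X, y, rcond=None)
print("rank of X:", rank, "one valid solution:", beta)
```

Note that the factor 2 keeps the dependency exact in floating point; with a non-power-of-two factor, X^T X may come out only numerically near-singular, and the solver would return huge, unstable coefficients instead of raising an error, which is the same non-uniqueness problem in a subtler form.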