Topic 1 Question 72
You are building a linear model with over 100 input features, all with values between –1 and 1. You suspect that many features are non-informative. You want to remove the non-informative features from your model while keeping the informative ones in their original form. Which technique should you use?
A. Use principal component analysis (PCA) to eliminate the least informative features.
B. Use L1 regularization to reduce the coefficients of uninformative features to 0.
C. After building your model, use Shapley values to determine which features are the most informative.
D. Use an iterative dropout technique to identify which features do not degrade the model when removed.
Comments (9)
A: PCA transforms the features into new components rather than keeping them in their original form, so no. C: Shapley values only come in after building your model, so they don't remove features during training. D: dropout operates on units inside a network and doesn't tell us which features are informative or not. Big no! For me, it's B.

👍 5

ares81 | 2022/12/11 | Selected answer: B
L1 regularization is good for feature selection because it shrinks the coefficients of uninformative features to exactly 0 (see the sketch below): https://www.quora.com/How-does-the-L1-regularization-method-help-in-feature-selection https://developers.google.com/machine-learning/crash-course/regularization-for-sparsity/l1-regularization
👍 5
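A minimal sketch of the L1 approach described above, using scikit-learn's Lasso on synthetic data. The feature count, true coefficients, and alpha value are illustrative assumptions; in practice alpha would be tuned by cross-validation.

```python
import numpy as np
from sklearn.linear_model import Lasso

# Synthetic data: 100 features in [-1, 1], but only the first 5 are informative.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(1000, 100))
true_coef = np.zeros(100)
true_coef[:5] = [3.0, -2.0, 1.5, -1.0, 0.5]
y = X @ true_coef + rng.normal(scale=0.1, size=1000)

# L1 regularization (Lasso) drives the coefficients of
# uninformative features to exactly 0 during training.
model = Lasso(alpha=0.05)  # alpha is an assumption; tune via cross-validation
model.fit(X, y)

kept = np.flatnonzero(model.coef_)  # indices of features with nonzero weight
print(f"{len(kept)} features kept:", kept)
```

The surviving features keep their original form, which is what the question asks for; PCA, by contrast, would replace them with linear combinations.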
hiromi | 2022/12/18 | Selected answer: C

Answer C: In the official sample questions there is a similar question, and its explanation is that L1 is for reducing overfitting, while explainability (Shapley values) is for feature selection, hence C (see the sketch below). https://docs.google.com/forms/d/e/1FAIpQLSeYmkCANE81qSBqLW0g2X7RoskBX9yGYQu-m1TtsjMvHabGqg/viewform
👍 3
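For contrast, a minimal sketch of the post-hoc Shapley approach from the comment above, assuming the shap package and its LinearExplainer; the data and model are synthetic stand-ins, not from the question.

```python
import numpy as np
import shap  # assumes `pip install shap`
from sklearn.linear_model import LinearRegression

# Same synthetic setup: only the first 5 of 100 features matter.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(1000, 100))
true_coef = np.zeros(100)
true_coef[:5] = [3.0, -2.0, 1.5, -1.0, 0.5]
y = X @ true_coef + rng.normal(scale=0.1, size=1000)

model = LinearRegression().fit(X, y)

# Shapley values explain an already-built model: they rank features
# by importance but do not remove anything by themselves.
explainer = shap.LinearExplainer(model, X)
shap_values = explainer.shap_values(X)          # (n_samples, n_features)
importance = np.abs(shap_values).mean(axis=0)   # global importance per feature
print("Top 5 features by mean |SHAP|:", np.argsort(importance)[::-1][:5])
```

Note that this ranks features only after the model is built and does not itself remove them during training, which is the crux of the B-versus-C disagreement in this thread.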
mlgh | 2023/01/26