Topic 1 Question 76
A company uses a long short-term memory (LSTM) model to evaluate the risk factors of a particular energy sector. The model reviews multi-page text documents to analyze each sentence of the text and categorize it as either a potential risk or no risk. The model is not performing well, even though the Data Scientist has experimented with many different network structures and tuned the corresponding hyperparameters. Which approach will provide the MAXIMUM performance boost?
A. Initialize the words with term frequency-inverse document frequency (TF-IDF) vectors pretrained on a large collection of news articles related to the energy sector.
B. Use gated recurrent units (GRUs) instead of LSTM and run the training process until the validation loss stops decreasing.
C. Reduce the learning rate and run the training process until the training loss stops decreasing.
D. Initialize the words with word2vec embeddings pretrained on a large collection of news articles related to the energy sector.
Comments (13)
I think the right answer is D
👍 24 · jiadong · 2021/09/27: D is correct. C is not the best answer because the question states that tuning hyperparameters has not helped much. Transfer learning would be the better solution.
👍 11SophieSu2021/09/29both A & D "seem" correct, but word2vec takes ORDER of words into acc (to some extent)--while TF-IDF does not. Thus max boost is from D.
👍 6 · bitsplease · 2022/01/19: B and C are wrong because the Data Scientist has already experimented with different network architectures (which covers option B) and tuned the hyperparameters (which covers option C).
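
For readers who want to see what option D looks like in practice, here is a minimal sketch, assuming a TensorFlow/Keras LSTM sentence classifier and gensim word2vec vectors loaded from a hypothetical file of embeddings pretrained on energy-sector news (`energy_news_word2vec.bin`); the file name, sample sentences, and hyperparameters are illustrative assumptions, not part of the exam question.

```python
# Sketch: initialize a Keras embedding layer from domain-pretrained word2vec
# vectors, then feed it into an LSTM binary classifier (risk vs. no risk).
import numpy as np
import tensorflow as tf
from gensim.models import KeyedVectors

# Hypothetical vectors pretrained on energy-sector news articles.
w2v = KeyedVectors.load_word2vec_format("energy_news_word2vec.bin", binary=True)

# In practice, fit the tokenizer on the full training corpus of sentences.
sentences = [
    "regulatory changes may impact refinery margins",
    "quarterly output met expectations",
]
tokenizer = tf.keras.preprocessing.text.Tokenizer()
tokenizer.fit_on_texts(sentences)
vocab_size = len(tokenizer.word_index) + 1  # +1 for the padding index 0

# Copy pretrained vectors into the embedding matrix; out-of-vocabulary
# words keep an all-zero row.
embedding_matrix = np.zeros((vocab_size, w2v.vector_size))
for word, idx in tokenizer.word_index.items():
    if word in w2v:
        embedding_matrix[idx] = w2v[word]

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(
        vocab_size,
        w2v.vector_size,
        embeddings_initializer=tf.keras.initializers.Constant(embedding_matrix),
        trainable=True,  # allow fine-tuning on the labeled risk sentences
    ),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # risk vs. no risk
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```

Keeping the embedding layer trainable lets the domain-pretrained vectors be fine-tuned on the labeled risk sentences, which is the transfer-learning rationale the comments give for choosing D over further architecture or learning-rate changes.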