Topic 1 Question 135
You work for the AI team of an automobile company, and you are developing a visual defect detection model using TensorFlow and Keras. To improve your model performance, you want to incorporate some image augmentation functions such as translation, cropping, and contrast tweaking. You randomly apply these functions to each training batch. You want to optimize your data processing pipeline for run time and compute resources utilization. What should you do?
A. Embed the augmentation functions dynamically in the tf.data pipeline.
B. Embed the augmentation functions dynamically as part of Keras generators.
C. Use Dataflow to create all possible augmentations, and store them as TFRecords.
D. Use Dataflow to create the augmentations dynamically per training run, and stage them as TFRecords.
Comments (10)
- Selected answer: A
By incorporating the augmentation functions into the tf.data pipeline, you can apply them dynamically to each training batch, without needing to generate all possible augmentations in advance or stage them as TFRecords.
👍 3 · mil_spyro · 2022/12/13

- Selected answer: B
👍 2 · YangG · 2022/12/13
- Selected answer: A
Embedding the augmentation functions dynamically in the tf.data pipeline lets the augmentations be applied on the fly as the data is loaded into the model during training. The model can therefore use compute resources effectively, loading and processing data as needed rather than pre-generating all possible augmentations ahead of time (as in options C and D), which could be computationally expensive and time-consuming.
Option B is also a viable choice, but it may not be as efficient as option A, since Keras generators apply the augmentation functions in Python during training, which can add overhead.
👍 2 · shankalman717 · 2023/02/24
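For anyone who wants to see what option A looks like in practice, below is a minimal sketch (the image size, dataset shapes, and crop/contrast parameters are illustrative assumptions, not from the question): the random translation/crop and contrast ops live inside `Dataset.map`, so every batch is augmented on the fly during training instead of being materialized as TFRecords up front.

```python
import tensorflow as tf

IMG_SIZE = 224  # assumed input resolution; adjust to the real model

def augment(image, label):
    # Random crop from a slightly larger image gives translation + cropping in one step.
    image = tf.image.resize(image, (IMG_SIZE + 32, IMG_SIZE + 32))
    image = tf.image.random_crop(image, size=(IMG_SIZE, IMG_SIZE, 3))
    image = tf.image.random_contrast(image, lower=0.8, upper=1.2)
    return image, label

# Synthetic stand-in for the real defect-image dataset (shapes and labels are placeholders).
images = tf.random.uniform((64, 256, 256, 3))
labels = tf.random.uniform((64,), maxval=2, dtype=tf.int32)

dataset = (
    tf.data.Dataset.from_tensor_slices((images, labels))
    .map(augment, num_parallel_calls=tf.data.AUTOTUNE)  # runs per sample, per epoch, on the fly
    .batch(32)
    .prefetch(tf.data.AUTOTUNE)  # overlaps augmentation with model training steps
)

# model.fit(dataset, epochs=...) then receives freshly augmented batches every epoch.
```

Because the augmentation runs inside the tf.data graph, it is parallelized across CPU cores and overlapped with training via prefetch, which is the run-time and resource-utilization benefit the question is asking about.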