Topic 1 Question 231
You recently deployed several data processing jobs into your Cloud Composer 2 environment. You notice that some tasks are failing in Apache Airflow. On the monitoring dashboard, you see an increase in total worker memory usage, and there were worker pod evictions. You need to resolve these errors. What should you do?
A. Increase the directed acyclic graph (DAG) file parsing interval.
B. Increase the Cloud Composer 2 environment size from medium to large.
C. Increase the maximum number of workers and reduce worker concurrency.
D. Increase the memory available to the Airflow workers.
E. Increase the memory available to the Airflow triggerer.
Comments (3)
Selected Answer: B
B & D.
B: Scaling up the environment size provides more resources, including memory, to the Airflow workers. If worker pod evictions are occurring due to insufficient memory, increasing the environment size to allocate more resources could alleviate the problem and improve the stability of your data processing jobs.
D: Directly increasing the memory allocation for the Airflow workers addresses the high memory usage and worker pod evictions. More memory per worker means each worker can handle more demanding tasks, or a higher volume of tasks, without running out of memory.
👍 2 · raaad · 2024/01/04

Selected Answer: C
C and D. Check this reference for memory optimization: https://cloud.google.com/composer/docs/composer-2/optimize-environments
👍 2 · GCP001 · 2024/01/10

Selected Answer: C
C and D.
👍 1 · Jordan18 · 2024/01/06
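
For context on what option D changes in practice, here is a minimal sketch of patching the worker memory with the google-cloud-orchestration-airflow Python client. The project, location, environment name, resource figures, and update-mask path are illustrative assumptions, not values from the question; check the Environments.patch documentation before relying on them.

```python
# Sketch: raise Airflow worker memory in a Cloud Composer 2 environment
# (option D). All names and sizes below are placeholders for illustration.
from google.cloud.orchestration.airflow import service_v1

client = service_v1.EnvironmentsClient()

# Hypothetical fully qualified environment name.
env_name = "projects/my-project/locations/us-central1/environments/my-env"

# Populate the full worker resource block: the update mask below replaces
# the whole block, so unspecified fields would otherwise be reset.
environment = service_v1.Environment(
    name=env_name,
    config=service_v1.EnvironmentConfig(
        workloads_config=service_v1.WorkloadsConfig(
            worker=service_v1.WorkloadsConfig.WorkerResource(
                cpu=2.0,        # keep CPU explicit since the mask replaces the block
                memory_gb=8.0,  # assumed bump from a smaller default
                storage_gb=2.0,
                min_count=1,
                max_count=6,
            )
        )
    ),
)

# Long-running operation; restrict the patch to the worker settings.
operation = client.update_environment(
    name=env_name,
    environment=environment,
    update_mask={"paths": ["config.workloads_config.worker"]},
)
operation.result()  # block until the environment update completes
```

The same change is available from the CLI via `gcloud composer environments update` with the `--worker-memory` flag; option B's change corresponds to updating the environment size, and option C's concurrency reduction is an Airflow configuration override (`celery.worker_concurrency`) rather than a resource change.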