Topic 1 Question 42
Your company has grown rapidly and is now ingesting data at a significantly higher rate than before. You manage the daily batch MapReduce analytics jobs in Apache Hadoop, but the recent increase in data volume has caused the batch jobs to fall behind. You have been asked to recommend ways the development team could increase the responsiveness of the analytics without increasing costs. What should you recommend they do?
A. Rewrite the job in Pig.
B. Rewrite the job in Apache Spark.
C. Increase the size of the Hadoop cluster.
D. Decrease the size of the Hadoop cluster but also rewrite the job in Hive.
User votes
Comments (17)
jvg6 · 2020/03/15 · 👍 33
I would say B, since Apache Spark is faster than Hadoop/Pig/MapReduce.

[Removed] · 2020/03/27 · 👍 17
Answer: B. Spark performs in-memory processing and is faster, which reduces the job's processing time.

ler_mp · 2023/01/03 · 👍 7
Wow, a question that does not recommend using a Google product.
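For concreteness, here is a minimal in-process sketch of the MapReduce word-count pattern the question's daily batch jobs would follow. This is plain Python, not Hadoop or Spark API code, and the function names are illustrative only. In Hadoop, intermediate results between these phases are written to disk; Spark chains the same stages over in-memory datasets, which is why rewriting the job in Spark (option B) improves responsiveness without adding cluster capacity.

```python
from collections import defaultdict

# Map phase: emit (word, 1) pairs, as a Hadoop mapper would.
def map_phase(lines):
    for line in lines:
        for word in line.split():
            yield (word, 1)

# Shuffle phase: group values by key (done by the framework in Hadoop,
# with intermediate data spilled to disk between stages).
def shuffle_phase(pairs):
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

# Reduce phase: sum the counts for each word.
def reduce_phase(groups):
    return {word: sum(counts) for word, counts in groups.items()}

lines = ["spark is fast", "hadoop is batch", "spark caches in memory"]
counts = reduce_phase(shuffle_phase(map_phase(lines)))
print(counts["spark"])  # → 2
print(counts["is"])     # → 2
```

Spark expresses this same pipeline as `flatMap` → `reduceByKey` over a resilient distributed dataset held in memory, avoiding the per-stage disk I/O that dominates MapReduce runtimes.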