MapReduce is the elementary data access mechanism for Hadoop. It typically offers the best performance, but not necessarily the best time-to-market: writing MapReduce jobs is trickier than writing Hive or Pig queries. Higher-level projects such as Hive and Pig translate the code you enter into native MapReduce jobs and therefore often come with a performance tradeoff.
A typical MapReduce job follows this process:
- The input data is distributed across different Map processes. Each Map process applies the user-provided Map function.
- The Map processes are executed in parallel.
- Each Map process emits intermediate results. These results are stored, grouped by key, and handed to the reducers; this step is called the shuffle phase.
- Once all intermediate results are available, the Map phase is finished and the Reduce phase starts.
- The Reduce function works on the intermediate results. Like the Map function, it is provided by the user. A minimal sketch of this flow follows the list.
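To make the phases concrete, here is a small, self-contained Java sketch that simulates the flow in memory. It is illustrative only, not the Hadoop API; the class name, the sample records, and the max-temperature aggregation are made up for this example.

    import java.util.*;
    import java.util.stream.*;

    // Illustrative only: a tiny in-memory simulation of the map -> shuffle -> reduce
    // flow described above. This is not the Hadoop API; the data is made up.
    public class MapReduceFlow {

        public static void main(String[] args) {
            // Input records of the form "city,temperature", split across two
            // Map processes.
            List<List<String>> inputSplits = List.of(
                    List.of("vienna,21", "graz,18"),
                    List.of("vienna,25", "graz,16"));

            // Map phase: each split is processed independently (in parallel on a
            // real cluster); the Map function emits intermediate (city, temp) pairs.
            List<Map.Entry<String, Integer>> intermediate = new ArrayList<>();
            for (List<String> split : inputSplits) {
                for (String record : split) {
                    String[] parts = record.split(",");
                    intermediate.add(Map.entry(parts[0], Integer.parseInt(parts[1])));
                }
            }

            // Shuffle phase: intermediate pairs are grouped by key, so each
            // Reduce call sees one key together with all of its values.
            Map<String, List<Integer>> grouped = intermediate.stream()
                    .collect(Collectors.groupingBy(Map.Entry::getKey,
                            Collectors.mapping(Map.Entry::getValue, Collectors.toList())));

            // Reduce phase: the user-provided Reduce function aggregates the
            // values per key; here it picks the maximum temperature per city.
            grouped.forEach((city, temps) ->
                    System.out.println(city + "\t" + Collections.max(temps)));
        }
    }

In a real cluster the Map calls run on different nodes and the shuffle moves data over the network; the grouping by key is what guarantees that each Reduce call sees all values for its key.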
A classic way to demonstrate MapReduce is the word count example, shown in the following listing.
    map(String name, String content):
        for each word w in content:
            EmitIntermediate(w, 1);

    reduce(String word, Iterator intermediateList):
        int result = 0;
        for each v in intermediateList:
            result++;
        Emit(word, result);
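This pseudocode maps closely onto Hadoop's Java API. For comparison, the following is essentially the canonical WordCount job from the official Hadoop MapReduce tutorial; input and output paths are taken from the command line, and the reducer sums the emitted ones instead of incrementing a counter.

    import java.io.IOException;
    import java.util.StringTokenizer;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class WordCount {

        // Map function: tokenize each input line and emit (word, 1) pairs,
        // i.e. EmitIntermediate(w, 1) from the pseudocode above.
        public static class TokenizerMapper
                extends Mapper<Object, Text, Text, IntWritable> {

            private final static IntWritable one = new IntWritable(1);
            private final Text word = new Text();

            @Override
            public void map(Object key, Text value, Context context)
                    throws IOException, InterruptedException {
                StringTokenizer itr = new StringTokenizer(value.toString());
                while (itr.hasMoreTokens()) {
                    word.set(itr.nextToken());
                    context.write(word, one);
                }
            }
        }

        // Reduce function: the shuffle has already grouped the values by word,
        // so summing them yields the count, i.e. Emit(word, result).
        public static class IntSumReducer
                extends Reducer<Text, IntWritable, Text, IntWritable> {

            private final IntWritable result = new IntWritable();

            @Override
            public void reduce(Text key, Iterable<IntWritable> values, Context context)
                    throws IOException, InterruptedException {
                int sum = 0;
                for (IntWritable val : values) {
                    sum += val.get();
                }
                result.set(sum);
                context.write(key, result);
            }
        }

        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            Job job = Job.getInstance(conf, "word count");
            job.setJarByClass(WordCount.class);
            job.setMapperClass(TokenizerMapper.class);
            job.setCombinerClass(IntSumReducer.class); // optional local pre-aggregation
            job.setReducerClass(IntSumReducer.class);
            job.setOutputKeyClass(Text.class);
            job.setOutputValueClass(IntWritable.class);
            FileInputFormat.addInputPath(job, new Path(args[0]));
            FileOutputFormat.setOutputPath(job, new Path(args[1]));
            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }

Packaged into a jar, the job is started with something like hadoop jar wordcount.jar WordCount <input dir> <output dir>; note that the output directory must not exist yet, or Hadoop will refuse to run the job.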