MapReduce is a framework for processing large volumes of data in parallel across a cluster of computers, also known as nodes. It is built around three primary operations, Map, Shuffle, and Reduce, which manipulate and summarize data as key-value pairs. Notable attributes of MapReduce include fault tolerance, redundancy, and parallel execution. Libraries are available in several programming languages, and it is widely deployed through Apache Hadoop, an open-source implementation. While MapReduce has been criticized for various limitations, it has been adapted to settings such as multi-core systems and dynamic cloud environments, and is applied to problems such as distributed pattern searching and web link-graph reversal.
MapReduce is a programming model and an associated implementation for processing and generating big data sets with a parallel, distributed algorithm on a cluster.
A MapReduce program is composed of a map procedure, which performs filtering and sorting (such as sorting students by first name into queues, one queue for each name), and a reduce method, which performs a summary operation (such as counting the number of students in each queue, yielding name frequencies). The "MapReduce System" (also called "infrastructure" or "framework") orchestrates the processing by marshalling the distributed servers, running the various tasks in parallel, managing all communications and data transfers between the various parts of the system, and providing for redundancy and fault tolerance.
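As an illustration, here is a minimal single-machine sketch of the three phases applied to the canonical word-counting example, written in Python. The names map_fn, reduce_fn, and run_mapreduce are hypothetical and do not belong to any particular framework's API; a real system would execute the map and reduce calls on many machines and perform the shuffle over the network.

    from collections import defaultdict
    from itertools import chain

    def map_fn(document):
        # Map: emit an intermediate (key, value) pair for each word.
        for word in document.split():
            yield (word, 1)

    def reduce_fn(key, values):
        # Reduce: apply a summary operation (here, a sum) to all values sharing a key.
        return (key, sum(values))

    def run_mapreduce(documents):
        # Shuffle: group intermediate values by key before reducing.
        groups = defaultdict(list)
        for key, value in chain.from_iterable(map_fn(d) for d in documents):
            groups[key].append(value)
        return [reduce_fn(k, vs) for k, vs in groups.items()]

    print(run_mapreduce(["the quick brown fox", "the lazy dog"]))
    # [('the', 2), ('quick', 1), ('brown', 1), ('fox', 1), ('lazy', 1), ('dog', 1)]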
The model is a specialization of the split-apply-combine strategy for data analysis. It is inspired by the map and reduce functions commonly used in functional programming, although their purpose in the MapReduce framework is not the same as in their original forms. The key contributions of the MapReduce framework are not the actual map and reduce functions (which, for example, resemble the 1995 Message Passing Interface standard's reduce and scatter operations), but the scalability and fault-tolerance achieved for a variety of applications due to parallelization. As such, a single-threaded implementation of MapReduce is usually not faster than a traditional (non-MapReduce) implementation; any gains are usually only seen with multi-threaded implementations on multi-processor hardware. The use of this model is beneficial only when the optimized distributed shuffle operation (which reduces network communication cost) and fault tolerance features of the MapReduce framework come into play. Optimizing the communication cost is essential to a good MapReduce algorithm.
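For comparison, the same word count can be expressed directly with the map and reduce of functional programming, here using Python's built-in map and functools.reduce. Run single-threaded like this (a sketch, not a framework), it gains nothing over an ordinary loop, which is precisely the point made above: the value of MapReduce lies in the distributed shuffle and fault tolerance, not in the functions themselves.

    from functools import reduce

    documents = ["the quick brown fox", "the lazy dog"]

    # map: turn each document into a list of (word, 1) pairs.
    pairs = map(lambda doc: [(w, 1) for w in doc.split()], documents)

    # reduce: fold the per-document pair lists into a single count table.
    def merge(counts, doc_pairs):
        for word, n in doc_pairs:
            counts[word] = counts.get(word, 0) + n
        return counts

    print(reduce(merge, pairs, {}))
    # {'the': 2, 'quick': 1, 'brown': 1, 'fox': 1, 'lazy': 1, 'dog': 1}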
MapReduce libraries have been written in many programming languages, with different levels of optimization. A popular open-source implementation that has support for distributed shuffles is part of Apache Hadoop. The name MapReduce originally referred to the proprietary Google technology, but has since been genericized. By 2014, Google was no longer using MapReduce as their primary big data processing model, and development on Apache Mahout had moved on to more capable and less disk-oriented mechanisms that incorporated full map and reduce capabilities.