A parallel processing model for handling extremely large data sets. First, a Map step transforms the input data into intermediate key-value pairs (tuples); then a Reduce step combines those pairs into a smaller set of result tuples. First introduced by Google, MapReduce is a concept central to Hadoop.
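The two phases can be sketched as a minimal, single-process word-count job in Python. This is an illustration of the model only, not a Hadoop API; the function names (`map_phase`, `shuffle_phase`, `reduce_phase`) are hypothetical, and the intermediate shuffle/group step shown here is performed automatically by real MapReduce frameworks.

```python
from collections import defaultdict

def map_phase(documents):
    # Map: emit a (key, value) tuple for every word in every document.
    for doc in documents:
        for word in doc.split():
            yield (word.lower(), 1)

def shuffle_phase(pairs):
    # Shuffle: group all emitted values by key before reduction.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reduce: combine each key's values into a single (key, total) pair.
    return {key: sum(values) for key, values in groups.items()}

docs = ["the quick brown fox", "the lazy dog"]
counts = reduce_phase(shuffle_phase(map_phase(docs)))
# counts maps each word to its total occurrence count across all documents.
```

Because each document can be mapped independently and each key can be reduced independently, both phases parallelize naturally across many machines, which is what lets the model scale to very large data sets.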