MapReduce's popularity has been matched by an active discussion on how to make it even better, moving from a generalized framework toward more performance-oriented designs. We will be discussing a few frameworks that have tried to adapt MapReduce for higher performance.
The first post in this series discusses AMREF, an Adaptive MapReduce Framework designed for real-time, data-intensive applications (published as Fan Zhang, Junwei Cao, Xiaolong Song, Hong Cai, Cheng Wu: AMREF: An Adaptive MapReduce Framework for Real Time Applications. GCC 2010: 157-162).
How many splitters, mappers, and reducers make up an optimal configuration is always a tricky question. Faced with this challenge, the authors observed that it is normally difficult to predefine these numbers in a way that maximizes performance. The perennial dilemma, according to them, is how to balance full utilization of the nodes against the waiting period for an incoming event.
The splitter, as per the authors, should take on the additional responsibility of observing which mappers are faster or slower and allocating files to each mapper accordingly: a faster mapper receives relatively more files than the others.
In the design proposed by the authors, the 'Adaptive Splitter' would, in stage 1, distribute the input file evenly to the mappers. In stage 2, mappers with different processing capacities would end up with input files of different lengths. Then, in stage 3, a new input file is distributed to the three mappers (in the paper's example) according to their processing capacity.
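To make the idea concrete, here is a minimal Python sketch of capacity-proportional splitting, not the authors' code. Names such as `mapper_capacity` are hypothetical; in practice, capacities would be derived from each mapper's observed processing rate.

```python
def adaptive_split(input_records, mapper_capacity):
    """Divide input_records among mappers in proportion to their
    measured processing capacity (e.g. records/second)."""
    total = sum(mapper_capacity.values())
    splits, start = {}, 0
    for mapper_id, capacity in mapper_capacity.items():
        share = round(len(input_records) * capacity / total)
        splits[mapper_id] = input_records[start:start + share]
        start += share
    # Any remainder left by rounding goes to the last mapper.
    if start < len(input_records):
        splits[mapper_id] += input_records[start:]
    return splits

# Stage 1: equal capacities give an even split.
# Stage 3: unequal capacities give a proportional split, e.g.:
splits = adaptive_split(list(range(1000)),
                        {"m1": 10.0, "m2": 20.0, "m3": 10.0})
# -> m1 gets 250 records, m2 gets 500, m3 gets 250
```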
In the mapping stage, the 'Adaptive Mapper' design increases or decreases the number of mappers based on runtime behavior. A mapper is added dynamically when the existing mappers are observed to be overburdened, or when the workload between mappers and reducers becomes unbalanced. Similarly, the design proposes adaptively removing a mapper when utilization is low.
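The decision rule might look something like the following sketch; the thresholds and the shape of the `mapper_utilizations` input are illustrative assumptions, not values from the paper.

```python
def adjust_mapper_count(mapper_utilizations, pending_map_input,
                        high=0.9, low=0.2):
    """Return +1 to add a mapper, -1 to remove one, 0 to keep the pool."""
    overloaded = all(u > high for u in mapper_utilizations)
    idle = all(u < low for u in mapper_utilizations)
    if overloaded and pending_map_input > 0:
        return +1   # mappers overburdened: add one dynamically
    if idle and len(mapper_utilizations) > 1:
        return -1   # under-utilized: adaptively remove one
    return 0
```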
For the 'Adaptive Reducers', when mapper output arrives too fast for the existing reducers, an adaptive reducer is added in parallel to produce output. Another variant adds a sequential reducer, whose input is the output of the earlier reducers.
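The two variants can be sketched as follows; all names here are illustrative, not from the paper. A "parallel" reducer consumes its own partition of mapper output alongside the existing reducers, while a "sequential" reducer consumes the outputs of earlier reducers.

```python
from functools import reduce

def parallel_reduce(partitions, reduce_fn):
    # Each reducer (including any adaptively added one)
    # handles its own partition of mapper output.
    return [reduce(reduce_fn, p) for p in partitions]

def sequential_reduce(partial_results, reduce_fn):
    # The adaptive reducer takes earlier reducers' outputs as input.
    return reduce(reduce_fn, partial_results)

# Word-count-style summation:
partials = parallel_reduce([[1, 2], [3, 4], [5]], lambda a, b: a + b)
total = sequential_reduce(partials, lambda a, b: a + b)  # 15
```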
The authors used feedback control and stochastic control in their experiments with this design. In the feedback loop, if 95% or more of the servers in the splitting stage surpassed 90% utilization, another splitting node was added to balance the workload. Similarly, if 95% or more of the servers in the splitting stage fell below 20% utilization, one splitting node was adaptively removed. Similar rules were applied to the map and reduce stages.
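A minimal sketch of that feedback rule, assuming per-server utilization readings are available (the function structure is my assumption; the thresholds are the ones stated above):

```python
def feedback_adjust(utilizations, quorum=0.95, high=0.90, low=0.20):
    """Apply the stage-level feedback rule: if 95% or more of the
    servers exceed 90% utilization, add a node; if 95% or more fall
    below 20%, remove one."""
    n = len(utilizations)
    if sum(u > high for u in utilizations) >= quorum * n:
        return +1   # add a splitting (or map/reduce) node
    if sum(u < low for u in utilizations) >= quorum * n and n > 1:
        return -1   # adaptively decrease one node
    return 0
```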
Another interesting technique employed was stochastic control. Here, they relied on predictions about the incoming data, including arrival times, volumes, and traffic spikes, to adjust the network and moderate fluctuations in the incoming data.
As reported in their conclusion, they found Kalman filter prediction to be much more effective than smooth filter prediction. The Kalman filter, named after Rudolf (Rudy) E. Kálmán, has "common application for guidance, navigation and control of vehicles, particularly aircraft and spacecraft. Furthermore, the Kalman filter is a widely applied concept in time series". We will cover the Kalman filter in our subsequent posts, given the huge interest and discussion it has been generating in these circles of late.
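In the meantime, here is a hedged illustration, not the paper's implementation, contrasting a one-dimensional Kalman filter with simple exponential smoothing (a "smooth filter") for predicting an incoming data rate. The tuning parameters `q`, `r`, and `alpha` are assumptions.

```python
def kalman_predict(observations, q=1e-3, r=0.5):
    """Scalar Kalman filter: estimate the underlying rate from noisy
    observations. q = process noise, r = measurement noise."""
    x, p = observations[0], 1.0          # initial state and covariance
    estimates = [x]
    for z in observations[1:]:
        p = p + q                        # predict (constant-rate model)
        k = p / (p + r)                  # Kalman gain
        x = x + k * (z - x)              # update with measurement z
        p = (1 - k) * p
        estimates.append(x)
    return estimates

def smooth_predict(observations, alpha=0.3):
    """Exponential smoothing baseline for comparison."""
    x = observations[0]
    estimates = [x]
    for z in observations[1:]:
        x = alpha * z + (1 - alpha) * x
        estimates.append(x)
    return estimates
```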
Overall, the Adaptive MapReduce approach presented by the authors offers interesting options to the application designer. As claimed, it could have an impact on real-time applications, though the real test will come in commercial implementations subjected to huge data sets in real time.