
Situation-Aware Mappers with Jaql


Adapting MapReduce for higher performance has been a popular discussion topic. Let’s continue our series on Adaptive MapReduce and explore the feature available via Jaql in IBM’s BigInsights commercial offering. This implementation also points to a more important corollary: enterprise offerings of Apache Hadoop are not mere packaging and reselling but have a larger research initiative going on beneath the covers.


First, a snapshot from an IBM InfoSphere BigInsights presentation on what Adaptive MapReduce means in the product context.


To implement this, the guidance offered to IBM InfoSphere BigInsights v1.3 developers is:

Normal Hadoop map tasks take exactly one split of data each.
Adaptive mappers start fewer map tasks and each mapper decides at run time
how many splits to take to process all splits. This process is beneficial if the task
startup is very expensive (for example, the task has to load some reference data),
or if a job has many small splits (for example, the input directory has many small files).
In both cases, adaptive mappers minimize the task startup cost, yet they balance the
workload across the cluster.

Adaptive MapReduce is off by default. 
You can turn it on in your Jaql query by using the setOptions function 
to set adaptivemr.map.enable = true, as shown in the following example:
 
conf = {
  "adaptivemr.map.enable": true
};

// apply the option to subsequent queries via Jaql's setOptions
setOptions( { conf: conf } );

Now, let us dig deeper and explore two papers that demonstrate how Adaptive MapReduce can be leveraged using Situation-Aware Mappers (SAM) and Jaql’s physical transparency feature.
[1] Rares Vernica, Andrey Balmin, Kevin S. Beyer, Vuk Ercegovac: Adaptive MapReduce using situation-aware mappers. EDBT 2012: 420-431
[2] Andrey Balmin, Vuk Ercegovac, Rares Vernica, Kevin S. Beyer: Adaptive Processing of User-Defined Aggregates in Jaql. IEEE Data Eng. Bull. 34(4): 36-43 (2011)

The papers describe Adaptive Mappers, Adaptive Combiners, Adaptive Sampling, and Adaptive Partitioning.

Adaptive Mappers (AMs) dynamically stitch together multiple splits to be processed by a single mapper, changing the checkpoint interval at run time (discussed in more detail below).

Adaptive Combiners (ACs) perform best-effort hash-based aggregation of map outputs in a fixed-size hash table kept inside the mapper, similar to Hive’s map-side aggregation. Adaptive Combiners are claimed to speed up user-defined aggregates such as finding the top million pages with the most outlinks. Leveraging Jaql’s physical transparency, global thresholds are updated and coordinated between mappers, so records below the threshold’s lower bound are discarded by the mapper itself rather than sent to the reducer. This means quicker sort, shuffle, and merge, along with less data transfer.
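As an illustration of map-side hash aggregation (without the global-threshold coordination described above), here is a small Java sketch of a Hadoop mapper that keeps counts in a bounded hash table and flushes partial aggregates when the table fills; the class name and cache size are made up for the example, not taken from the papers or the product.

import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

// Best-effort in-mapper combining: aggregate into a bounded hash table and
// flush partial sums whenever the table is full. Correctness is preserved
// because the reducer re-aggregates the partial counts it receives.
public class HashCombiningMapper extends Mapper<LongWritable, Text, Text, LongWritable> {

    private static final int MAX_CACHE_SIZE = 100_000;   // illustrative bound
    private final Map<String, Long> cache = new HashMap<>();

    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        for (String token : value.toString().split("\\s+")) {
            if (cache.size() >= MAX_CACHE_SIZE && !cache.containsKey(token)) {
                flush(context);                           // cache full: emit partial aggregates
            }
            cache.merge(token, 1L, Long::sum);
        }
    }

    @Override
    protected void cleanup(Context context) throws IOException, InterruptedException {
        flush(context);                                   // emit whatever remains
    }

    private void flush(Context context) throws IOException, InterruptedException {
        for (Map.Entry<String, Long> e : cache.entrySet()) {
            context.write(new Text(e.getKey()), new LongWritable(e.getValue()));
        }
        cache.clear();
    }
}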

Adaptive Sampling (AS) uses early map outputs to produce a global sample of their keys and decides at run time when to stop sampling. Adaptive Partitioning (AP) dynamically partitions map outputs based on that sample. Mappers coordinate in parallel, and one of them decides the partitioning function based on the sample. As soon as the partitioning function is decided, the mappers can start outputting data, which triggers the start of their partitioners.
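The sketch below shows the partitioning side of this idea in simplified Java: boundary keys are derived from a sorted, non-empty sample and a partitioner binary-searches them. The sampling stop rule and the ZooKeeper coordination from the paper are omitted, and sharing boundaries through a static field only works within one JVM, so treat this purely as an illustration.

import java.util.Arrays;
import java.util.List;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Partitioner;

// Simplified range partitioner: boundary keys come from a sample of early
// map-output keys; in the papers they are shared between mappers through
// ZooKeeper, here they simply live in a static field for the example.
public class SampledRangePartitioner extends Partitioner<Text, LongWritable> {

    private static Text[] boundaries = new Text[0];

    // Derive (numPartitions - 1) boundary keys from a sorted sample.
    public static void computeBoundaries(List<Text> sampledKeys, int numPartitions) {
        Text[] sorted = sampledKeys.toArray(new Text[0]);
        Arrays.sort(sorted);
        Text[] cuts = new Text[numPartitions - 1];
        for (int i = 1; i < numPartitions; i++) {
            cuts[i - 1] = sorted[i * sorted.length / numPartitions];
        }
        boundaries = cuts;
    }

    @Override
    public int getPartition(Text key, LongWritable value, int numPartitions) {
        int pos = Arrays.binarySearch(boundaries, key);
        return pos >= 0 ? pos : -(pos + 1);   // index of the range the key falls into
    }
}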


One of the main components of the Situation-Aware Mapper (SAM) technique is a distributed meta-data store (DMDS), implemented using Apache ZooKeeper. DMDS is an asynchronous communication channel between mappers: it lets each mapper post metadata about its own state and see the state of all other mappers. This makes every mapper aware of the other mappers and of the job state, enabling global coordination and decision making.
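Conceptually, the DMDS usage can be pictured with the plain ZooKeeper API as below; the znode layout and class name are hypothetical, chosen only to show mappers publishing their own state and reading everyone else’s.

import java.nio.charset.StandardCharsets;
import java.util.List;

import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;

// Minimal picture of the DMDS idea: each mapper publishes its state under a
// job-scoped znode and can list/read the state of every other mapper.
public class MapperStateStore {

    private final ZooKeeper zk;
    private final String jobRoot;   // e.g. "/sam/<jobId>" (illustrative layout)

    public MapperStateStore(ZooKeeper zk, String jobRoot) {
        this.zk = zk;
        this.jobRoot = jobRoot;
    }

    // Publish this mapper's state; EPHEMERAL so it disappears if the task dies.
    public void publishState(String mapperId, String state) throws Exception {
        zk.create(jobRoot + "/mappers/" + mapperId,
                  state.getBytes(StandardCharsets.UTF_8),
                  ZooDefs.Ids.OPEN_ACL_UNSAFE,
                  CreateMode.EPHEMERAL);
    }

    // Read the state of all mappers in the job, enabling global decisions.
    public void printAllStates() throws Exception {
        List<String> mappers = zk.getChildren(jobRoot + "/mappers", false);
        for (String m : mappers) {
            byte[] data = zk.getData(jobRoot + "/mappers/" + m, false, null);
            System.out.println(m + " -> " + new String(data, StandardCharsets.UTF_8));
        }
    }
}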


Unlike the usual MapReduce technique, where there is a one-to-one correspondence between map tasks and splits, an Adaptive Mapper (AM) makes a decision after every split to either checkpoint or take another split and “stitch” it to the ones already processed. The split location information is stored in ZooKeeper. Every time an AM finishes processing a split, it decides to stop or to take a new split from DMDS and concatenate it to the existing one, transparently to the map function.

The key steps in an Adaptive Mapper’s local split processing are:
1: Create a locations node and an assigned node for each job
(where locations stores the split metadata, while assigned maps each split to the mapper that took it).
2: Start mappers using virtual splits.
3: Connect to ZooKeeper and retrieve the list of real splits that are local to the current host.
4: The AM picks a random split from the list and locks it.
5: Process the split.
When mappers finish their local splits, they move on to unprocessed remote splits (repeating steps 4 and 5), determined by subtracting the list of assigned splits from the list of splits available at that host. A rough sketch of this loop follows below.
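To make steps 3 through 5 concrete, here is a minimal Java sketch of the split-picking loop, assuming a hypothetical znode layout with one child per real split under locations/<host> and locks taken by creating a znode under assigned; all paths and class names are illustrative, not taken from the papers or the product.

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;

// Rough shape of the adaptive mapper's split-picking loop (steps 3-5):
// list the real splits local to this host, pick one at random, try to lock
// it by creating a znode under /assigned, and process it if the lock succeeds.
public class AdaptiveSplitPicker {

    private final ZooKeeper zk;
    private final String jobRoot;   // illustrative, e.g. "/sam/<jobId>"

    public AdaptiveSplitPicker(ZooKeeper zk, String jobRoot) {
        this.zk = zk;
        this.jobRoot = jobRoot;
    }

    public void run(String host) throws Exception {
        List<String> splits =
                new ArrayList<>(zk.getChildren(jobRoot + "/locations/" + host, false));
        Collections.shuffle(splits);                     // pick splits in random order
        for (String split : splits) {
            if (tryLock(split)) {
                processSplit(split);                     // "stitched" onto work already done
            }
            // decision point: the mapper may also choose to stop and checkpoint here
        }
    }

    // Locking a split = creating its znode under /assigned; if it already
    // exists, another mapper has taken the split.
    private boolean tryLock(String split) throws Exception {
        try {
            zk.create(jobRoot + "/assigned/" + split, new byte[0],
                      ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
            return true;
        } catch (KeeperException.NodeExistsException alreadyTaken) {
            return false;
        }
    }

    private void processSplit(String split) {
        // hand the split's records to the user's map function (omitted)
    }
}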


In a usual MapReduce scenario, having more mappers increases task scheduling and startup overhead, such as running user code to perform job-specific setup. However, having smaller splits tends to reduce the benefit of applying a combiner. AMs decouple the number of splits from the number of mappers and thereby try to achieve load balancing, reduced scheduling and startup overhead, and better combiner benefit. AMs, however, do not use speculative execution and instead rely on a relatively small split size; AM failure resolution simply restarts failed mappers. While the merits may vary by use case, one attraction of this approach is that it has no dependence on prior learning or modeling of job execution. And with the code already built into the Hadoop offering, this approach is definitely a pick among the various adaptive MapReduce approaches.



Adaptive MapReduce using Situation-Aware Mappers

Top image theme: Situational Aware Jaql; courtesy: http://www.freedigitalphotos.net

