
Decoding Hadoop ETL

Continuing the Q&A with Syncsort CEO Lonne Jaffe, we explore ETL use cases in the Hadoop ecosystem. Lonne explains some of the key distinguishing characteristics of ETL solutions for Hadoop and how inexpensive implementations make them a compelling use case.

 

 

What makes a great ETL solution for Hadoop? Can you tell us about its most important characteristics?

Some legacy ETL products are too heavyweight to work well with Hadoop – they don’t run natively, they sit on edge nodes or they generate a lot of inefficient code that needs to be maintained. 

Our enterprise-grade Hadoop-based transformation engine sits on each node in a cluster to deliver accelerated performance and avoids generating code. We made an open source contribution that enabled our engine to run natively in the Hadoop environment, which was committed as MAPREDUCE-2454 in early 2013. We’re now delivering, to some of the largest and most sophisticated users of Hadoop in the world, a product that can handle complex data models that include disparate structured and unstructured data from varied data sources, including the mainframe. 
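
For readers who want a concrete picture, here is a minimal sketch of what plugging an alternative sort engine into a MapReduce job can look like, using the Hadoop 2.x pluggable shuffle/sort extension point that the MAPREDUCE-2454 line of work relates to. The property name comes from Hadoop's pluggable sort documentation; the `com.example.NativeOutputCollector` class is a hypothetical placeholder, not Syncsort's actual engine.

```java
// Minimal sketch (not vendor product code): configuring the Hadoop 2.x
// pluggable map-output collector, the extension point that native sort
// engines can hook into. "com.example.NativeOutputCollector" is a
// hypothetical placeholder class used only for illustration.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class PluggableSortExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();

        // Swap in an alternative map-output collector (the pluggable-sort
        // hook). A real deployment would point this at the vendor's
        // collector implementation instead of the placeholder below.
        conf.set("mapreduce.job.map.output.collector.class",
                 "com.example.NativeOutputCollector");

        Job job = Job.getInstance(conf, "pluggable-sort-demo");
        job.setJarByClass(PluggableSortExample.class);
        job.setMapperClass(Mapper.class);    // identity mapper
        job.setReducerClass(Reducer.class);  // identity reducer
        job.setOutputKeyClass(LongWritable.class);
        job.setOutputValueClass(Text.class);

        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```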

We’re also focusing our organic investments on making it as easy as possible to move legacy workloads and data into Hadoop. For example, we created a SQL analyzer that scans and creates maps of existing legacy SQL and assists in efficiently recreating those SQL-based legacy workloads in Hadoop. We also built a product that analyzes SMF records on the mainframe to identify the mainframe workloads that are best-suited to moving to Hadoop to save money, improve performance, and make the data accessible to next-generation analytics.
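
As a purely illustrative sketch of the idea behind such a SQL analyzer (not Syncsort's actual product logic), the snippet below scans legacy SQL statements and builds a simple map of which tables each statement touches. The class name, regex, and sample statements are assumptions for demonstration; a real analyzer would use a proper SQL parser and handle dialects, subqueries, and quoting.

```java
// Purely illustrative: map each legacy SQL statement to the tables it
// references, the raw material for planning which workloads to recreate
// in Hadoop. The regex-based extraction is a deliberate simplification.
import java.util.*;
import java.util.regex.*;

public class SqlDependencyMapper {
    private static final Pattern TABLE_REF =
        Pattern.compile("\\b(?:FROM|JOIN|INTO|UPDATE)\\s+([A-Za-z_][\\w.]*)",
                        Pattern.CASE_INSENSITIVE);

    /** Returns statement -> set of table names referenced by that statement. */
    public static Map<String, Set<String>> mapDependencies(List<String> statements) {
        Map<String, Set<String>> deps = new LinkedHashMap<>();
        for (String sql : statements) {
            Set<String> tables = new TreeSet<>();
            Matcher m = TABLE_REF.matcher(sql);
            while (m.find()) {
                tables.add(m.group(1).toLowerCase());
            }
            deps.put(sql, tables);
        }
        return deps;
    }

    public static void main(String[] args) {
        // Hypothetical legacy statements, used only to show the output shape.
        List<String> legacySql = Arrays.asList(
            "INSERT INTO sales_agg SELECT region, SUM(amt) FROM sales s JOIN region r ON s.rid = r.id GROUP BY region",
            "SELECT * FROM sales_agg WHERE region = 'EMEA'");
        mapDependencies(legacySql).forEach((stmt, tables) ->
            System.out.println(tables + "  <-  " + stmt));
    }
}
```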

Expect continued improvements in these existing capabilities and more offerings like this from us. 

 

How critical is pricing in your segment? Do customers push back hard on the price point?

In 2013, many companies had no budget for Hadoop. Customers were just testing – no real dollars were committed to building out production Hadoop clusters. That has changed substantially in 2014. Organizations can now much more easily justify investment in Hadoop because they immediately realize cost reductions in legacy data warehouses, legacy ETL tools, and mainframes, saving much more money than it costs to create their Hadoop environment. Groups within large enterprises that are running offload projects are aggregating power and budget since they're freeing up so much annual spend. Offloading legacy workloads and data into Hadoop doesn't just save money; it also brings a new class of analytical compute power to the data. These organizations can quickly demonstrate the competitive benefits of the advanced analytics that Hadoop makes possible, giving them insights that can help grow the business – contributing to the top line. Anything that can generate top-line revenue growth and lower costs simultaneously is very valuable to enterprises today.

 

Among the various use cases of your products in the Hadoop ecosystem, can you tell us about the most fascinating one?

What's most fascinating about our customers' use cases is how they have changed the economics of managing data. One customer who measured the cost of managing a terabyte of data in their enterprise data warehouse at $100,000 was able to offload and manage that data in Hadoop at a cost of only $1,000 per terabyte. ETL-like workloads can represent as much as 40-60% of the capacity of legacy enterprise data warehouse systems. In another case, by offloading mainframe data and processing into Hadoop, a major bank was able to save money and make new analytical capabilities available. Customers can also offload workloads from legacy ETL products into Hadoop.
We’re focusing much of our R&D and acquisition bandwidth going forward on building unique products that make this offload-to-Hadoop process as seamless as possible.
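
To make the economics concrete, here is a back-of-the-envelope calculation using the per-terabyte figures cited above; the 100 TB warehouse size and the 50% ETL-like share are assumptions chosen only to illustrate the arithmetic.

```java
// Back-of-the-envelope sketch of the offload economics described above.
// The per-terabyte costs come from the interview ($100,000 in the EDW vs.
// $1,000 in Hadoop); the warehouse size and ETL-like share are assumed.
public class OffloadSavings {
    public static void main(String[] args) {
        double edwCostPerTb = 100_000.0;   // cited legacy EDW cost per TB
        double hadoopCostPerTb = 1_000.0;  // cited Hadoop cost per TB
        double warehouseTb = 100.0;        // assumed warehouse size
        double etlShare = 0.5;             // assumed ETL-like share (40-60% cited)

        double offloadedTb = warehouseTb * etlShare;
        double savings = offloadedTb * (edwCostPerTb - hadoopCostPerTb);

        // 50 TB * ($100,000 - $1,000) = $4,950,000
        System.out.printf("Offloading %.0f TB saves about $%,.0f%n",
                          offloadedTb, savings);
    }
}
```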


