Continuing the Q&A with Syncsort CEO Lonne Jaffe, we explore ETL use cases in the Hadoop ecosystem. Lonne explains the key characteristics that distinguish ETL solutions for Hadoop and why their low implementation cost makes them such a compelling use case.
What makes a great ETL solution for Hadoop? Can you tell us the important characteristics?
Some legacy ETL products are too heavyweight to work well with Hadoop – they don't run natively, they sit on edge nodes, or they generate a lot of inefficient code that needs to be maintained.
Our enterprise-grade Hadoop-based transformation engine sits on each node in a cluster to deliver accelerated performance and avoids generating code. We made an open source contribution that enabled our engine to run natively in the Hadoop environment, which was committed as MAPREDUCE-2454 in early 2013. We’re now delivering, to some of the largest and most sophisticated users of Hadoop in the world, a product that can handle complex data models that include disparate structured and unstructured data from varied data sources, including the mainframe.
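As a rough illustration of what running natively can mean in practice: Hadoop 2.x exposes a pluggable map-output collector, one of the configuration hooks associated with the pluggable-sort work referenced above, which a job can name in its configuration. The sketch below shows only that configuration step, assuming a Hadoop 2.x client on the classpath; the collector class name is a hypothetical placeholder, not an actual Syncsort class.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

public class PluggableSortConfigSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();

        // Hadoop 2.x lets a job name an alternative map-output collector; a
        // native sort/ETL engine can hook in here instead of generating
        // MapReduce code. The class below is a hypothetical placeholder.
        conf.set("mapreduce.job.map.output.collector.class",
                 "com.example.NativeSortCollector");

        Job job = Job.getInstance(conf, "etl-with-pluggable-sort");
        System.out.println("Collector plugin: "
                + job.getConfiguration().get("mapreduce.job.map.output.collector.class"));
        // Mapper, reducer, and input/output paths would be set as for any
        // ordinary MapReduce job before submitting.
    }
}
```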
We’re also focusing our organic investments on making it as easy as possible to move legacy workloads and data into Hadoop. For example, we created a SQL analyzer that scans and creates maps of existing legacy SQL and assists in efficiently recreating those SQL-based legacy workloads in Hadoop. We also built a product that analyzes SMF records on the mainframe to identify the mainframe workloads that are best-suited to moving to Hadoop to save money, improve performance, and make the data accessible to next-generation analytics.
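To make the SQL-analysis idea concrete, here is a deliberately tiny sketch, not Syncsort's product, of the first step such a tool might perform: scanning legacy statements and mapping which tables each one reads and writes, so the equivalent workload can be planned in Hadoop. The statements, table names, and regular expressions are illustrative assumptions only; a real analyzer would use a proper SQL parser and handle dialects and lineage.

```java
import java.util.*;
import java.util.regex.*;

public class LegacySqlScan {
    // Crude read/write table detection for illustration only.
    private static final Pattern READS  =
            Pattern.compile("(?i)\\bfrom\\s+(\\w+)|\\bjoin\\s+(\\w+)");
    private static final Pattern WRITES =
            Pattern.compile("(?i)\\binsert\\s+into\\s+(\\w+)|\\bupdate\\s+(\\w+)");

    public static void main(String[] args) {
        List<String> statements = List.of(
            "INSERT INTO sales_summary SELECT region, SUM(amount) FROM sales GROUP BY region",
            "UPDATE customers SET tier = 'gold' WHERE lifetime_value > 100000");

        for (String sql : statements) {
            System.out.println(sql);
            System.out.println("  reads:  " + matches(READS, sql));
            System.out.println("  writes: " + matches(WRITES, sql));
        }
    }

    private static Set<String> matches(Pattern p, String sql) {
        Set<String> tables = new LinkedHashSet<>();
        Matcher m = p.matcher(sql);
        while (m.find()) {
            for (int g = 1; g <= m.groupCount(); g++) {
                if (m.group(g) != null) tables.add(m.group(g).toLowerCase());
            }
        }
        return tables;
    }
}
```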
Expect continued improvements to these existing capabilities, and more offerings like them, from us.
How critical is pricing in your segment? Do customers drive a hard bargain on the price point?
In 2013, many companies had no budget for Hadoop. Customers were just testing, and no real dollars were committed to building out production Hadoop clusters. That has changed substantially in 2014. Organizations can now justify investment in Hadoop much more easily because they immediately realize cost reductions in legacy data warehouses, legacy ETL tools, and mainframes, saving far more money than it costs to create their Hadoop environment. Groups within large enterprises that are running offload projects are gaining power and budget because they are freeing up so much annual spend.

Offloading legacy workloads and data into Hadoop doesn't just save money; it also brings a new class of analytical compute power to the data. These organizations can quickly demonstrate the competitive benefits of the advanced analytics that Hadoop makes possible, giving them insights that can help grow the business and contribute to the top line. Anything that can generate top-line revenue growth and lower costs at the same time is very valuable to enterprises today.
Among the various use cases of your products in the Hadoop ecosystem, can you tell us about the most fascinating one?
What's most fascinating about our customers' use cases is how they have changed the economics of managing data. One customer, who measured the cost of managing a terabyte of data in their enterprise data warehouse at $100,000, was able to offload and manage data in Hadoop at a cost of only $1,000 per terabyte. ETL-like workloads can represent as much as 40-60% of the capacity of legacy enterprise data warehouse systems. In another case, by offloading mainframe data and processing into Hadoop, a major bank was able to save money and make new analytical capabilities available. Customers can also offload workloads from legacy ETL products into Hadoop.
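For a sense of the arithmetic behind those figures, here is a back-of-the-envelope sketch. The per-terabyte costs and the 40-60% capacity range come from the interview; the warehouse size and the exact ETL share are hypothetical assumptions for illustration.

```java
public class OffloadEconomics {
    public static void main(String[] args) {
        double edwCostPerTb    = 100_000;  // cost to manage 1 TB in the legacy EDW (from the interview)
        double hadoopCostPerTb = 1_000;    // cost to manage 1 TB in Hadoop (from the interview)
        double warehouseTb     = 200;      // hypothetical warehouse size
        double etlShare        = 0.50;     // assumed ETL-like share, within the 40-60% range cited

        double offloadedTb = warehouseTb * etlShare;
        double before = offloadedTb * edwCostPerTb;
        double after  = offloadedTb * hadoopCostPerTb;

        System.out.printf("Offloading %.0f TB: $%,.0f -> $%,.0f (saves $%,.0f)%n",
                offloadedTb, before, after, before - after);
    }
}
```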
We’re focusing much of our R&D and acquisition bandwidth going forward on building unique products that make this offload-to-Hadoop process as seamless as possible.