
Building an open source data warehouse with Apache Tajo

In our continuing series on Apache Tajo, we present the second part of our interview with PMC chair Hyunsik Choi. While the first part covered the technical aspects of Apache Tajo, here we explore how it fits into the big data ecosystem. Tajo is designed for SQL queries on large data sets stored on HDFS and other data sources. Read on to find out how Apache Tajo can help you build an open source data warehouse.

How does Apache Tajo technically compare to Hive, Impala and Spark, and what is the reason for the performance difference?

From a general view, Tajo is different from Hive, Spark, and Impala.
Tajo is a monolithic system built specifically for distributed relational processing. In contrast, Hive and Spark SQL are based on general-purpose processing systems (Tez and Spark, respectively), with an additional layer for a SQL(-like) language on top.
Tajo and Hive are implemented in Java, Impala is implemented in C++, and Spark is implemented in Scala.

A micro-benchmark would be needed to identify exactly which parts give the performance benefits, and as far as I know, there hasn’t been such a study. But I can guess at some reasons, as follows:
Tajo’s query optimizer is mature.
Tajo is specialized for distributed relational processing throughout the whole stack, and many parts, including the lower-level implementation, are optimized for this purpose.
Tajo’s DAG framework opens up more optimization opportunities by combining various shuffle methods with flexible relational operator trees within stages.
Tajo has more than 40 physical operators. Some are disk-based algorithms that spill data to disk, while others are main-memory algorithms; Tajo’s physical planner chooses the best one according to the logical plan and available resources.
Tajo’s hash shuffle is very fast because it performs a per-node hash shuffle, in which all tuples associated with the same key on a node are stored in a single file. This approach can exploit more sequential access during spilling and fetching, as the sketch below illustrates. In contrast, the disk-spilled shuffle methods in other systems do a task-level hash shuffle, causing a large number of small random accesses.
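To make the difference concrete, below is a minimal, hypothetical sketch of the per-node idea in Java (Tajo’s implementation language). Everything here, from the class name down, is our own illustration rather than Tajo’s actual code: the point is simply that one shared appender per partition per node yields a few large, sequentially written files instead of many small per-task files.

```java
import java.io.BufferedOutputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.util.HashMap;
import java.util.Map;

// Illustrative only: NOT Tajo's actual implementation.
// All tasks on a node funnel their output tuples through one shared
// appender per partition, so each partition becomes a single file per
// node (sequential writes) rather than one small file per task.
public class PerNodeHashShuffle {
  private final Map<Integer, OutputStream> partitionFiles = new HashMap<>();
  private final String shuffleDir;
  private final int numPartitions;

  public PerNodeHashShuffle(String shuffleDir, int numPartitions) {
    this.shuffleDir = shuffleDir;
    this.numPartitions = numPartitions;
  }

  // Called by every task on this node; synchronized so concurrent tasks
  // append safely to the shared per-partition files.
  public synchronized void write(Object key, byte[] serializedTuple) throws IOException {
    int partition = (key.hashCode() & Integer.MAX_VALUE) % numPartitions;
    partitionFiles.computeIfAbsent(partition, this::openAppender).write(serializedTuple);
  }

  private OutputStream openAppender(int partition) {
    try {
      // Append mode: tuples for one partition pile up in one file,
      // which downstream fetchers can later read sequentially.
      return new BufferedOutputStream(
          new FileOutputStream(shuffleDir + "/part-" + partition, true));
    } catch (IOException e) {
      throw new RuntimeException(e);
    }
  }

  public synchronized void close() throws IOException {
    for (OutputStream out : partitionFiles.values()) {
      out.close();
    }
  }
}
```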

Does Apache Tajo complement or substitute Apache Hive?

In 2010, Tajo was designed as an alternative to Apache Hive; at that time, there was no alternative to Hive. We are still driving Tajo as an alternative to Hive, but it is also used as a complementary system. Some users maintain both systems at the same time while they migrate Hive workloads to Tajo. Such a migration is very easy because Tajo is compatible with most Hive features except for SQL syntax; Tajo is an ANSI SQL standard compliant system. Also, some users replace only certain Hive workloads with Apache Tajo, for its lower response times, without completely migrating from Hive to Tajo.

Can you take us through key steps in establishing a complete data warehouse with Apache Tajo?

First of all, you need to make a plan for the following:
which data sources you will use (e.g., web logs, text files, JSON, HBase tables, RDBMSs, …)
how long you will archive the data
the most dominant workloads and table schemas

The above factors are similar to what you would consider for RDBMS-based DW systems. But Tajo supports in-situ processing on various data sources (e.g., HDFS, HBase) and file formats (e.g., Parquet, SequenceFile, RCFile, Text, flat JSON, and custom file formats). So, with Tajo you can maintain a warehouse either through an ETL process or by directly archiving raw data sets without ETL. Depending on your workloads and archiving period, you also need to consider table partitioning, file formats, and a compression policy. Recently, many Tajo users have been using Parquet to archive data sets: Parquet provides relatively fast query response times, and its storage space efficiency is great thanks to columnar compression.
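As a concrete illustration of that flow, here is a hedged sketch using Tajo’s JDBC driver: it registers raw pipe-delimited log files as an external table (in-situ, no ETL) and then archives them into a date-partitioned Parquet table via CREATE TABLE ... AS SELECT. The host name, HDFS paths, table schema, and the TO_CHAR date formatting are illustrative assumptions, and storage option names can vary across Tajo versions.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class BuildWarehouse {
  public static void main(String[] args) throws Exception {
    Class.forName("org.apache.tajo.jdbc.TajoDriver");
    // 26002 is Tajo's default client port; adjust for your cluster.
    try (Connection conn = DriverManager.getConnection(
             "jdbc:tajo://tajo-master:26002/default");
         Statement stmt = conn.createStatement()) {

      // In-situ access: the raw files stay where they are on HDFS.
      stmt.executeUpdate(
          "CREATE EXTERNAL TABLE raw_weblogs (log_time TIMESTAMP, url TEXT, bytes INT8) " +
          "USING TEXT WITH ('text.delimiter'='|') " +
          "LOCATION 'hdfs://namenode:8020/data/raw/weblogs'");

      // Archive into columnar, compressed Parquet, partitioned by date
      // so that time-range queries can prune partitions.
      stmt.executeUpdate(
          "CREATE TABLE weblogs_archive USING PARQUET " +
          "PARTITION BY COLUMN (log_date TEXT) " +
          "AS SELECT log_time, url, bytes, " +
          "TO_CHAR(log_time, 'YYYY-MM-DD') AS log_date FROM raw_weblogs");
    }
  }
}
```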

From the archived data sets, you can build data marts that enable faster access to particular subject areas. Thanks to its low latency, Tajo can also be used as an OLAP engine to build reports and process interactive ad-hoc queries on those data marts via BI tools and its JDBC driver.
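Continuing the hypothetical tables above, any JDBC client (or a BI tool through the same driver) can then run interactive ad-hoc queries against the archive or a mart built from it; a minimal sketch:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class AdHocReport {
  public static void main(String[] args) throws Exception {
    Class.forName("org.apache.tajo.jdbc.TajoDriver");
    try (Connection conn = DriverManager.getConnection(
             "jdbc:tajo://tajo-master:26002/default");
         Statement stmt = conn.createStatement();
         // Top-10 URLs by hits since the start of 2015.
         ResultSet rs = stmt.executeQuery(
             "SELECT url, COUNT(*) AS hits, SUM(bytes) AS total_bytes " +
             "FROM weblogs_archive WHERE log_date >= '2015-01-01' " +
             "GROUP BY url ORDER BY hits DESC LIMIT 10")) {
      while (rs.next()) {
        System.out.printf("%s\t%d\t%d%n",
            rs.getString("url"), rs.getLong("hits"), rs.getLong("total_bytes"));
      }
    }
  }
}
```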



Hyunsik Choi is a director of research at Gruter Inc., a big data platform startup located in Palo Alto, CA. He is a co-founder of the Apache Tajo project, an open source data warehouse system and one of the Apache top-level projects, and has been a full-time contributor to Tajo since 2013. His recent interests include query compilation, cost-based optimization, and large-scale distributed systems. He obtained his PhD from Korea University in 2013.
