
Building an open source data warehouse with Apache Tajo

In our continuing series on Apache Tajo, we present the second part of our interview with PMC chair Hyunsik Choi. While the first part covered the technical aspects of Apache Tajo, here we explore how it fits into the big data ecosystem. Tajo is designed for SQL queries on large data sets stored on HDFS and other data sources. Read on to find out how Apache Tajo can help you build an open source data warehouse.

How does Apache Tajo technically compare to Hive, Impala, and Spark, and what is the reason for the performance difference?

From a general point of view, Tajo is different from Hive, Spark, and Impala.
Tajo is a monolithic system for distributed relational processing. In contrast, Hive and Spark SQL are built on general-purpose processing systems (Tez and Spark, respectively) and add a separate layer for their SQL(-like) languages.
Tajo and Hive are implemented in Java, Impala is implemented in C++, and Spark is implemented in Scala.

Identifying exactly which parts give the performance benefits would require a micro-benchmark study, and as far as I know there hasn't been one.
I can guess at some reasons as follows:
Tajo's query optimizer is mature.
Tajo is specialized for distributed relational processing throughout the whole stack, and many parts, including the lower-level implementation, are optimized for that purpose.
Tajo's DAG framework has more optimization opportunities because it can combine various shuffle methods with flexible relational operator trees within stages.
Tajo has more than 40 physical operators. Some are disk-based algorithms that spill data to disk, while others are main-memory algorithms. Tajo's physical planner chooses the best one according to the logical plan and the available resources.
Tajo's hash shuffle is very fast because it performs a per-node hash shuffle, where all tuples associated with the same key on a node are stored in a single file. This approach allows mostly sequential access during spilling and fetching. In contrast, other systems' disk-spilling shuffle methods do a task-level hash shuffle, which causes a large number of small random accesses.
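
To make the shuffle difference concrete, here is a minimal, conceptual sketch of a per-node hash shuffle writer. This is not Tajo's actual code; the class and method names are made up for illustration. The point it shows is that every task on a node appends tuples for the same partition to one shared file, so spills and later fetches stay largely sequential, whereas a task-level shuffle would create one small file per task per partition and force many random reads.

```java
import java.io.BufferedOutputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.util.Arrays;
import java.util.concurrent.ConcurrentHashMap;

/** Conceptual per-node hash shuffle writer (illustrative only, not Tajo's implementation). */
public class NodeLevelHashShuffle {
  // One output stream per hash partition, shared by every task running on this node.
  private final ConcurrentHashMap<Integer, BufferedOutputStream> partitionFiles =
      new ConcurrentHashMap<>();
  private final String spillDir;
  private final int numPartitions;

  public NodeLevelHashShuffle(String spillDir, int numPartitions) {
    this.spillDir = spillDir;
    this.numPartitions = numPartitions;
  }

  /** Route a serialized tuple to the single per-node file for its partition. */
  public void write(byte[] keyBytes, byte[] tupleBytes) throws IOException {
    int partition = Math.floorMod(Arrays.hashCode(keyBytes), numPartitions);
    BufferedOutputStream out = partitionFiles.computeIfAbsent(partition, p -> {
      try {
        // Append mode: all tasks on the node keep writing into the same file.
        return new BufferedOutputStream(new FileOutputStream(spillDir + "/part-" + p, true));
      } catch (IOException e) {
        throw new RuntimeException(e);
      }
    });
    synchronized (out) { // stands in for whatever coordination a real engine uses between tasks
      out.write(tupleBytes);
    }
  }

  public void close() throws IOException {
    for (BufferedOutputStream out : partitionFiles.values()) {
      out.close();
    }
  }
}
```

In a task-level shuffle, the `partitionFiles` map would live inside each task rather than being shared per node, which is exactly what multiplies the number of small files.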

Does Apache Tajo complement or replace Apache Hive?

In 2010, Tajo was designed as an alternative to Apache Hive. At that time, there was no alternative to Hive. We are still driving Tajo as an alternative to Hive, but it is also used as a complementary system. Some users maintain both systems at the same time while they migrate Hive workloads to Tajo. Such a migration is very easy because Tajo is compatible with most Hive features except for SQL syntax; Tajo is an ANSI SQL standard compliant system. Also, some users replace only certain Hive workloads with Apache Tajo for its lower response times, without completely migrating from Hive to Tajo.

Can you take us through key steps in establishing a complete data warehouse with Apache Tajo?

First of all, you need to make a plan for the following:
which data sources you will use (e.g., web logs, text files, JSON, HBase tables, RDBMSs, …)
how long you will archive the data
the most dominant workloads and table schemas

The above factors are similar to what you would consider for RDBMS-based data warehouse systems. However, Tajo supports in-situ processing on various data sources (e.g., HDFS, HBase) and file formats (e.g., Parquet, SequenceFile, RCFile, text, flat JSON, and custom file formats). So with Tajo you can maintain a warehouse through an ETL process, or archive raw data sets directly without ETL. Depending on the workloads and the archiving period, you also need to consider table partitions, file formats, and a compression policy. Recently, many Tajo users have used Parquet to archive data sets; Parquet provides relatively fast query response times, and its storage space efficiency is excellent thanks to columnar compression.
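
As a concrete illustration of this setup, the sketch below uses Tajo SQL over JDBC to expose raw logs in situ and then archive them into a partitioned, Snappy-compressed Parquet table. The table names, columns, HDFS paths, and connection URL (host and port) are assumptions for the example, and exact storage options (such as the text delimiter and compression keys) can vary between Tajo versions.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

/** Sketch: register raw HDFS data with Tajo in situ, then archive it as partitioned Parquet. */
public class ArchiveRawLogs {
  public static void main(String[] args) throws Exception {
    // Register Tajo's JDBC driver explicitly in case it is not auto-loaded.
    Class.forName("org.apache.tajo.jdbc.TajoDriver");

    // Typical Tajo JDBC URL form; adjust host, port, and database to your cluster.
    try (Connection conn =
             DriverManager.getConnection("jdbc:tajo://tajo-master:26002/default");
         Statement stmt = conn.createStatement()) {

      // In-situ access: expose raw delimited logs already sitting on HDFS, with no ETL step.
      stmt.executeUpdate(
          "CREATE EXTERNAL TABLE IF NOT EXISTS raw_weblogs "
              + "(ts TIMESTAMP, user_id TEXT, url TEXT, bytes INT8) "
              + "USING TEXT WITH ('text.delimiter'='|') "
              + "LOCATION 'hdfs:///data/raw/weblogs'");

      // Archive target: a date-partitioned Parquet table with columnar compression.
      stmt.executeUpdate(
          "CREATE TABLE IF NOT EXISTS weblogs_archive "
              + "(ts TIMESTAMP, user_id TEXT, url TEXT, bytes INT8) "
              + "USING PARQUET WITH ('parquet.compression'='SNAPPY') "
              + "PARTITION BY COLUMN (log_date TEXT)");

      // Load the archive; the partition column comes last in the SELECT list.
      stmt.executeUpdate(
          "INSERT OVERWRITE INTO weblogs_archive "
              + "SELECT ts, user_id, url, bytes, TO_CHAR(ts, 'YYYY-MM-DD') AS log_date "
              + "FROM raw_weblogs");
    }
  }
}
```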

From archived data sets, you can build data marts that enable faster access to certain subject data sets. Thanks to its low latency, Tajo can be used as an OLAP engine to generate reports and process interactive ad-hoc queries on the data marts via BI tools and the JDBC driver.
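
Below is a hedged sketch of how an ad-hoc report might run through Tajo's JDBC driver, which is the same path a BI tool takes. The table and columns continue the hypothetical archive example above, and the connection URL is likewise an assumption.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

/** Sketch: an interactive ad-hoc report over archived data via Tajo's JDBC driver. */
public class DailyTrafficReport {
  public static void main(String[] args) throws Exception {
    Class.forName("org.apache.tajo.jdbc.TajoDriver");

    try (Connection conn =
             DriverManager.getConnection("jdbc:tajo://tajo-master:26002/default");
         Statement stmt = conn.createStatement();
         ResultSet rs = stmt.executeQuery(
             "SELECT log_date, COUNT(*) AS hits, SUM(bytes) AS total_bytes "
                 + "FROM weblogs_archive "
                 + "GROUP BY log_date "
                 + "ORDER BY log_date DESC "
                 + "LIMIT 30")) {
      // Print a 30-day traffic summary; a BI tool would consume the same result set.
      while (rs.next()) {
        System.out.printf("%s  hits=%d  bytes=%d%n",
            rs.getString("log_date"), rs.getLong("hits"), rs.getLong("total_bytes"));
      }
    }
  }
}
```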



Hyunsik Choi is a director of research at Gruter Inc., a big data platform startup located in Palo Alto, CA. He is a co-founder of the Apache Tajo project, an open source data warehouse system and one of the Apache top-level projects. Since 2013, he has been a full-time contributor to Tajo. His recent interests are query compilation, cost-based optimization, and large-scale distributed systems. He obtained his PhD from Korea University in 2013.
