In our continuing series on Apache Tajo, we present the second part of our interview with PMC chair Hyunsik Choi. Having covered the technical aspects of Apache Tajo in the first part of the interview, we now explore how it fits into the big data ecosystem. Tajo is designed for SQL queries on large data sets stored on HDFS and other data sources. Read on to find out how Apache Tajo can help you build an open source data warehouse.
How does Apache Tajo technically compare to Hive, Impala, and Spark, and what explains the performance differences?
In general, Tajo differs from Hive, Spark, and Impala in several ways.
• Tajo is a monolithic system for distributed relational processing. In contrast, Hive and Spark SQL are built on general-purpose processing systems (i.e., Tez and Spark, respectively) and add an extra layer for a SQL(-like) language.
• Tajo and Hive are implemented in Java, Impala is implemented in C++, and Spark is implemented in Scala.
A micro-benchmark would be needed to identify exactly which parts give the performance benefits, but as far as I know, no such study has been done.
I can guess at some of the reasons:
• Tajo’s query optimizer is mature.
• Tajo is specialized for distributed relational processing throughout the whole stack, and many parts, including the lower-level implementation, are optimized for this purpose.
• Tajo’s DAG framework has more optimization opportunities because it can combine various shuffle methods with a flexible relational operator tree within each stage.
• Tajo has more than 40 physical operators. Some are disk-based algorithms that spill data to disk, while others are main-memory algorithms. Tajo’s physical planner chooses the best one according to the logical plan and the available resources.
• Tajo’s hash shuffle is very fast because it performs a per-node hash shuffle, where all tuples associated with the same key are stored in a single file. This approach exploits more sequential access during spilling and fetching. In contrast, the disk-spilling shuffle methods of other systems perform a task-level hash shuffle, which causes a large number of small random accesses (see the sketch after this list).
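To illustrate the per-node idea, here is a minimal, hypothetical sketch in Java (not Tajo’s actual shuffle code): every task on a node appends tuples for a given hash bucket to a single shared per-node file, so spilling and later fetching stay largely sequential. All class names and paths are made up for illustration.

```java
import java.io.BufferedOutputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

// Simplified illustration of a per-node hash shuffle: all tasks on a node
// funnel tuples for the same hash bucket into ONE shared bucket file.
public class NodeHashShuffleSketch {
    private final OutputStream[] bucketFiles;  // one file per bucket, shared node-wide
    private final int numBuckets;

    public NodeHashShuffleSketch(Path shuffleDir, int numBuckets) throws IOException {
        this.numBuckets = numBuckets;
        this.bucketFiles = new OutputStream[numBuckets];
        Files.createDirectories(shuffleDir);
        for (int b = 0; b < numBuckets; b++) {
            // append mode: many tasks on this node write into the same bucket file
            bucketFiles[b] = new BufferedOutputStream(
                new FileOutputStream(shuffleDir.resolve("bucket-" + b).toFile(), true));
        }
    }

    // Tuples with the same key always land in the same single file, unlike a
    // task-level shuffle, which would produce (numTasks x numBuckets) small
    // files and many random reads on the fetch side.
    public synchronized void write(String key, String tuple) throws IOException {
        int bucket = Math.floorMod(key.hashCode(), numBuckets);
        bucketFiles[bucket].write((tuple + "\n").getBytes(StandardCharsets.UTF_8));
    }

    public synchronized void close() throws IOException {
        for (OutputStream out : bucketFiles) {
            out.close();
        }
    }

    public static void main(String[] args) throws IOException {
        NodeHashShuffleSketch shuffle = new NodeHashShuffleSketch(Paths.get("/tmp/shuffle-node-1"), 4);
        shuffle.write("user_42", "user_42|click|2015-06-01");
        shuffle.write("user_42", "user_42|view|2015-06-01");
        shuffle.close();
    }
}
```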
Does Apache Tajo complement or replace Apache Hive?
In 2010, Tajo was designed as an alternative to Apache Hive. At that time, there was no alternative to Hive. We still position Tajo as an alternative to Hive. However, Tajo is also used as a complementary system: some users maintain both systems at the same time while they migrate Hive workloads to Tajo. Such a migration is very easy because Tajo is compatible with most Hive features except for the SQL syntax; Tajo is an ANSI SQL standard compliant system. Also, some users replace only certain Hive workloads with Apache Tajo for its lower response times, without completely migrating from Hive to Tajo.
Can you take us through the key steps in establishing a complete data warehouse with Apache Tajo?
First of all, you need to make a plan for the following:
• which data sources you will ingest (e.g., web logs, text files, JSON, HBase tables, RDBMSs, …)
• how long you will archive the data
• the most dominant workloads and the table schemas
These considerations are similar to what you would weigh for an RDBMS-based data warehouse. However, Tajo supports in-situ processing on various data sources (e.g., HDFS, HBase) and file formats (e.g., Parquet, SequenceFile, RCFile, text, flat JSON, and custom file formats). So, with Tajo you can maintain a warehouse either through an ETL process or by archiving raw data sets directly without ETL. Also, depending on your workloads and the archiving period, you need to consider table partitioning, file formats, and a compression policy. Recently, many Tajo users have adopted Parquet to archive data sets; Parquet provides relatively fast query response times, and its storage space efficiency is excellent thanks to columnar compression.
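As a rough illustration of the archiving step, the hedged sketch below registers a directory of raw logs as an external, partitioned Parquet table through Tajo’s JDBC driver. The driver class name, host, port, table name, HDFS path, and the exact DDL clauses are assumptions for illustration only; verify them against the Tajo documentation for your version.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

// Hedged sketch: register raw data on HDFS as an external Parquet table in Tajo.
// Connection details, table name, path, and DDL clauses are illustrative assumptions.
public class CreateWarehouseTable {
    public static void main(String[] args) throws Exception {
        Class.forName("org.apache.tajo.jdbc.TajoDriver");                    // assumed driver class name
        String url = "jdbc:tajo://tajo-master.example.com:26002/default";    // assumed JDBC URL and port

        try (Connection conn = DriverManager.getConnection(url);
             Statement stmt = conn.createStatement()) {
            // Archive raw web logs as a date-partitioned external Parquet table (illustrative DDL).
            stmt.executeUpdate(
                "CREATE EXTERNAL TABLE web_logs (user_id TEXT, url TEXT, ts TIMESTAMP) " +
                "USING PARQUET " +
                "PARTITION BY COLUMN (log_date TEXT) " +
                "LOCATION 'hdfs://namenode:8020/warehouse/web_logs'");
        }
    }
}
```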
From the archived data sets, you can build data marts that enable faster access to particular subject areas. Thanks to its low latency, Tajo can be used as an OLAP engine to generate reports and process interactive ad-hoc queries on the data marts via BI tools and its JDBC driver.
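For example, a BI tool or a small client program could run an interactive ad-hoc aggregate on such a data mart through the standard JDBC API. Again, the connection URL, driver class, and the table and column names below are hypothetical placeholders, not a definitive setup.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

// Hedged sketch of an ad-hoc OLAP-style query against a Tajo data mart via JDBC.
// The URL, table, and column names are placeholders for illustration only.
public class AdHocQuery {
    public static void main(String[] args) throws Exception {
        Class.forName("org.apache.tajo.jdbc.TajoDriver");                    // assumed driver class name
        String url = "jdbc:tajo://tajo-master.example.com:26002/default";    // assumed JDBC URL and port

        try (Connection conn = DriverManager.getConnection(url);
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery(
                 "SELECT log_date, count(*) AS page_views " +
                 "FROM web_logs " +
                 "GROUP BY log_date " +
                 "ORDER BY log_date")) {
            // Stream the aggregated result, as a BI tool or report generator would.
            while (rs.next()) {
                System.out.println(rs.getString("log_date") + "\t" + rs.getLong("page_views"));
            }
        }
    }
}
```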
Hyunsik Choi is Director of Research at Gruter Inc., a big data platform startup located in Palo Alto, CA. He is a co-founder of the Apache Tajo project, an open source data warehouse system and one of the Apache top-level projects. Since 2013, he has been a full-time contributor to Tajo. His recent interests include query compilation, cost-based optimization, and large-scale distributed systems. He received his PhD from Korea University in 2013.