
Technical deep dive into Apache Tajo

Over the past few months, a new Hadoop-based warehouse called Apache Tajo has been generating buzz. We decided to do a technical deep dive into the topic and reached out to PMC chair Hyunsik Choi. Apache Tajo has been an Apache top-level project since March 2014 and supports SQL standards. It has a powerful distributed processing architecture that is not based on MapReduce. To get a better sense of Tajo's claims of providing a distributed, fault-tolerant, low-latency, and high-throughput SQL engine, we asked Choi a few questions. This Q&A is published as a two-part article series. Read below to get a better idea of Apache Tajo.

How does Apache Tajo work?

As you may already know, Tajo does not use MapReduce; it has its own distributed processing framework, which is flexible and specialized for relational processing. Since its storage manager is pluggable, Tajo can access data sets stored in various storage systems, such as HDFS, Amazon S3, OpenStack Swift, or the local file system. In the near future, we will add RDBMS-based storage and query optimization improvements to exploit the indexing features of the underlying RDBMSs. The Tajo team has also developed its own query optimizer, which is similar to those of RDBMSs but is specialized for Hadoop-scale distributed environments. The early optimizer was a rule-based one; since 2013, the optimizer has employed a combination of cost-based join ordering and rule-based optimization techniques.
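The idea behind a pluggable storage manager can be sketched as a small interface that the engine codes against, so backends can be swapped without touching query processing. This is a minimal illustration only; the class and method names here are hypothetical, not Tajo's actual API.

```python
from abc import ABC, abstractmethod

class StorageManager(ABC):
    """Hypothetical pluggable storage abstraction (illustrative,
    not Tajo's real interface)."""

    @abstractmethod
    def scan(self, table: str):
        """Yield the rows of a table from the underlying storage."""

class InMemoryStorage(StorageManager):
    """A toy backend standing in for HDFS, S3, Swift, or a local FS."""

    def __init__(self, tables):
        self.tables = tables  # dict: table name -> list of rows

    def scan(self, table):
        yield from self.tables[table]

# The engine only talks to the StorageManager interface, so swapping
# in a different backend does not change the query-processing code.
storage = InMemoryStorage({"t1": [(1, "a"), (2, "b")]})
rows = list(storage.scan("t1"))
```

An HDFS- or S3-backed implementation would provide the same `scan` contract over remote blocks instead of an in-memory dict.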
Figure 1: Apache Tajo component architecture

Figure 1 shows the overall architecture. Basically, a Tajo cluster instance consists of one TajoMaster and a number of TajoWorkers. The TajoMaster coordinates cluster membership and resources, and it also provides a gateway for clients. The TajoWorkers actually process the data sets stored in storage. When a user submits a SQL query, the TajoMaster decides whether the query can be executed immediately on the TajoMaster alone or must be executed across a number of TajoWorkers. Depending on that decision, the TajoMaster either executes the query itself or forwards it to the workers.
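The master's routing decision can be illustrated with a deliberately naive rule. This is an assumption for the sake of the sketch, not Tajo's actual planner logic: a statement that references no table can be answered by the master alone, while anything that reads stored data fans out to workers.

```python
def route_query(sql: str) -> str:
    """Toy routing rule (hypothetical, not Tajo's real logic):
    no FROM clause -> the master can answer it directly;
    otherwise -> distribute the query across the workers."""
    return "workers" if "FROM" in sql.upper().split() else "master"

print(route_query("SELECT 1 + 1"))              # runs on the master alone
print(route_query("SELECT count(*) FROM t1"))   # distributed to workers
```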
Since Tajo originated in academia, many of its parts are well defined. Figure 2 shows the sequence of query planning steps. A single SQL statement passes through these steps sequentially, and each intermediate form can be optimized in several ways.
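The sequential planning steps can be sketched as a pipeline of transformations, each consuming the previous intermediate form. The function names and dictionary shapes below are hypothetical stand-ins for the phases in Figure 2; real Tajo classes and signatures differ.

```python
# Hypothetical stand-ins for the planning phases; each step wraps the
# previous intermediate form, mirroring the sequential pipeline.
def parse(sql):
    return {"form": "ast", "child": sql}

def to_logical_plan(ast):
    return {"form": "logical plan", "child": ast}

def optimize(plan):
    return {"form": "optimized logical plan", "child": plan}

def to_distributed_plan(plan):
    return {"form": "distributed execution plan", "child": plan}

def plan_query(sql):
    # SQL text -> AST -> logical plan -> optimized plan -> distributed plan
    return to_distributed_plan(optimize(to_logical_plan(parse(sql))))

final = plan_query("SELECT id FROM t1")
```

Because each intermediate form is a separate value, an optimizer pass can rewrite any stage of the pipeline without affecting the others.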
Figure 2: SQL Query transformation in Apache Tajo

As mentioned, Tajo does not use MapReduce and has its own distributed execution framework. Figure 3 shows an example of a distributed execution plan. A distributed execution plan is a directed acyclic graph (DAG). In the figure, each rounded box indicates a processing stage, and each line between the rounded boxes indicates a data flow. A data flow can be specified with a shuffle method. Basically, group-by, join, and sort require shuffles. Currently, three types of shuffle methods are supported: hash, range, and scattered hash. Hash shuffle is usually used for group-by and join, and range shuffle is usually used for sort.
Also, a stage (rounded box) can itself be specified by an internal DAG which represents a relational operator tree. Combining relational operator trees with the various shuffle methods enables generating flexible distributed plans.
Figure 3: Distributed query execution plan
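The hash and range shuffle methods described above can be sketched as two partitioning functions. This is a minimal single-process illustration of the routing logic only (real shuffles move partitions between workers over the network); the function names are ours, not Tajo's.

```python
import bisect

def hash_shuffle(tuples, key, n_partitions):
    """Hash shuffle: route each tuple by hashing its key, so tuples with
    equal keys always land in the same partition (used for group-by/join)."""
    parts = [[] for _ in range(n_partitions)]
    for t in tuples:
        parts[hash(t[key]) % n_partitions].append(t)
    return parts

def range_shuffle(tuples, key, boundaries):
    """Range shuffle: route tuples by key range, so sorting each partition
    locally yields a globally sorted result (used for sort)."""
    parts = [[] for _ in range(len(boundaries) + 1)]
    for t in tuples:
        parts[bisect.bisect_left(boundaries, t[key])].append(t)
    return parts

rows = [{"k": 9}, {"k": 1}, {"k": 5}]
# Keys < 4 go to partition 0, 4..7 to partition 1, >= 8 to partition 2.
ranged = range_shuffle(rows, "k", boundaries=[4, 8])
hashed = hash_shuffle(rows, "k", n_partitions=2)
```

The difference matters downstream: after a range shuffle, concatenating the locally sorted partitions in order gives a total sort, which a hash shuffle cannot guarantee.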

Can you take an example of a join operation and help us understand Apache Tajo's internals?

In Figure 3, a join operation is included in the distributed plan. As I mentioned in question #1, a join operation requires a hash shuffle. As a result, a join operation requires three stages: two stages for the shuffle and one stage for the join. Each shuffle stage hashes tuples on the join keys. The join stage fetches the two data sources, already hashed in the two shuffle stages, and then executes a join physical operator (also called a join algorithm) on them.
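The three-stage structure can be sketched as follows: two shuffle stages partition each input on the join key, and the join stage pairs up co-located partitions. This is a single-process sketch of the data movement only (the per-partition join is a naive nested loop here; the actual join algorithms are discussed next); names are illustrative.

```python
def shuffle_stage(rows, key, n):
    """A shuffle stage: hash every tuple on the join key into n
    partitions, one per downstream join task."""
    parts = [[] for _ in range(n)]
    for r in rows:
        parts[hash(r[key]) % n].append(r)
    return parts

def join_stage(left_parts, right_parts, lkey, rkey):
    """The join stage: task i fetches partition i from both shuffled
    inputs; matching keys are guaranteed to be co-located, so each
    task can join its pair of partitions locally."""
    out = []
    for lp, rp in zip(left_parts, right_parts):
        for l in lp:
            for r in rp:
                if l[lkey] == r[rkey]:
                    out.append({**l, **r})
    return out

orders = [{"cust": 1, "amt": 10}, {"cust": 2, "amt": 20}]
custs = [{"cust": 1, "name": "a"}, {"cust": 2, "name": "b"}]
result = join_stage(shuffle_stage(orders, "cust", 3),
                    shuffle_stage(custs, "cust", 3), "cust", "cust")
```

Because both inputs are hashed with the same function, every join task sees all and only the tuples whose keys it is responsible for.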
Even though only 'join' is shown in Figure 3, Tajo has two inner join algorithms: in-memory hash join and sort-merge join. If the input data sets fit into main memory, hash join is usually chosen. Otherwise, sort-merge join is mostly chosen.
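The two inner join algorithms can be sketched as follows. These are textbook formulations for illustration, not Tajo's implementations; in particular, a real sort-merge join sorts externally on disk, which is what lets it handle inputs larger than memory.

```python
def hash_join(build, probe, key):
    """In-memory hash join: build a hash table on the (smaller) build
    side, then stream the probe side against it."""
    table = {}
    for b in build:
        table.setdefault(b[key], []).append(b)
    return [{**p, **b} for p in probe for b in table.get(p[key], [])]

def sort_merge_join(left, right, key):
    """Sort-merge join: sort both inputs on the key, then advance two
    cursors in lockstep, emitting matches (external sort omitted)."""
    left = sorted(left, key=lambda r: r[key])
    right = sorted(right, key=lambda r: r[key])
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i][key] < right[j][key]:
            i += 1
        elif left[i][key] > right[j][key]:
            j += 1
        else:
            # join left[i] with the whole run of equal keys on the right
            k = j
            while k < len(right) and right[k][key] == left[i][key]:
                out.append({**left[i], **right[k]})
                k += 1
            i += 1
    return out

l = [{"k": 1, "a": "x"}, {"k": 2, "a": "y"}]
r = [{"k": 2, "b": "z"}]
```

The trade-off matches the quote above: the hash table must fit in memory, while the merge phase only ever streams both inputs in key order.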

Hyunsik Choi is director of research at Gruter Inc., a big data platform startup located in Palo Alto, CA. He is a co-founder of the Apache Tajo project, an open source data warehouse system and one of the Apache top-level projects. Since 2013, he has been a full-time contributor to Tajo. His recent interests are query compilation, cost-based optimization, and large-scale distributed systems. He obtained a PhD from Korea University in 2013.


  1. Where is the metadata stored?

  2. Hi Mark,

    Tajo has its own metadata store, named CatalogStore, shown in Fig 1. CatalogStore mostly maintains table descriptions, including table schemas, table stats, table partitions, and table physical properties. CatalogStore employs one of several RDBMSs (e.g., Derby, MariaDB, MySQL, Oracle, or PostgreSQL) as its persistent storage.

    Besides, CatalogStore can directly access the Hive metastore. As a result, Tajo can see Hive tables and process them directly.

    Best regards,

  3. Hi,
    Can you please elaborate on the benefits, and why I should use Tajo instead of the vast competition that also uses some kind of DAG workflow system / newly implemented optimizer instead of MapReduce (Impala, Hive on Tez, Spark SQL, even Google BigQuery)?

  4. Looks like it is similar to what Splice Machine does...?

