
Technical deep dive into Apache Tajo

Over the past few months, a new Hadoop-based warehouse called Apache Tajo has been generating buzz. For a technical deep dive into the topic, we reached out to PMC chair Hyunsik Choi. Apache Tajo has been an Apache top-level project since March 2014 and supports SQL standards. It has a powerful distributed processing architecture that is not based on MapReduce. To better understand Tajo's claims of providing a distributed, fault-tolerant, low-latency, high-throughput SQL engine, we asked Choi a few questions. This Q&A is published as a two-part article series on hadoopsphere.com. Read on to get a better idea of Apache Tajo.

How does Apache Tajo work?

As you may already know, Tajo does not use MapReduce and has its own distributed processing framework, which is flexible and specialized for relational processing. Since its storage manager is pluggable, Tajo can access data sets stored in various storage systems, such as HDFS, Amazon S3, OpenStack Swift, or the local file system. In the near future, we will add RDBMS-based storage support and query optimization improvements to exploit the indexing features of the underlying RDBMSs. The Tajo team has also developed its own query optimizer, which is similar to those of RDBMSs but is specialized for Hadoop-scale distributed environments. The early optimizer was rule-based; since 2013, the current optimizer has employed a combination of cost-based join ordering and rule-based optimization techniques.
Figure 1: Apache Tajo component architecture

Figure 1 shows the overall architecture. Basically, a Tajo cluster instance consists of one TajoMaster and a number of TajoWorkers. TajoMaster coordinates cluster membership and resources, and it also provides a gateway for clients. TajoWorkers actually process the data sets stored in the underlying storage. When a user submits a SQL query, TajoMaster decides whether the query is executed immediately on TajoMaster alone or across a number of TajoWorkers, and forwards the query to the workers accordingly.
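As a rough illustration of this client-side flow, the sketch below submits a query to the TajoMaster through Tajo's JDBC interface. The driver class name, host, port, and table used here are assumptions for illustration only; take the actual values from your Tajo release documentation and cluster configuration.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class TajoJdbcSketch {
    public static void main(String[] args) throws Exception {
        // Assumption: driver class name and client port; check your Tajo
        // release documentation and cluster configuration.
        Class.forName("org.apache.tajo.jdbc.TajoDriver");

        // TajoMaster is the gateway for clients; the master decides whether the
        // submitted query runs on the master alone or across the workers.
        try (Connection conn = DriverManager.getConnection(
                 "jdbc:tajo://tajo-master-host:26002/default");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery(
                 "SELECT l_orderkey, count(*) FROM lineitem GROUP BY l_orderkey")) {
            while (rs.next()) {
                System.out.println(rs.getLong(1) + "\t" + rs.getLong(2));
            }
        }
    }
}
```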
Since Tajo originated in academia, many of its parts are well defined. Figure 2 shows the sequence of query planning steps. A single SQL statement passes through these steps sequentially, and each intermediate form can be optimized at several points.
Figure 2: SQL Query transformation in Apache Tajo
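As a conceptual illustration of this sequential transformation, the sketch below chains the planning steps as plain functions. The stage names are approximations for illustration only, not Tajo's actual classes or phase names.

```java
// Conceptual sketch of the sequential planning steps shown in Figure 2.
public final class QueryPlanningSketch {

    static String parseToAlgebra(String sql)     { return "algebraic expression(" + sql + ")"; }
    static String toLogicalPlan(String algebra)  { return "logical plan(" + algebra + ")"; }
    static String optimize(String logicalPlan)   { return "optimized plan(" + logicalPlan + ")"; }
    static String toDistributedPlan(String plan) { return "distributed DAG(" + plan + ")"; }

    public static void main(String[] args) {
        // A single SQL statement passes through each step in order, and each
        // intermediate form is a candidate for further optimization.
        String sql = "SELECT ...";
        String dag = toDistributedPlan(optimize(toLogicalPlan(parseToAlgebra(sql))));
        System.out.println(dag);
    }
}
```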

As mentioned, Tajo does not use MapReduce and has its own distributed execution framework. Figure 3 shows an example of a distributed execution plan. A distributed execution plan is a directed acyclic graph (DAG). In the figure, each rounded box indicates a processing stage, and each line between rounded boxes indicates a data flow. A data flow can be specified with shuffle methods. Basically, group-by, join, and sort require shuffles. Currently, three types of shuffle methods are supported: hash, range, and scattered hash. Hash shuffle is usually used for group-by and join, and range shuffle is usually used for sort.
Also, a stage (rounded box) can be specified by an internal DAG that represents a relational operator tree. Combining a relational operator tree with various shuffle methods makes it possible to generate flexible distributed plans.
Figure 3: Distributed query execution plan
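To make the shuffle methods above concrete, here is a minimal sketch of how hash and range partitioning assign tuples to downstream partitions. This is a conceptual illustration of the idea, not Tajo's shuffle implementation.

```java
import java.util.Arrays;

// Conceptual sketch of the two most common shuffle methods described above.
public final class ShuffleSketch {

    // Hash shuffle: tuples with the same key (e.g. a join or group-by key)
    // always land in the same downstream partition.
    static int hashPartition(Object key, int numPartitions) {
        return (key.hashCode() & Integer.MAX_VALUE) % numPartitions;
    }

    // Range shuffle: partition boundaries preserve a global order across
    // partitions, which is what a distributed sort needs.
    static int rangePartition(long key, long[] upperBounds) {
        int idx = Arrays.binarySearch(upperBounds, key);
        return idx >= 0 ? idx : -(idx + 1);
    }

    public static void main(String[] args) {
        System.out.println(hashPartition("order-42", 8));                            // group-by / join
        System.out.println(rangePartition(1500L, new long[]{1000L, 2000L, 3000L}));  // sort
    }
}
```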

Can you take the example of a join operation and help us understand Apache Tajo's internals?

In Figure 3, a join operation is included in the distributed plan. As I mentioned in the previous answer, a join operation requires a hash shuffle. As a result, a join operation requires three stages: two stages for the shuffle and one stage for the join. Each shuffle stage hashes tuples by the join keys. The join stage fetches the two data sources that have already been hashed in the two shuffle stages, and then executes a join physical operator (also called a join algorithm) on the two data sources.
Even though only 'join' is shown in Figure 3, Tajo has two inner-join algorithms: in-memory hash join and sort-merge join. If the input data sets fit into main memory, hash join is usually chosen; otherwise, sort-merge join is mostly chosen.
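As a minimal sketch of the choice described above, the snippet below picks between the two inner-join algorithms based on whether the smaller input fits into available memory. The threshold logic is illustrative only; Tajo's planner relies on its own statistics and cost model.

```java
// Conceptual sketch of the choice between the two inner-join algorithms
// mentioned above. The memory check is illustrative, not Tajo's planner logic.
public final class JoinChoiceSketch {

    enum JoinAlgorithm { IN_MEMORY_HASH_JOIN, SORT_MERGE_JOIN }

    // Rough rule of thumb: hash join when the smaller input fits in memory,
    // otherwise fall back to sort-merge join.
    static JoinAlgorithm choose(long smallerInputBytes, long availableMemoryBytes) {
        return smallerInputBytes <= availableMemoryBytes
                ? JoinAlgorithm.IN_MEMORY_HASH_JOIN
                : JoinAlgorithm.SORT_MERGE_JOIN;
    }

    public static void main(String[] args) {
        System.out.println(choose(64L << 20, 256L << 20));  // small input -> hash join
        System.out.println(choose(8L << 30, 256L << 20));   // large input -> sort-merge join
    }
}
```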

Hyunsik Choi is director of research at Gruter Inc., a big data platform startup located in Palo Alto, CA. He is a co-founder of the Apache Tajo project, an open source data warehouse system and one of the Apache top-level projects. Since 2013, he has been a full-time contributor to Tajo. His recent interests are query compilation, cost-based optimization, and large-scale distributed systems. He obtained a PhD from Korea University in 2013.

Comments

  1. Where is the metadata stored?

  2. Hi Mark,

    Tajo has its own metadata store, named CatalogStore, shown in Figure 1. CatalogStore mostly maintains table descriptions, including table schemas, table stats, table partitions, and table physical properties. CatalogStore employs one of several RDBMSs (e.g., Derby, MariaDB, MySQL, Oracle, or PostgreSQL) as its persistent storage.

    Besides, CatalogStore can directly access the Hive metastore. As a result, Tajo can see Hive tables and process them directly.

    Best regards,
    Hyunsik

  3. Hi,
    Can you please elaborate on the benefits and why I should use Tajo instead of the vast competition that also uses some kind of DAG workflow system / newly implemented optimizer instead of MapReduce? (Impala, Hive on Tez, SparkSQL, even Google BigQuery)
    Thanks,
    Daniel.

  4. Looks like it is similar to what Splice Machine does...?


