
Building a billion dollar enterprise with Hadoop



Among the major technical trends of 2013, SQL-on-Hadoop was the most prominent, catching the market's attention and imagination like never before. We saw major announcements in this space throughout the year. Riding on the hype that Hadapt and Cloudera Impala built in earlier years, more players brought in technical innovation along with a marketing blitz.

We asked Monte Zweben, co-founder and chief executive officer of Splice Machine, about the fuss and excitement around SQL-on-Hadoop. Monte is no newcomer to the technology space. He is a NASA alumnus and founded Red Pepper Software, which merged with PeopleSoft in 1996. He later led Blue Martini to a billion-dollar market capitalization on NASDAQ in 2000. His latest venture, Splice Machine, provides a transactional SQL-on-Hadoop database designed for real-time Big Data applications. Monte is also currently a board member of Rocket Fuel Inc., which was ranked among Hadoopsphere's top Big Data influencers of 2013 in the stock performers category. During the interview, we also tried to probe his motivation and vision for the Hadoop technology sphere.

Here's what he had to say:

Why do we need SQL-on-Hadoop?


- SQL-on-Hadoop solutions have become very popular recently because they address the shortcomings of Hadoop and provide a scale-out alternative to traditional RDBMSs. Because Hadoop requires specialized Java programs to access data, it had become the "roach motel" of Big Data: easy to get data in, but hard to get it out. SQL-on-Hadoop solutions dramatically improve access to data in Hadoop because most data and business analysts are well-trained users of SQL. Existing SQL tools and Business Intelligence (BI) platforms can now connect to Hadoop data through a standard ODBC connection, and SQL applications can now update and act on that data.
 

For existing databases experiencing scaling issues, SQL-on-Hadoop solutions can provide a full SQL database that scales out on commodity hardware. Because they speak standard SQL, they can eliminate application rewrites when moving to scale-out technology. With Hadoop's scalability proven at petabyte scale on inexpensive servers, SQL-on-Hadoop also provides a highly scalable data platform that does not require expensive, specialized hardware.
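As a minimal sketch of what that access pattern looks like from an application's point of view, the snippet below reads Hadoop-resident data through a standard JDBC connection using ordinary ANSI SQL. The JDBC URL, credentials, and table are illustrative placeholders under assumed names, not a specific vendor's API.

// Hypothetical example: querying a SQL-on-Hadoop engine over JDBC.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class SqlOnHadoopQuery {
    public static void main(String[] args) throws Exception {
        // Placeholder connection string; a real deployment would use the
        // vendor's JDBC/ODBC driver, host, and port.
        String url = "jdbc:splice://hadoop-cluster:1527/splicedb";

        try (Connection conn = DriverManager.getConnection(url, "app_user", "secret");
             Statement stmt = conn.createStatement();
             // Standard SQL -- no specialized MapReduce code is needed to read the data.
             ResultSet rs = stmt.executeQuery(
                     "SELECT region, SUM(amount) AS total " +
                     "FROM sales GROUP BY region ORDER BY total DESC")) {
            while (rs.next()) {
                System.out.printf("%s -> %.2f%n",
                        rs.getString("region"), rs.getDouble("total"));
            }
        }
    }
}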

Is the space already getting crowded, with many vendors pitching SQL-on-Hadoop solutions?


- Yes, but not all SQL-on-Hadoop solutions are equal. For instance, Splice Machine provides a transactional SQL-on-Hadoop database for real-time Big Data applications, whether operational or analytical. Because it is built on the proven Hadoop and HBase stacks, Splice Machine takes the best of both SQL and NoSQL database solutions to deliver a massively scalable database with robust SQL support, secondary indexes, join optimizations and transactional integrity.
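As a minimal sketch of what transactional integrity means in practice on such a database, the following hypothetical JDBC snippet groups two updates into a single ACID transaction that either commits or rolls back as a unit. The connection string and table are assumed placeholders for illustration, not Splice Machine's documented API.

// Hypothetical example: an ACID transaction against a SQL-on-Hadoop database.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class TransactionalUpdate {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:splice://hadoop-cluster:1527/splicedb"; // placeholder URL
        try (Connection conn = DriverManager.getConnection(url, "app_user", "secret")) {
            conn.setAutoCommit(false); // group both updates into one transaction
            try (PreparedStatement debit = conn.prepareStatement(
                         "UPDATE accounts SET balance = balance - ? WHERE id = ?");
                 PreparedStatement credit = conn.prepareStatement(
                         "UPDATE accounts SET balance = balance + ? WHERE id = ?")) {
                debit.setBigDecimal(1, new java.math.BigDecimal("100.00"));
                debit.setInt(2, 1);
                debit.executeUpdate();

                credit.setBigDecimal(1, new java.math.BigDecimal("100.00"));
                credit.setInt(2, 2);
                credit.executeUpdate();

                conn.commit(); // both updates become visible atomically
            } catch (SQLException e) {
                conn.rollback(); // on failure, neither update is applied
                throw e;
            }
        }
    }
}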


Do you think Hadoop can help you build another billion-dollar company?


- Absolutely, given the scaling issues of current databases and the dramatic increase of data in many companies. More and more enterprises are beginning to understand the critical nature of Big Data management and its long-term implications for their application infrastructure and business.

 

Your top 3 predictions for the Hadoop ecosystem in 2014.

- Hadoop will move from being a static repository for data science to a platform that powers real-time, interactive applications.
- Hadoop will complement relational SQL databases and, in some cases, replace them.
- Hadoop-based ETL will become the new norm for large data sets.
