
Are Big Data appliances worth the bucks?

While setting up the Big Data technical environment, one of the questions most enterprises grapple with is whether to go for an appliance or a cluster. A Big Data appliance can be defined as an integrated system that combines hardware, software, storage and network devices to enable big data use cases. A Big Data cluster, on the other hand, can be defined as a set of dedicated nodes with the required hardware, big data processing software and attached storage, integrated together via network devices.

While appliances are usually known to involve a large payout to the vendor, comparative studies have tried to show that their Total Cost of Ownership (TCO) may in certain cases be less than or equal to that of a cluster setup. Let’s take a look at whether appliances are worth the money spent.
| Appliance | Cluster |
| --- | --- |
| Higher initial payout | Lower initial payout, with the option to acquire new resources as you scale out |
| Standard configuration across nodes | Ability to mix and match configurations based on the distinct needs of name nodes and data nodes |
| High probability of vendor lock-in | More freedom to switch vendors and associated software and components |
| Field-tested Hadoop and ecosystem project versions offered as a package | Need to make difficult component choices and run version-compatibility tests |
| Lower setup time and enablement effort | Higher setup time and labor effort |
| Eliminates the learning curve for administrators on each component | Requires a high comfort level with, and education on, the required components |
| Could have issues installing add-on software | Flexibility in installing additional software |
| New hardware investment | Possibility of leveraging existing hardware |
| Need to read the fine print in the contract on software upgrades and pricing | Better control over software upgrades and pricing |
| Additional scaling could lead to technical and pricing challenges | More flexibility in scaling out |
| Tied to the SQL-on-Hadoop solution offered by the vendor | Free to choose your preferred SQL-on-Hadoop solution |
| Less effort to restore a failed node under a single support subscription | Troubleshooting may require following up with and coordinating among multiple vendors |
| May involve migration costs | No major migration cost expected, since additional nodes can be added to the existing cluster |
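While the qualitative trade-offs above tend to dominate the decision, the TCO angle can also be framed quantitatively. The sketch below is a back-of-the-envelope illustration only; every cost category and figure is an assumption, not a vendor quote.

```python
# Back-of-the-envelope TCO comparison (illustrative only; every figure below is hypothetical).

def tco(upfront_hw, software_per_year, support_per_year, ops_labor_per_year, years=3):
    """Total cost of ownership over a planning horizon, in a single currency unit throughout."""
    return upfront_hw + years * (software_per_year + support_per_year + ops_labor_per_year)

# Hypothetical inputs: the appliance carries a larger upfront payout but lower ongoing
# operations labor, while the self-built cluster trades cheaper hardware for more
# integration and administration effort.
appliance_tco = tco(upfront_hw=500_000, software_per_year=0,
                    support_per_year=60_000, ops_labor_per_year=40_000)
cluster_tco = tco(upfront_hw=300_000, software_per_year=30_000,
                  support_per_year=30_000, ops_labor_per_year=100_000)

print(f"Appliance TCO: {appliance_tco}, cluster TCO: {cluster_tco}")
```

Which option wins depends entirely on the inputs, which is exactly why the comparative studies mentioned above can go either way for different organizations.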

Recommended steps to arrive at a decision:
  1. Collect use cases, associated data volumes and growth projections.
  2. Determine the Hadoop/Big Data ecosystem layers that you will invest in over the next 3 years.
  3. Analyze the software and hardware components being offered vis-à-vis the requirements listed in steps 1 and 2 above.
  4. Perform benchmark tests (if the required skills are available).
  5. Compare metrics across appliances from different vendors and cluster machines with varied configurations.
  6. Arrive at a qualitative and quantitative comparison across the options to help you choose a winner (see the sizing sketch below for a simple quantitative starting point).
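As a starting point for steps 1 and 5, here is a minimal sizing sketch. The growth rate, replication factor, per-node capacity and headroom below are assumed values that you would replace with your own projections and vendor specifications.

```python
import math

# Illustrative cluster sizing from data growth projections (all parameters are assumptions).

def nodes_needed(raw_tb, replication=3, usable_tb_per_node=24.0, capacity_headroom=0.7):
    """Estimate the data nodes required for a given raw data volume."""
    effective_tb = raw_tb * replication / capacity_headroom  # replicated data plus free-space headroom
    return math.ceil(effective_tb / usable_tb_per_node)

current_volume_tb = 100.0   # step 1: today's raw data volume (hypothetical)
yearly_growth = 0.5         # step 1: assumed 50% growth per year

# Project volumes and node counts over the 3-year horizon from step 2.
for year in range(1, 4):
    projected_tb = current_volume_tb * (1 + yearly_growth) ** year
    print(f"Year {year}: ~{projected_tb:.0f} TB raw -> ~{nodes_needed(projected_tb)} data nodes")
```

The year-by-year node counts then become inputs to the cost comparison in steps 5 and 6, for example by weighing a per-node cluster cost against the appliance's scaling tiers.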

