
Taking stock of Oracle's Big Data arsenal

Oracle just concluded its big, pompous showcase event, Oracle Open World. Big Data occupied the limelight from kick-off to close. A few major announcements were made through the week, and some of them, like the additional security for the Big Data Appliance, were more hype than technical breakthrough. Let's attempt a quick analysis of what Oracle has in store for us and what it does not.

The constituents of Oracle's Big Data offering

Listed below are the key constituents of Oracle's Big Data offering for its customers:
Oracle NoSQL Database
A horizontally scaled key-value database for web services and the cloud.
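
To make the key-value model concrete, here is a minimal sketch against the Oracle NoSQL Database Java driver. The store name, helper host:port, and key path are placeholder assumptions, not a real deployment; records sharing a major key path hash to the same partition, which is what gives the store its horizontal scaling.

```java
// Minimal put/get sketch with the Oracle NoSQL Database Java API.
// Store name and helper host:port are placeholders, not defaults.
import java.nio.charset.StandardCharsets;

import oracle.kv.KVStore;
import oracle.kv.KVStoreConfig;
import oracle.kv.KVStoreFactory;
import oracle.kv.Key;
import oracle.kv.Value;
import oracle.kv.ValueVersion;

public class KvSketch {
    public static void main(String[] args) {
        // Connect to a running store (placeholder name and address).
        KVStore store = KVStoreFactory.getStore(
                new KVStoreConfig("kvstore", "localhost:5000"));

        // Keys are hierarchical major/minor paths; values are
        // opaque byte arrays.
        Key key = Key.createKey("users", "42");
        store.put(key, Value.createValue(
                "alice".getBytes(StandardCharsets.UTF_8)));

        // Read the value back.
        ValueVersion vv = store.get(key);
        System.out.println(new String(
                vv.getValue().getValue(), StandardCharsets.UTF_8));

        store.close();
    }
}
```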

Oracle Database
Oracle Database 11g features a wide range of options to meet specific customer requirements in the areas of performance and availability, security and compliance, data warehousing and analytics, unstructured data, and manageability. With Oracle Database 12c, the company has built an in-memory database to compete with SAP HANA.

Oracle Big Data Appliance
A pre-integrated full-rack configuration with 18 of Oracle's Sun servers, combining the Cloudera Hadoop distribution, Oracle NoSQL Database, and the Big Data Connectors.

Oracle Data Integrator Enterprise Edition
Oracle's flagship ETL/ELT tool, providing high-performance bulk data movement and data transformation.

Oracle Big Data Connectors
A software suite that integrates Apache Hadoop with Oracle software, including Oracle Database, Oracle Endeca Information Discovery, and Oracle Data Integrator.
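
The connectors themselves are configured tools rather than a coding API, but the pattern they optimize, shipping Hadoop job output into an Oracle table, can be illustrated with plain JDBC. This is a hedged sketch only: the connect string, table, and credentials are hypothetical, and the actual connectors (e.g. Oracle Loader for Hadoop) use far more efficient bulk paths than row-at-a-time batch inserts.

```java
// Sketch of the general Hadoop-output-to-Oracle pattern via JDBC.
// All connection details and the target table are hypothetical.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class HadoopOutputToOracle {
    public static void main(String[] args) throws Exception {
        // Requires the Oracle JDBC driver (ojdbc) on the classpath.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:oracle:thin:@//dbhost:1521/ORCL", "scott", "tiger")) {
            conn.setAutoCommit(false);
            try (PreparedStatement ps = conn.prepareStatement(
                    "INSERT INTO cdr_summary (caller, total_calls)"
                            + " VALUES (?, ?)")) {
                // In a real job these rows would be streamed from the
                // reducer's part files on HDFS; hard-coded here to
                // keep the sketch self-contained.
                String[][] rows = { {"555-0100", "12"}, {"555-0101", "7"} };
                for (String[] row : rows) {
                    ps.setString(1, row[0]);
                    ps.setInt(2, Integer.parseInt(row[1]));
                    ps.addBatch();
                }
                ps.executeBatch();
            }
            conn.commit();
        }
    }
}
```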

Oracle Advanced Analytics
Extends the Oracle database into a comprehensive advanced analytics platform through two major components: Oracle R Enterprise and Oracle Data Mining.

Oracle Exadata Database Machine
The next-generation database machine, combining massive memory and low-cost disks to deliver high performance and petabyte scalability for all applications, including Online Transaction Processing (OLTP), Data Warehousing (DW), and consolidation of mixed workloads.

Oracle Exalytics In-Memory Machine
An engineered system for advanced analytics, with in-memory analytics software and hardware optimised to work together, plus advanced data visualisation and exploration to quickly provide actionable insight from large amounts of data.

The + and –

A quick snapshot of where Oracle can win and where it can lose in comparison to other Big Data vendors:

| Oracle winners | Oracle losers |
| --- | --- |
| Industry-leading relational database | Exadata and other offerings are still not compatible with all Oracle DB versions, or with non-Oracle DBs |
| Strong tie-up with Cloudera (the Hadoop distribution company) and a high acquisition possibility | Uses a connector-based Hadoop architecture rather than leveraging native support |
| With major acquisitions in the past, Oracle now spans hardware, software, and application services | Finds it tough to hold ground in heterogeneous vendor software environments |
| Provides in-memory computing capabilities | Flash cache is so far available only for OLTP |
| Rich BI and visualization layer | Only limited advanced predictive modeling capabilities (e.g. R) |
| Convenient mobile interface | Virtualization is still a big question for Exadata |
| Advanced BI appliance, Exalytics | New SPARC servers mean a learning curve for Solaris administration |
| Now offers cloud deployments | Oracle hardware list prices do not cover software and support costs; TCO on the total bundle can shoot up as high as 10x |
| Aggressive management vision and marketing budgets | The CEO can ditch Oracle Open World customers for other passions |

Larry Ellison off at the America's Cup sailing competition while the audience expects him at Oracle Open World
(original cartoon author: unknown)
