Taking stock of Oracle's Big Data arsenal

Oracle just concluded its big, pompous, paid showcase event, Oracle OpenWorld. Big Data occupied the limelight right from the kickoff to the closing. A few major announcements were made through the week, and some of them, like additional security for the Big Data Appliance, were more hype than major technical breakthrough. Let's attempt a quick analysis of what Oracle has in store for us and what it does not.

Constituents of Oracle's Big Data offering:

Listed below are the key constituents of Oracle's Big Data offering for its customers:
Oracle NoSQL Database
A horizontally scaled, key-value database for web services and the cloud.
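
To make the key-value model concrete, here is a minimal sketch of a put/get round trip using the classic Oracle NoSQL Database Java driver (the oracle.kv package); the store name, helper host:port, and key components are hypothetical placeholders for an existing deployment.

```java
import oracle.kv.KVStore;
import oracle.kv.KVStoreConfig;
import oracle.kv.KVStoreFactory;
import oracle.kv.Key;
import oracle.kv.Value;
import oracle.kv.ValueVersion;

public class KvQuickStart {
    public static void main(String[] args) {
        // Hypothetical store name and helper host; adjust to your deployment.
        KVStore store = KVStoreFactory.getStore(
                new KVStoreConfig("kvstore", "node1.example.com:5000"));

        // Keys are hierarchical (major/minor path components); values are opaque byte arrays.
        Key key = Key.createKey("user", "1001");
        store.put(key, Value.createValue("{\"name\":\"Alice\"}".getBytes()));

        // Read the record back; get() returns null if the key is absent.
        ValueVersion vv = store.get(key);
        if (vv != null) {
            System.out.println(new String(vv.getValue().getValue()));
        }
        store.close();
    }
}
```

Records are distributed across partitions by the major key path, which is what lets the store scale horizontally while keeping related records together.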

Oracle Database
Oracle Database 11g offers a wide range of options to meet specific customer requirements in the areas of performance and availability, security and compliance, data warehousing and analytics, unstructured data, and manageability. With Oracle Database 12c, the company has built an in-memory database to compete with SAP HANA.
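
As a rough illustration of the 12c In-Memory option mentioned above, the JDBC sketch below marks a table for the in-memory column store and runs an analytic aggregation against it; the connection URL, credentials, and the sales table are hypothetical, and the statement assumes the In-Memory option is licensed and INMEMORY_SIZE has been configured.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class InMemoryColumnStoreDemo {
    public static void main(String[] args) throws Exception {
        // Hypothetical connection details; replace with a real service name and credentials.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:oracle:thin:@//db.example.com:1521/orcl", "scott", "tiger");
             Statement stmt = conn.createStatement()) {

            // Ask the database to populate SALES into the in-memory column store.
            stmt.execute("ALTER TABLE sales INMEMORY PRIORITY HIGH");

            // Analytic scans like this can then be served from the columnar in-memory copy;
            // the optimizer decides whether to use it.
            try (ResultSet rs = stmt.executeQuery(
                    "SELECT prod_id, SUM(amount_sold) FROM sales GROUP BY prod_id")) {
                while (rs.next()) {
                    System.out.println(rs.getInt(1) + " -> " + rs.getDouble(2));
                }
            }
        }
    }
}
```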

Oracle Big Data Appliance
A pre-integrated, full-rack configuration of 18 of Oracle's Sun servers that combines the Cloudera Hadoop distribution, Oracle NoSQL Database, and the Oracle Big Data Connectors.

Oracle Data Integrator Enterprise Edition
Oracle's flagship ETL/ELT tool, providing high-performance bulk data movement and data transformation.

Oracle Big Data Connectors
A software suite that integrates Apache Hadoop with Oracle software, including Oracle Database, Oracle Endeca Information Discovery, and Oracle Data Integrator.

Oracle Advanced Analytics
Extends the Oracle database into a comprehensive advanced analytics platform through two major components: Oracle R Enterprise and Oracle Data Mining.
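
As a small sketch of the Oracle Data Mining side, the JDBC snippet below scores rows in place with the SQL PREDICTION function; the connection details, the customers table, and a pre-trained classification model named churn_model are all hypothetical assumptions.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class OdmScoringSketch {
    public static void main(String[] args) throws Exception {
        // Hypothetical connection, table and model; CHURN_MODEL is assumed to exist already.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:oracle:thin:@//db.example.com:1521/orcl", "dm_user", "dm_pass");
             Statement stmt = conn.createStatement();
             // Scoring happens inside the database, so the data never leaves Oracle.
             ResultSet rs = stmt.executeQuery(
                     "SELECT cust_id, PREDICTION(churn_model USING *) AS churn_flag "
                     + "FROM customers")) {
            while (rs.next()) {
                System.out.println(rs.getString(1) + " -> " + rs.getString(2));
            }
        }
    }
}
```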

Oracle Exadata Database Machine
The next-generation database machine, Oracle Exadata combines massive memory and low-cost disks to deliver high performance and petabyte scalability for all applications, including Online Transaction Processing (OLTP), Data Warehousing (DW), and consolidation of mixed workloads.

Oracle Exalytics In-Memory Machine
An engineered system for advanced analytics that provides in-memory analytics software and hardware optimised to work together, along with advanced data visualisation and exploration, to quickly deliver actionable insight from large amounts of data.

The + and –

A quick snapshot of where Oracle can win and where it can lose in comparison to other Big Data vendors.

Oracle winners (+) vs. Oracle losers (–):

+ Industry-leading relational database
– Exadata and other offerings are still not compatible with all Oracle DB versions, or with non-Oracle databases

+ Strong tie-up with Cloudera (the Hadoop distribution company) and a high possibility of acquisition
– Utilizes a connector-based Hadoop architecture without leveraging native support

+ With major acquisitions in the past, Oracle now has hardware, software, and application services
– Finds it tough to hold ground in heterogeneous, multi-vendor software environments

+ Provides in-memory computing capabilities
– Flash cache is only available for OLTP so far

+ Has a rich BI and visualization layer
– Has only limited advanced predictive modeling capabilities (such as R)

+ Provides a convenient mobile interface
– Virtualization is still a big question for Exadata

+ Has an advanced BI appliance called Exalytics
– New SPARC servers come with a learning curve for Solaris administration

+ Now offers cloud deployments
– Oracle hardware list prices do not cover software and support costs; TCO on the total bundle can shoot up as high as 10x

+ Aggressive management vision and marketing budgets
– The CEO can ditch Oracle OpenWorld customers for other passions

Larry Ellison off at the America's Cup sailing competition while the audience expects him at Oracle OpenWorld
(original cartoon author: unknown)
