
Disclaimer




Hadoopsphere.com (referred to hereinafter as "the site") presents purely the blogging opinions and personal views of the author.
The site has no association with the Apache Software Foundation.


All opinions and content posted on hadoopsphere.com are purely for the sake of knowledge sharing with other like-minded Hadoop enthusiasts. If any material, text, image, chart, presentation, or document is found to violate any norms, including copyright or trademark, the violation should be considered inadvertent and unintentional. Any such incident should be brought to Hadoopsphere.com's attention through the contact or comments option on the site so that it can be dealt with appropriately.

The site uses analytics to analyze traffic trends. The site also incorporates advertising on its various posts, web pages, and search options. The analytics software may collect information about the user; this collection is handled entirely by the analytics software, and the site has no control over how, why, when, where, or what information is collected.

The site does not knowingly have a conflict of interest with any organization, corporation, or individual. If any information involving a conflict of interest is found on the site, please bring it to the site's attention through the contact or comments option.

The site, including its content, is not meant for any legal proceedings or legal matters and should be omitted from any such proceedings or matters.

Comments posted on the site are moderated, and any comment found to be spam is not published to the general public.

The content on this site is copyrighted, and some rights are reserved by Hadoopsphere.com. As with any blogging site, syndication and cross-referencing are encouraged. Permission for such use may be requested through the contact or comments option on the site.

