Facebook predicts what you would like to see on a web page

The conventional wisdom for an HTTP web page request between a browser and a server is to transmit the response as a whole, in the form of a structured markup language. However, this may not be how things work in today’s social networks.

Today’s smart social networks like Facebook use Hadoop- and Hive-driven intelligence to predict which resources of a web page have a predetermined likelihood of being included in the response to a future request. Resources such as JavaScript files, style sheets, and images are identified by MapReduce and other computational algorithms running on distributed systems that analyze billions of entries in the resource logs. The identified resources are stored or cached in server-side hash maps, and the page is rendered in a phased manner.
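The patent does not publish its log-analysis code, so the following is only a minimal sketch of the idea: a Hadoop Streaming mapper and reducer in Python that count how often each resource shows up in responses for a page, producing the kind of per-resource frequency a predictor could threshold against. The tab-separated log format (timestamp, page_id, resource_url) and the cutoff value are assumptions made for illustration.

# mapper.py -- hypothetical sketch: emit one (page, resource) pair per log entry.
# Assumed input: tab-separated lines "timestamp<TAB>page_id<TAB>resource_url".
import sys

for line in sys.stdin:
    fields = line.rstrip("\n").split("\t")
    if len(fields) < 3:
        continue  # skip malformed log entries
    _, page_id, resource_url = fields[:3]
    # Emit page_id and resource_url as a composite key with a count of 1.
    print("%s\t%s\t1" % (page_id, resource_url))

# reducer.py -- hypothetical sketch: sum counts per (page, resource) pair and
# keep only resources frequent enough to be worth caching server-side.
import sys

THRESHOLD = 1000  # assumed cutoff; the patent leaves the selection criteria open
current_key, count = None, 0

def flush(key, total):
    if key is not None and total >= THRESHOLD:
        print("%s\t%s\t%d" % (key[0], key[1], total))

for line in sys.stdin:
    page_id, resource_url, n = line.rstrip("\n").split("\t")
    key = (page_id, resource_url)
    if key == current_key:
        count += int(n)
    else:
        flush(current_key, count)  # key changed: emit the finished pair
        current_key, count = key, int(n)
flush(current_key, count)  # emit the final pair

A rough invocation would be hadoop jar hadoop-streaming.jar -D stream.num.map.output.key.fields=2 -input /logs -output /freq -mapper mapper.py -reducer reducer.py, where the -D option makes both page_id and resource_url part of the sort key so each pair reaches the reducer contiguously.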

This method and system is described in US patent 8,108,377, “Predictive resource identification and phased delivery of structured documents” (Inventors: Changhao Jiang, Xiaoliang Wei; Assignee: Facebook, Inc., Palo Alto, CA).

Where the disclosure goes on to describe its Hadoop and Hive usage, we find that:
“…the resource logging, analyzing, filtering, predicting, and/or selecting operations discussed above can be implemented using Hive to accomplish ad hoc querying, summarization and data analysis, as well as incorporating statistical modules by embedding mapper and reducer scripts, such as Python or Perl scripts that implement a statistical algorithm. Other development platforms that can leverage Hadoop or other Map-Reduce execution engines can be used as well…”
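To make the embedded-script idea concrete, here is a hypothetical Python transform script of the kind that could be plugged into a Hive query through its TRANSFORM clause: Hive streams the selected rows to the script’s stdin as tab-separated text and reads rows back from its stdout. The scoring formula below is purely illustrative; the patent only says such scripts “implement a statistical algorithm”.

# score_resources.py -- hypothetical Hive TRANSFORM script.
# Reads tab-separated (resource_url, request_count) rows from stdin.
import sys
import math

for line in sys.stdin:
    try:
        resource_url, request_count = line.rstrip("\n").split("\t")
        n = int(request_count)
    except ValueError:
        continue  # skip malformed rows
    # Assumed statistic: log-damped popularity, so a few extremely hot
    # resources do not drown out the long tail.
    score = math.log1p(n)
    print("%s\t%.4f" % (resource_url, score))

From Hive this would be invoked roughly as SELECT TRANSFORM(resource_url, request_count) USING 'python score_resources.py' AS (resource_url, score) FROM resource_log_summary; the table and column names are, again, illustrative.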

Must say, this is one of the smarter implementations of predictive computation: it reduces latency, limits network load, and overall leads to a better user experience. Remember, since this is patented, check with the assignee before any commercial usage.

