Hadoop's 10 in LinkedIn's 10

LinkedIn, the pioneering professional social network, has turned 10 years old. One of the hallmarks of its journey has been its technical accomplishments and significant contributions to open source, particularly in the last few years. Hadoop occupies a central place in its technical environment, powering some of the most heavily used features of its desktop and mobile apps. As LinkedIn enters the second decade of its existence, here is a look at 10 major projects and products powered by Hadoop in its data ecosystem.

1)      Voldemort:

Arguably the most famous export of LinkedIn engineering, Voldemort is a distributed key-value storage system. Named after the antagonist of the Harry Potter series and influenced by Amazon’s Dynamo, the wizardry in this database extends to its self-healing features. Its layered, pluggable architecture can be deployed in highly available configurations, and implementations are used for both read-only and read-write use cases.
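For a flavor of the programming model, here is a minimal sketch of reading and writing a value through Voldemort's Java client (the bootstrap URL, store name, and key are placeholders):

    import voldemort.client.ClientConfig;
    import voldemort.client.SocketStoreClientFactory;
    import voldemort.client.StoreClient;
    import voldemort.client.StoreClientFactory;
    import voldemort.versioning.Versioned;

    public class VoldemortSketch {
        public static void main(String[] args) {
            // Bootstrap cluster metadata from a (placeholder) Voldemort node
            StoreClientFactory factory = new SocketStoreClientFactory(
                new ClientConfig().setBootstrapUrls("tcp://localhost:6666"));
            StoreClient<String, String> client = factory.getStoreClient("my_store");

            // Reads come back wrapped in a Versioned so that conflicting
            // writes can be detected and resolved
            Versioned<String> value = client.get("member:42"); // null if absent
            client.put("member:42", "updated profile blob");   // put also accepts a bare value
        }
    }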

2)      Azkaban:

A batch job scheduling system with a friendly UI, Azkaban aims to make batch programming easy and visually appealing. “It allows the independent pieces to be declaratively assembled into a single workflow, and for that workflow to be scheduled to run periodically...includes things like email notifications of success or failure, resource locking, retry on failure, log collection, historical job run time information, and so on.”
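To make the declarative flavor concrete, a two-step workflow can be wired together with simple properties-style job files along these lines (the file names, jar, and classes are illustrative, not from LinkedIn's actual flows):

    # extract.job
    type=command
    command=hadoop jar etl.jar com.example.Extract

    # report.job -- runs only after extract succeeds
    type=command
    command=hadoop jar etl.jar com.example.Report
    dependencies=extract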

3)      DataFu:

DataFu is a collection of Pig UDFs (user-defined functions) for data analysis on Hadoop. As the team at LinkedIn developed and refined its UDFs for the ‘People You May Know’ and ‘Skills’ features, it collected the well-tested functions into this library. It contains functions for tasks such as PageRank, quantiles and variance, bag operations, and set operations.
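Under the hood these are ordinary Java classes extending Pig's EvalFunc. As an illustration of the style (this is a made-up example, not an actual DataFu class), a UDF that counts the distinct items in a Pig bag might look like this:

    import java.io.IOException;
    import java.util.HashSet;
    import java.util.Set;
    import org.apache.pig.EvalFunc;
    import org.apache.pig.data.DataBag;
    import org.apache.pig.data.Tuple;

    // Illustrative only: the kind of small, reusable helper DataFu packages up
    public class DistinctCount extends EvalFunc<Long> {
        @Override
        public Long exec(Tuple input) throws IOException {
            if (input == null || input.size() == 0)
                return null;
            DataBag bag = (DataBag) input.get(0);   // first argument is the bag
            Set<Object> seen = new HashSet<Object>();
            for (Tuple t : bag)
                seen.add(t.get(0));
            return (long) seen.size();
        }
    }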

4)      Decomposer:

Decomposer is a collection of Java implementations of matrix decomposition algorithms for extremely large matrices. It currently contains a Singular Value Decomposition (SVD) implementation, and the library is in the process of being ‘absorbed’ into the Apache Mahout project.
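As a quick refresher, SVD factors a matrix A into A = U Σ Vᵀ, where U and V have orthonormal columns and Σ is a diagonal matrix of singular values. Keeping only the largest singular values yields a compact low-rank approximation, which is what makes the technique attractive for matrices as large as a social graph.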

5)      Kafka:

Kafka is a distributed publish-subscribe messaging system. Although at first glance it may look similar to Apache Flume, it is really intended to serve as a message broker. “Kafka aims to unify offline and online processing by providing a mechanism for parallel load into Hadoop as well as the ability to partition real-time consumption over a cluster of machines.”
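A minimal sketch of publishing an event with Kafka's newer Java producer API gives a sense of the model (the broker address, topic name, and payload are placeholders; the original Scala-era client differs in detail):

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.Producer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    public class KafkaSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092"); // placeholder broker
            props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
            props.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");

            Producer<String, String> producer = new KafkaProducer<String, String>(props);
            // Each record is appended to a partition of the topic; many consumers,
            // including Hadoop loaders, can read the same stream in parallel
            producer.send(new ProducerRecord<String, String>(
                "page-views", "member42", "profile-view-event"));
            producer.close();
        }
    }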

6)      White Elephant:

White Elephant parses Hadoop logs and provides a visualization dashboard of Hadoop cluster statistics, including total task time, slots used, CPU time, and failed job counts. Its server is a JRuby application (also deployable on Tomcat); the data is stored in an in-memory HyperSQL database, and charts are rendered with Rickshaw.

7)      Helix:

Helix, built on top of Apache ZooKeeper, is a generic cluster management framework for the automatic management of partitioned, replicated, and distributed resources hosted on a cluster of nodes. LinkedIn uses “Helix to manage our search-as-a-service clusters hosting multiple search applications, Databus, our data change capture component, and Espresso, our indexed, timeline-consistent, document-oriented data store.”
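Roughly, a cluster and a partitioned resource are declared through Helix's admin API along these lines (the cluster and resource names are made up, and setup details such as registering instances and state model definitions are omitted; specifics vary by Helix version):

    import org.apache.helix.manager.zk.ZKHelixAdmin;

    public class HelixSketch {
        public static void main(String[] args) {
            // Helix keeps its cluster metadata in ZooKeeper (placeholder address)
            ZKHelixAdmin admin = new ZKHelixAdmin("localhost:2181");
            admin.addCluster("search-cluster");

            // Declare a resource with 8 partitions managed by the MasterSlave
            // state model, then place 2 replicas per partition across live nodes
            admin.addResource("search-cluster", "member-index", 8, "MasterSlave");
            admin.rebalance("search-cluster", "member-index", 2);
        }
    }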

8)      Norbert:

Norbert, implemented in Scala, wraps ZooKeeper and Netty and uses Protocol Buffers to provide easy cluster management and workload distribution. It is claimed to let developers “quickly distribute a simple client/server architecture to create a highly scalable architecture capable of handling heavy traffic.”

9)      Giraph:

Giraph, inspired by Google's Pregel, uses the Bulk Synchronous Parallel (BSP) model to run graph algorithms on Hadoop clusters. Like its social-media counterparts Facebook and Twitter, LinkedIn takes an active interest in the project, and today it uses Giraph for computations over its social graph.
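In the BSP model, a vertex program runs in supersteps: each vertex processes the messages sent to it in the previous superstep, updates its own value, and sends new messages along its edges. A condensed sketch of Giraph's classic single-source shortest paths example shows the shape (the source-vertex check is simplified to vertex id 0):

    import org.apache.giraph.edge.Edge;
    import org.apache.giraph.graph.BasicComputation;
    import org.apache.giraph.graph.Vertex;
    import org.apache.hadoop.io.DoubleWritable;
    import org.apache.hadoop.io.FloatWritable;
    import org.apache.hadoop.io.LongWritable;

    public class ShortestPaths extends BasicComputation<
        LongWritable, DoubleWritable, FloatWritable, DoubleWritable> {

        @Override
        public void compute(Vertex<LongWritable, DoubleWritable, FloatWritable> vertex,
                            Iterable<DoubleWritable> messages) {
            if (getSuperstep() == 0) {
                vertex.setValue(new DoubleWritable(Double.MAX_VALUE));
            }
            // Distance is 0 at the source, else the best distance heard so far
            double minDist = vertex.getId().get() == 0 ? 0d : Double.MAX_VALUE;
            for (DoubleWritable message : messages) {
                minDist = Math.min(minDist, message.get());
            }
            if (minDist < vertex.getValue().get()) {
                vertex.setValue(new DoubleWritable(minDist));
                for (Edge<LongWritable, FloatWritable> edge : vertex.getEdges()) {
                    sendMessage(edge.getTargetVertexId(),
                        new DoubleWritable(minDist + edge.getValue().get()));
                }
            }
            vertex.voteToHalt(); // sleeps until woken by a new message
        }
    }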

10)  Avatara:

“Avatara is LinkedIn's scalable, low latency, and highly-available OLAP system for ‘sharded’ multi-dimensional queries in the time constraints of a request/response loop.” Used in “Who's Viewed My Profile?”, Avatara has an offline engine that computes cubes in batch and an online engine that serves queries in real time.

(Although Pig, Avro, and ZooKeeper are key parts of the ecosystem, they have been skipped here on the assumption that they form core layers of any Hadoop deployment.)

And finally, if you are still not convinced of LinkedIn's Hadoop story, here is a quick snapshot of Hadoop-skilled employees at various social networks, with data sourced from LinkedIn profiles. An additional measure of employees associated with the Apache Software Foundation has been included, though the numbers reflected in LinkedIn profiles may differ from the actual figures.