
Security Architecture for Apache Hadoop


Over the years, there has been a growing clamor for a robust Apache Hadoop security framework. Considering the massive amount of data that cluster nodes hold, there is an increasing need to focus on security architecture for the cluster. Further, there is growing awareness of the regulatory and legal norms that enterprise firms need to follow.

hadoopsphere.com presents below a security architecture that can be adapted for your Apache Hadoop cluster. The tools may vary, from off-the-shelf utilities to custom in-house monitoring programs. It is essential that each firm, depending on its business use case, puts the required guards and checks in place to protect its Hadoop nodes. The following 10 components should always serve as your discussion guide while implementing a security architecture for Apache Hadoop.


Key components required in a security architecture for Apache Hadoop:


1. Role-based authorization:
- Ensure separation of duties
- Restrict functional access
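
As a concrete sketch of this component, HDFS (2.4 and later) supports POSIX-style ACLs through the FileSystem API, provided dfs.namenode.acls.enabled is set on the NameNode. The path, group and user names below are illustrative placeholders, not a prescription:

```java
import java.util.Arrays;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.AclEntry;
import org.apache.hadoop.fs.permission.AclEntryScope;
import org.apache.hadoop.fs.permission.AclEntryType;
import org.apache.hadoop.fs.permission.FsAction;

public class RoleBasedAcl {
    public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());
        // Illustrative path and role names; substitute your own.
        Path reports = new Path("/data/finance/reports");
        fs.modifyAclEntries(reports, Arrays.asList(
            // Analysts may read but not modify (separation of duties).
            new AclEntry.Builder()
                .setScope(AclEntryScope.ACCESS)
                .setType(AclEntryType.GROUP)
                .setName("analysts")
                .setPermission(FsAction.READ_EXECUTE)
                .build(),
            // Only the ETL service account may write.
            new AclEntry.Builder()
                .setScope(AclEntryScope.ACCESS)
                .setType(AclEntryType.USER)
                .setName("etl")
                .setPermission(FsAction.ALL)
                .build()));
    }
}
```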

2. Admin and Configuration:
- Role-based administration
- Configurable node and cluster parameters

3. Authentication framework:
- Validate nodes
- Validate client applications for access to the cluster and MapReduce jobs
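
Kerberos is the standard authentication mechanism in secure Hadoop deployments. A minimal client-side login sketch is shown below; the principal and keytab path are placeholders for your environment:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.UserGroupInformation;

public class KerberosLogin {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Switch from "simple" (trust-the-caller) to Kerberos authentication.
        conf.set("hadoop.security.authentication", "kerberos");
        UserGroupInformation.setConfiguration(conf);
        // Principal and keytab path are placeholders.
        UserGroupInformation.loginUserFromKeytab(
            "etl@EXAMPLE.COM", "/etc/security/keytabs/etl.keytab");
        System.out.println("Logged in as: "
            + UserGroupInformation.getCurrentUser());
    }
}
```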

4. Audit Log:
- Log transactions
- Log activities
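
HDFS itself emits an audit trail through the NameNode's log4j audit logger. For actions above the filesystem layer, a thin application-side wrapper along the lines below (entirely illustrative, using only the JDK's own logging) keeps who/what/when in one place:

```java
import java.util.logging.Logger;

// Illustrative application-side audit helper; HDFS filesystem
// operations are already audited by the NameNode audit log.
public class AuditLog {
    private static final Logger LOG = Logger.getLogger("audit");

    public static void record(String user, String action, String resource) {
        // One line per event; the logging framework adds the timestamp.
        LOG.info(String.format("user=%s action=%s resource=%s",
            user, action, resource));
    }

    public static void main(String[] args) {
        record("etl", "SUBMIT_JOB", "wordcount-2013-04");
    }
}
```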

5. Alerts:
- Real-time alerting
- Constant monitoring

6. File encryption:
- Protect private information (SPI/BPI)
- Comply with regulatory norms
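
One common approach is to encrypt sensitive content on the client before it lands in HDFS. Below is a minimal sketch using the standard Java crypto API; key handling is deliberately simplified and would in practice be delegated to the key server of component 7:

```java
import java.security.SecureRandom;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.IvParameterSpec;

public class ClientSideEncryption {
    public static void main(String[] args) throws Exception {
        // In practice the key comes from a central key server,
        // never generated and discarded in place like this.
        KeyGenerator keyGen = KeyGenerator.getInstance("AES");
        keyGen.init(128);
        SecretKey key = keyGen.generateKey();

        // Fresh random IV per file; store it alongside the ciphertext.
        byte[] iv = new byte[16];
        new SecureRandom().nextBytes(iv);

        Cipher cipher = Cipher.getInstance("AES/CBC/PKCS5Padding");
        cipher.init(Cipher.ENCRYPT_MODE, key, new IvParameterSpec(iv));
        byte[] cipherText = cipher.doFinal("SSN=123-45-6789".getBytes("UTF-8"));

        cipher.init(Cipher.DECRYPT_MODE, key, new IvParameterSpec(iv));
        System.out.println(new String(cipher.doFinal(cipherText), "UTF-8"));
    }
}
```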

7. Key/certificate server:
- Central key management server to manage different keys for different files.
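
As a building block, the JDK's JCEKS keystore type can hold one symmetric key per protected file or dataset; a central key server would wrap storage like this behind an authenticated service. The file name, alias and password below are placeholders:

```java
import java.io.FileOutputStream;
import java.security.KeyStore;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;

public class KeyStoreSketch {
    public static void main(String[] args) throws Exception {
        char[] password = "changeit".toCharArray();  // placeholder

        // JCEKS (unlike plain JKS) can store symmetric keys.
        KeyStore store = KeyStore.getInstance("JCEKS");
        store.load(null, password);  // start with an empty store

        // One AES key per protected file/dataset, keyed by alias.
        KeyGenerator keyGen = KeyGenerator.getInstance("AES");
        keyGen.init(128);
        SecretKey key = keyGen.generateKey();
        store.setEntry("hdfs:/data/finance/reports",
            new KeyStore.SecretKeyEntry(key),
            new KeyStore.PasswordProtection(password));

        try (FileOutputStream out = new FileOutputStream("keys.jceks")) {
            store.store(out, password);
        }
    }
}
```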

8. Network security:
- Ensure secure communication between nodes, applications and other interfaces
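
Hadoop exposes its wire-security knobs through the standard configuration mechanism. The two properties below, hadoop.rpc.protection and dfs.encrypt.data.transfer, are genuine Hadoop configuration keys; in a real cluster they would be set in core-site.xml and hdfs-site.xml rather than in code:

```java
import org.apache.hadoop.conf.Configuration;

public class WireSecurity {
    public static void main(String[] args) {
        Configuration conf = new Configuration();
        // SASL quality-of-protection for Hadoop RPC:
        // authentication | integrity | privacy (privacy = encrypted).
        conf.set("hadoop.rpc.protection", "privacy");
        // Encrypt block data moving between clients and DataNodes.
        conf.setBoolean("dfs.encrypt.data.transfer", true);
        System.out.println("RPC QoP: " + conf.get("hadoop.rpc.protection"));
    }
}
```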

9. Resource-slim:
- Minimal consumption of network bandwidth
- Minimal consumption of system resources, threads and processes

10. Universal:
- Hadoop agnostic – compatible across distributions
- Heterogeneous support – compatible across ecosystem



© hadoopsphere.com
