
'Big security data' is not 'Big Data security'

Last week, McAfee announced via a press release the publication of its ‘Needle in a Datastack’ report, which highlighted how organizations are unable to harness the power of Big Data for security purposes. However, by the time the wire reached the corners of the publishing world, the message in many quarters had been distorted into something like the following snippets:

Big data mismanagement a security risk: McAfee
Business Spectator-18-Jun-2013

Big data poses security challenge to businesses - McAfee

Big Data causes big problems for security
Infosecurity Magazine-18-Jun-2013

Blame the lack of technical expertise among wire-reading interns or news writers for the distorted messages. Many of these are reputable portals, and with the spike in social sharing, many readers will go no further than the headline in their Twitter or Facebook feed. Hey McAfee, your effort just bit the dust amid the rhetoric.

Anyway, time for us to clear the wire and set the record straight on the topic. While the two themes overlap, Big security data is not the same thing as Big Data security.
Put simply, Big Data security focuses on protecting and securing your Big Data technology and data assets. ‘Big security data’, on the other hand, is how McAfee explains it:
“To achieve real-time threat intelligence in an age where the volume, velocity and variety of information have pushed legacy systems to their limit, businesses must embrace the analysis, storage and management of big security data… With this need to identify complex attacks, organizations should go beyond pattern matching to achieve true risk-based analysis and modeling. Ideally, this approach should be backed by a data management system able to create complex real-time analytics. In addition to the ability to spot threats in real-time, organizations should have the ability to identify potentially sinister long-term trends and patterns.”
Examples of big security data analytics include:

- Correlating large volumes of DNS network traffic to identify anomalous DNS behavior and suspicious domains;
- Analyzing the types and sentiment of the communications to see behavior trends that may indicate an employee is upset with the company or his or her management;
- Analyzing large data volumes to more accurately derive a continuous risk score; and
- Corroborating a scenario with like events to avoid reaching incorrect judgments.
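The first use case above can be sketched with a toy example: correlating DNS responses to surface domains with an abnormally high resolution-failure rate, a common signature of malware that generates throwaway domains. The log format, threshold values, and domain-collapsing logic below are illustrative assumptions, not McAfee's method:

```python
from collections import Counter

# Hypothetical parsed DNS log records: (client_ip, queried_domain, response_code)
dns_events = [
    ("10.0.0.5", "example.com", "NOERROR"),
    ("10.0.0.5", "a1b2c3.badhost.io", "NXDOMAIN"),
    ("10.0.0.5", "d4e5f6.badhost.io", "NXDOMAIN"),
    ("10.0.0.7", "example.com", "NOERROR"),
    ("10.0.0.5", "g7h8i9.badhost.io", "NXDOMAIN"),
]

def suspicious_domains(events, nxdomain_threshold=0.8, min_queries=1):
    """Flag registered domains whose queries mostly fail to resolve --
    a crude indicator of domain-generation-algorithm (DGA) activity."""
    totals, failures = Counter(), Counter()
    for _, domain, rcode in events:
        base = ".".join(domain.split(".")[-2:])  # collapse to registered domain
        totals[base] += 1
        if rcode == "NXDOMAIN":
            failures[base] += 1
    return {d for d in totals
            if totals[d] >= min_queries
            and failures[d] / totals[d] >= nxdomain_threshold}

print(suspicious_domains(dns_events))  # {'badhost.io'}
```

At scale, the same per-domain aggregation maps naturally onto a MapReduce job over DNS logs.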

As per Fran Howarth, the following can be considered characteristics of the security information required for big data analytics:
• It comprises huge volumes of event and log data that accumulates quickly from a wide variety of data sources and it must be stored for long periods of time. Collection must be in real time and ongoing as new event data is constantly being generated.
• Although some data is in easy-to-digest structured form, the majority is unstructured, or semi-structured at best.
• The system used to collect the data must be capable of taking in data from a very diverse range of data sources and types, as well as from a diverse range of endpoints. This requires that the management system uses a common taxonomy, dictionary of terms and event profile schema based on industry standards so that such data sources can be compared directly.
• Event data must be captured once, as soon as it is generated; retained in its original form as the single source of truth; time stamped so that threats and related patterns can be found; and never changed, so that its integrity is maintained and it remains admissible as evidence. Event data must be reported on by time, which introduces storage and querying challenges that relational databases do not easily support. Queries need to be performed quickly, in real time, across massive data volumes spanning long time periods, for example to comply with regulatory requests.
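Howarth's point about a common taxonomy and event schema can be illustrated with a minimal sketch: two hypothetical sensor formats normalized into one timestamped record layout so that events from different sources become directly comparable. The field names and formats here are assumptions for illustration, not an industry-standard schema:

```python
from datetime import datetime, timezone

# Two hypothetical raw formats from different sensors.
firewall_raw = {"src": "10.0.0.5", "act": "DENY", "epoch": 1371513600}
ids_raw = {"attacker": "10.0.0.5", "sig": "SQLi", "ts": "2013-06-18T00:00:00Z"}

def normalize_firewall(raw):
    """Map a firewall record onto a common event schema (illustrative)."""
    return {
        "timestamp": datetime.fromtimestamp(raw["epoch"], tz=timezone.utc).isoformat(),
        "source_ip": raw["src"],
        "event_type": "firewall." + raw["act"].lower(),
    }

def normalize_ids(raw):
    """Map an IDS alert onto the same common schema."""
    return {
        "timestamp": raw["ts"].replace("Z", "+00:00"),
        "source_ip": raw["attacker"],
        "event_type": "ids." + raw["sig"].lower(),
    }

events = [normalize_firewall(firewall_raw), normalize_ids(ids_raw)]
# Once normalized, events from both sensors can be correlated by IP and queried by time.
```

The originals would be retained untouched as the single source of truth; only the normalized copies feed the analytics layer.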

In line with the complexities listed above, we elaborated earlier on how MapReduce and Big security data could be used to detect DDoS attacks. Based on the enormous interest that publication generated, people have come back to us asking for more details. A large proportion of those queries asked where to get massive-scale data for testing a custom MapReduce tool (for preventing DDoS). And imagine, on the contrary, what many organizations are doing: turning off their security alert data feeds to save on storage and ‘improve performance’. As McAfee points out, they are not only leaving a hole for bigger vulnerabilities, they are also missing an opportunity to capture Big security data that can power rich security analytics.
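To give a flavor of how such a MapReduce job works, here is a minimal in-process simulation of the map, shuffle, and reduce phases: counting requests per source IP and flagging any that cross a flood threshold. The log format and threshold are illustrative, and a real deployment would run these functions via Hadoop Streaming over far larger volumes:

```python
from itertools import groupby

# Hypothetical web-server log lines: "timestamp source_ip url"
log_lines = [
    "2013-06-18T10:00:01 198.51.100.9 /login",
    "2013-06-18T10:00:01 198.51.100.9 /login",
    "2013-06-18T10:00:02 203.0.113.4 /index",
    "2013-06-18T10:00:02 198.51.100.9 /login",
]

def mapper(line):
    """Emit (source_ip, 1) -- the same contract as a Hadoop Streaming mapper."""
    _, ip, _ = line.split()
    yield ip, 1

def reducer(ip, counts, threshold=3):
    """Flag IPs whose request count crosses a (tunable) flood threshold."""
    total = sum(counts)
    if total >= threshold:
        yield ip, total

# Simulate the shuffle phase in-process: sort emitted pairs, then group by key.
pairs = sorted(kv for line in log_lines for kv in mapper(line))
flagged = [out for ip, grp in groupby(pairs, key=lambda kv: kv[0])
           for out in reducer(ip, (c for _, c in grp))]
print(flagged)  # [('198.51.100.9', 3)]
```

A production job would count per IP *per time bucket* and compare against a learned baseline rather than a fixed threshold.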

When we published the DDoS study, another common query on Twitter and Sina Weibo concerned the fit for real-time scenarios, since Hadoop and MapReduce were primarily considered batch oriented. However, much progress has been made in the last two quarters. While integration architectures combining data in motion (streaming data) with data at rest have emerged, the other technology alternative of real-time Hadoop with engines such as Cloudera Impala also holds great promise. Our net recommendation: even if you don’t have a security analytics project on the hook right now, start storing the Big security data now so that you don’t lose out to the intruders later.
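To illustrate the data-in-motion side, here is a minimal sliding-window counter, the streaming analogue of a batch per-key count: each arriving event updates a live count while stale events are expired. The window size and keys are illustrative, and in production this role would be played by a stream processing engine rather than hand-rolled code:

```python
from collections import deque

class SlidingWindowCounter:
    """Count events per key over a moving time window (illustrative sketch)."""

    def __init__(self, window_seconds=60):
        self.window = window_seconds
        self.events = deque()   # (timestamp, key) in arrival order
        self.counts = {}

    def add(self, ts, key):
        """Record an event and return the live count for its key."""
        self.events.append((ts, key))
        self.counts[key] = self.counts.get(key, 0) + 1
        # Expire events that have fallen out of the window.
        while self.events and self.events[0][0] <= ts - self.window:
            _, old_key = self.events.popleft()
            self.counts[old_key] -= 1
        return self.counts[key]

counter = SlidingWindowCounter(window_seconds=60)
counter.add(0, "198.51.100.9")
counter.add(10, "198.51.100.9")
print(counter.add(90, "198.51.100.9"))  # 1 -- the earlier two events expired
```

This keeps detection latency in seconds instead of waiting for the next batch run, at the cost of holding the active window in memory.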

Top image source: Black Lotus

