
'Big security data' is not 'Big Data security'

Last week, McAfee announced via press release the publication of its 'Needle in a Datastack' report, which highlighted how organizations are unable to harness the power of Big Data for security purposes. However, by the time the wire reached the far corners of the publishing world, the message in many quarters had been distorted into something like the following snippets:

Big data mismanagement a security risk: McAfee
Business Spectator-18-Jun-2013

Big data poses security challenge to businesses - McAfee
Siliconrepublic.com-19-Jun-2013

Big Data causes big problems for security
Infosecurity Magazine-18-Jun-2013



Blame the lack of technical expertise among the interns and news writers reading the wire for these distorted messages. Many of these are reputable portals, and with the spike in social sharing, many readers go no further than the headline in their Twitter or Facebook feed. Hey McAfee, your effort just bit the dust amid the rhetoric.

Anyway, time for us to clear the wire and set the record straight. While there may be overlap between the two themes, Big security data is not the same as Big Data security.
Put simply, Big Data security focuses on protecting and securing your Big Data technology and data assets. 'Big security data', on the other hand, is what McAfee explains as:
“To achieve real-time threat intelligence in an age where the volume, velocity and variety of information have pushed legacy systems to their limit, businesses must embrace the analysis, storage and management of big security data… With this need to identify complex attacks, organizations should go beyond pattern matching to achieve true risk-based analysis and modeling. Ideally, this approach should be backed by a data management system able to create complex real-time analytics. In addition to the ability to spot threats in real-time, organizations should have the ability to identify potentially sinister long-term trends and patterns.”


Examples of such analytics in practice include:

- Correlating large volumes of DNS network traffic to identify anomalous DNS behavior and suspicious domains;
- Analyzing the types and sentiment of communications to see behavior trends that may indicate an employee is upset with the company or his or her management;
- Analyzing large data volumes to more accurately derive a continuous risk score; and
- Corroborating a scenario with like events to avoid reaching incorrect judgments.
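As a minimal sketch of the first bullet, the snippet below flags domains that are both rarely queried and high in character entropy, a common heuristic for spotting algorithmically generated (DGA) domains in DNS logs. The log format, thresholds, and field layout are assumptions for illustration, not anything prescribed by the report:

```python
import math
from collections import Counter

def shannon_entropy(s):
    """Character entropy of a domain label; DGA-style domains tend to score high."""
    counts = Counter(s)
    n = len(s)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def flag_suspicious(dns_log_lines, entropy_threshold=3.0, rare_count=2):
    """Flag domains that are both rarely queried and high-entropy.
    Assumed log format (hypothetical): "<timestamp> <client_ip> <queried_domain>"."""
    domain_counts = Counter()
    for line in dns_log_lines:
        _, _, domain = line.split()
        domain_counts[domain] += 1
    return [d for d, n in domain_counts.items()
            if n <= rare_count and shannon_entropy(d.split('.')[0]) > entropy_threshold]

logs = [
    "t1 10.0.0.5 mail.example.com",
    "t2 10.0.0.5 mail.example.com",
    "t3 10.0.0.6 x7k9q2zb1f.biz",
]
print(flag_suspicious(logs))  # ['x7k9q2zb1f.biz']
```

At Big-security-data scale, the per-domain counting would run as a distributed aggregation rather than an in-memory `Counter`, but the scoring logic stays the same.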


According to Fran Howarth, the following can be considered characteristics of the security information required for big data analytics:
• It comprises huge volumes of event and log data that accumulate quickly from a wide variety of data sources and must be stored for long periods of time. Collection must be real-time and ongoing, as new event data is constantly being generated.
• Although some data is in easy-to-digest structured form, the majority is unstructured, or semi-structured at best.
• The system used to collect the data must be capable of taking in data from a very diverse range of data sources and types, as well as from a diverse range of endpoints. This requires that the management system uses a common taxonomy, dictionary of terms and event profile schema based on industry standards so that such data sources can be compared directly.
• Event data must be captured once as soon as it is generated, retained in its original form as the single source of the truth, must be time stamped for security in order to find threats and related patterns, and must never be changed in order for its integrity to be maintained and for it to be admissible as evidence. Event data must be reported on based on time, which introduces storage and querying challenges that relational databases do not easily support. Queries need to be performed quickly in real time across massive data volumes spanning long time periods—for example, to comply with regulatory requests.
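The append-only, time-stamped requirement in the last bullet can be sketched as a toy in-memory event store: records are never mutated after capture, and an index kept sorted by timestamp makes time-range queries cheap. This is a hypothetical stand-in for illustration; a real deployment would use a purpose-built time-series or Hadoop-backed store:

```python
import bisect
from collections import namedtuple

Event = namedtuple("Event", ["ts", "source", "payload"])

class EventStore:
    """Append-only event store: events are captured once, never changed
    (preserving integrity/admissibility), and kept sorted by timestamp."""
    def __init__(self):
        self._events = []    # events sorted by ts
        self._ts_index = []  # parallel timestamp list for binary search

    def append(self, ts, source, payload):
        ev = Event(ts, source, payload)
        pos = bisect.bisect_right(self._ts_index, ts)
        self._ts_index.insert(pos, ts)
        self._events.insert(pos, ev)
        return ev

    def query_range(self, start_ts, end_ts):
        """All events with start_ts <= ts < end_ts."""
        lo = bisect.bisect_left(self._ts_index, start_ts)
        hi = bisect.bisect_left(self._ts_index, end_ts)
        return self._events[lo:hi]

store = EventStore()
store.append(100, "fw-1", "deny tcp/445")
store.append(105, "ids-2", "port scan")
store.append(300, "fw-1", "deny tcp/3389")
print([e.source for e in store.query_range(100, 200)])  # ['fw-1', 'ids-2']
```

The point of the sketch is the access pattern: time-bounded range queries over immutable records, which is exactly what relational row-update semantics do not optimize for.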





In line with the complexities listed above, we elaborated earlier on how MapReduce and Big security data can be used to detect DDoS attacks. Based on the enormous interest that publication generated, people have come back to us asking for more details. A large proportion of those queries asked where to obtain massive-scale data for testing a custom MapReduce tool (for preventing DDoS). Now imagine, on the contrary, what organizations are actually doing: turning off their security alert data feeds to save on storage and 'improve performance'. As McAfee points out, they are not only leaving a hole for a bigger vulnerability, they are also missing the opportunity to capture Big security data that can feed rich security analytics.
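The MapReduce approach referenced above can be simulated locally in a few lines: the mapper emits a count per source IP, the shuffle groups by IP, and the reducer flags sources whose request volume exceeds a flood threshold. The log format and threshold here are hypothetical, and the local sort stands in for Hadoop's shuffle phase; this is a sketch of the pattern, not our original tool:

```python
from itertools import groupby
from operator import itemgetter

def mapper(log_line):
    """Emit (source_ip, 1) per request.
    Assumed log format (hypothetical): "<ip> <timestamp> <request>"."""
    ip = log_line.split()[0]
    yield (ip, 1)

def reducer(ip, counts, threshold=100):
    """Sum per-IP request counts; flag sources above the flood threshold."""
    total = sum(counts)
    if total > threshold:
        yield (ip, total)

def run_job(log_lines, threshold=100):
    # local stand-in for the shuffle phase: sort, then group by key
    pairs = [kv for line in log_lines for kv in mapper(line)]
    pairs.sort(key=itemgetter(0))
    flagged = {}
    for ip, group in groupby(pairs, key=itemgetter(0)):
        for ip_out, total in reducer(ip, (c for _, c in group), threshold):
            flagged[ip_out] = total
    return flagged

logs = ["10.0.0.1 t GET /"] * 150 + ["10.0.0.2 t GET /"] * 3
print(run_job(logs, threshold=100))  # {'10.0.0.1': 150}
```

On a real cluster the same mapper and reducer would run via Hadoop Streaming or a native MapReduce job, with the framework handling the shuffle across nodes.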

When we published the DDoS study, another common query on Twitter and Sina Weibo concerned the fit for real-time scenarios, since Hadoop and MapReduce were primarily considered batch oriented. However, much progress has been made in the last two quarters. While integration architectures combining data in motion (streaming data) with data at rest have emerged, the alternative of real-time Hadoop with tools such as Cloudera Impala also holds great promise. Our net recommendation: even if you don't have a security analytics project on the hook right now, start storing Big security data now so that you don't lose out to the intruders later.
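To illustrate the data-in-motion side of that picture, here is a toy sliding-window counter of the kind a stream processor maintains continuously: it tracks per-source request rates over the most recent window and expires old events as new ones arrive. This is an assumed, simplified model; real streaming engines distribute and checkpoint this state:

```python
from collections import deque, defaultdict

class SlidingWindowCounter:
    """Count events per key over the last `window` seconds, streaming-style.
    A toy stand-in for the continuous state a stream processor keeps."""
    def __init__(self, window=60):
        self.window = window
        self.events = deque()           # (ts, key) in arrival order
        self.counts = defaultdict(int)  # live per-key counts in the window

    def add(self, ts, key):
        self.events.append((ts, key))
        self.counts[key] += 1
        # expire events that fell out of the window
        while self.events and self.events[0][0] <= ts - self.window:
            _, old_key = self.events.popleft()
            self.counts[old_key] -= 1

    def rate(self, key):
        return self.counts[key]

w = SlidingWindowCounter(window=60)
for t in range(0, 120, 2):  # one request every 2 seconds for 2 minutes
    w.add(t, "10.0.0.9")
print(w.rate("10.0.0.9"))   # 30 requests in the last 60 seconds
```

The contrast with the batch MapReduce job is the latency model: the batch job answers "who flooded us yesterday", while window state like this answers "who is flooding us right now".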


Top image source: Black Lotus
