
'Big security data' is not 'Big Data security'

Last week, McAfee announced through a press release the publication of its 'Needle in a Datastack' report, which highlighted how organizations are unable to harness the power of Big Data for security purposes. However, by the time the wire reached every corner of the publishing world, the message in many quarters had been distorted into snippets like the following:

Big data mismanagement a security risk: McAfee
Business Spectator-18-Jun-2013

Big data poses security challenge to businesses - McAfee
Siliconrepublic.com-19-Jun-2013

Big Data causes big problems for security
Infosecurity Magazine-18-Jun-2013



Blame the lack of technical expertise among wire-reading interns and news writers for the distorted messages. Many of these are reputed portals, and with the spike in social sharing, many readers will go no further than the headline in their Twitter or Facebook news feed. Hey McAfee, your effort just bit the dust amid the rhetoric.

Anyway, it is time for us to clear the wire and set the record straight. While the two themes may overlap, Big security data is not the same as Big Data security.
Put simply, Big Data security focuses on protecting and securing your Big Data technology and data assets. 'Big security data', on the other hand, is what McAfee describes as follows:
“To achieve real-time threat intelligence in an age where the volume, velocity and variety of information have pushed legacy systems to their limit, businesses must embrace the analysis, storage and management of big security data… With this need to identify complex attacks, organizations should go beyond pattern matching to achieve true risk-based analysis and modeling. Ideally, this approach should be backed by a data management system able to create complex real-time analytics. In addition to the ability to spot threats in real-time, organizations should have the ability to identify potentially sinister long-term trends and patterns.”


Examples of such Big security data analytics include:

- Correlating large volumes of DNS network traffic to identify anomalous DNS behavior and suspicious domains;
- Analyzing the types and sentiment of communications to spot behavior trends that may indicate an employee is upset with the company or his or her management;
- Analyzing large data volumes to derive a more accurate, continuous risk score; and
- Corroborating a scenario with like events to avoid reaching incorrect judgments.
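To make the first item concrete, here is a minimal sketch of anomalous-DNS detection in Python. The domain names, traffic shape, and z-score threshold are all illustrative assumptions, not part of any McAfee tooling; the idea is simply to flag domains whose query volume is a statistical outlier against the rest of the traffic.

```python
from collections import Counter
from statistics import mean, pstdev

def flag_anomalous_domains(dns_queries, z_threshold=3.0):
    """Flag domains whose query count is far above the population mean.

    dns_queries: iterable of (client_ip, domain) tuples from DNS logs.
    Returns the set of domains with a z-score above z_threshold.
    """
    counts = Counter(domain for _, domain in dns_queries)
    if not counts:
        return set()
    values = list(counts.values())
    mu, sigma = mean(values), pstdev(values)
    if sigma == 0:
        return set()
    return {d for d, c in counts.items() if (c - mu) / sigma > z_threshold}

# Hypothetical traffic: ten quiet domains, one queried far more often
queries = [(f"10.0.0.{i}", f"site{i:02d}.example")
           for i in range(10) for _ in range(5)]
queries += [("10.0.0.99", "evil-c2.example")] * 500
print(flag_anomalous_domains(queries))  # → {'evil-c2.example'}
```

A production system would of course run such a computation over distributed DNS logs rather than an in-memory list, but the statistical core is the same.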


As per Fran Howarth, the following can be considered characteristics of the security information required for big data analytics:
• It comprises huge volumes of event and log data that accumulates quickly from a wide variety of data sources and it must be stored for long periods of time. Collection must be in real time and ongoing as new event data is constantly being generated.
• Although some data is in easy-to-digest structured form, the majority is unstructured, or semi-structured at best.
• The system used to collect the data must be capable of taking in data from a very diverse range of data sources and types, as well as from a diverse range of endpoints. This requires that the management system uses a common taxonomy, dictionary of terms and event profile schema based on industry standards so that such data sources can be compared directly.
• Event data must be captured once, as soon as it is generated; retained in its original form as the single source of truth; time-stamped so that threats and related patterns can be located in time; and never changed, so that its integrity is maintained and it remains admissible as evidence. Reporting on event data by time introduces storage and querying challenges that relational databases do not easily support: queries need to run quickly, in real time, across massive data volumes spanning long time periods, for example to comply with regulatory requests.
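Several of the requirements above, a shared event schema, a mandatory timestamp, and an immutable original record, can be sketched with a frozen dataclass. The field names below are illustrative, not an industry-standard taxonomy:

```python
from dataclasses import dataclass, field
import time

@dataclass(frozen=True)  # frozen: records cannot be mutated after capture
class SecurityEvent:
    source: str          # originating device or data source
    event_type: str      # entry from a shared dictionary of event types
    raw: str             # original payload, retained verbatim as the source of truth
    timestamp: float = field(default_factory=time.time)  # capture time

evt = SecurityEvent(source="fw-01", event_type="conn.denied",
                    raw="DROP tcp 10.0.0.5:4444 -> 192.168.1.2:22")
# Any attempt to alter a field raises FrozenInstanceError,
# preserving integrity for later evidentiary use.
```

A real SIEM would enforce the taxonomy and retention in the storage layer rather than in application code, but the principle of capture-once, never-modify is the same.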





In line with the complexities listed above, we elaborated earlier on how MapReduce and Big security data can be used to detect DDoS attacks. Based on the enormous interest that publication generated, people have come back to us asking for more details. A large proportion of those queries asked where to get massive-scale data for testing a custom MapReduce tool (for preventing DDoS). And imagine what organizations are doing instead: turning off their security alert data feeds to save on storage and 'improve performance'. As McAfee points out, they are not only leaving a hole for bigger vulnerabilities; they are also missing an opportunity to capture Big security data that can be used for rich security analytics.
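For readers who asked about the DDoS use case, the shape of such a job can be conveyed with a minimal MapReduce-style sketch. Plain Python stands in for a Hadoop job here, and the log format (source IP first, common-log style) and request threshold are assumptions for illustration:

```python
from collections import defaultdict

def map_phase(log_lines):
    """Emit (source_ip, 1) for each request in web-server log lines."""
    for line in log_lines:
        ip = line.split()[0]          # assumes the source IP is the first field
        yield ip, 1

def reduce_phase(pairs, threshold=100):
    """Sum counts per IP and flag those above the request threshold."""
    totals = defaultdict(int)
    for ip, n in pairs:
        totals[ip] += n
    return {ip: c for ip, c in totals.items() if c > threshold}

# Hypothetical access log: one IP hammering the server, one behaving normally
logs = ['203.0.113.9 - - "GET / HTTP/1.1" 200'] * 150 + \
       ['198.51.100.7 - - "GET /about HTTP/1.1" 200'] * 20
print(reduce_phase(map_phase(logs)))  # → {'203.0.113.9': 150}
```

In an actual Hadoop deployment the two phases would run as mapper and reducer over log files in HDFS; the per-IP aggregation logic carries over unchanged.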

When we published the DDoS study, another common query on Twitter and Sina Weibo concerned the fit for real-time scenarios, since Hadoop and MapReduce were primarily considered batch oriented. However, much progress has been made in the last two quarters. Alongside the emergence of integration architectures combining data in motion (streaming data) with data at rest, the alternative of real-time Hadoop with technologies such as Cloudera Impala also holds great promise. Our net recommendation: even if you don't have a security analytics project on the hook right now, start storing your Big security data now so that you don't lose out to the intruders later.
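For the data-in-motion side, a sliding-window counter captures the flavor of real-time burst detection. The window size and limit below are illustrative, and a production deployment would use a streaming engine rather than this in-process sketch:

```python
from collections import deque

class SlidingWindowDetector:
    """Flag a source when its events in the last `window` seconds exceed `limit`."""
    def __init__(self, window=10.0, limit=50):
        self.window, self.limit = window, limit
        self.events = {}  # source -> deque of event timestamps

    def observe(self, source, ts):
        q = self.events.setdefault(source, deque())
        q.append(ts)
        while q and ts - q[0] > self.window:   # evict events outside the window
            q.popleft()
        return len(q) > self.limit             # True => burst detected

det = SlidingWindowDetector(window=10.0, limit=50)
# Hypothetical burst: 60 events from one source within 6 seconds
alerts = [det.observe("10.0.0.5", t * 0.1) for t in range(60)]
print(any(alerts))  # → True (the limit of 50 is crossed mid-burst)
```

The same windowed aggregation is what streaming frameworks express declaratively; the point of the sketch is only that per-event evaluation makes detection real time rather than batch.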


Top image source: Black Lotus
