Strata + Hadoop World for you and for me

Big Data has a big marketing problem: it’s been sold as a universal game-changer, but is perceived by some in the general public as overblown hype or a toy for the exclusive benefit of those few companies that can already afford it.
The situation hasn’t been helped by the recent revelations of widespread NSA communications monitoring, which have added to the not-unreasonable fears that always existed around mass-scale data collection.
Industry insiders see the immense progress being made on the technology front and understand the transformative power those shifts hold for society. But society, on the whole, may harbor some doubts.
It’s refreshing, then, to see that this year’s Strata + Hadoop World conference is taking these concerns seriously and making a concerted effort to address them.
Industry ethics, data security, and privacy issues are among the main focuses of this year’s conference. Perhaps more importantly, the event, already a dependable weathervane for big data development, embodies the internal recognition that the industry, for the sake of its own growth as much as society’s, needs to make meaningful results accessible to the broader market.
Judging from the official speaker list, the conference reflects the diverse present-day landscape of big data innovation and makes looking forward to a more inclusive future one of its main objectives. Corporate executives will appear alongside the startup CTOs now producing the insights that power many enterprise big data solutions. Both will offer their perspectives in a setting shared by journalists, historians, and university experts invited to contextualize the industry’s achievements and frame its challenges moving forward.
One of the main points of consensus likely to emerge from the integrated lineup of workshops, panel discussions, presentations, and demonstrations is that the industry’s upside lies in looking inward. Even as fundamental tools like Hadoop, Cassandra, Storm, Spark/Shark, and Drill extend the broader possibilities of big data further and further, smaller, more specialized services have emerged to deliver usable insights for businesses, governments, and organizations in general. Looking ahead to the next few years, it’s that new bottom-up dynamic that holds some of the industry’s strongest promise.
Accordingly, a recent IDC report predicts big data, as an industry, will ride 27% compound annual growth to reach $32.4 billion globally by 2017. According to O’Reilly statistics, data science job postings have already jumped 89% year-over-year, with data engineering openings rising by 38%. Gartner placed “advanced, pervasive, invisible analytics” at #4 on its list of top strategic IT trends for 2015, a prediction informed by the increasing ubiquity of mobile computing devices and the standardization of built-in analytics within the mobile app industry. Gartner also noted that the era of the smart machine is upon us and predicted that it will be the most disruptive in the history of IT.
Strata + Hadoop World is the place to understand this process and its far-reaching implications. Big tech brands were to be expected, but even at first glance, the sheer range of industries in attendance speaks volumes. Iconic names from banking, manufacturing, energy, utilities, and telecom have all registered for the conference, eager to make connections with the smaller software services also on display and to learn what the newly stratified face of big data could mean in their respective fields.
Big data is finally beginning to deliver innovation that carries huge impact for the world, and for you and for me. So let’s gear up to hear what the Hadoop fraternity has to say about it and how the world responds.

About the author:



Sundeep Sanghavi is the CEO and Co-Founder of DataRPM, an award-winning industry pioneer in smart machine analytics for big data.

