Executive Viewpoint 2016 Prediction: Altiscale – The Year that Big Data Goes Big Time

In 2015, leading-edge organizations began investing in Apache Hadoop and saw real business use cases deliver positive results. This early traction inspired other organizations to experiment with Hadoop, and we are now seeing Big Data projects graduate from the early adopter phase.

Hadoop Grows Faster by Going to the Cloud

While many initial Hadoop implementations were on-premises, more companies are moving Big Data to the cloud. A recent Gartner survey showed that over fifty percent of current and future Hadoop deployments will include some form of cloud delivery. Cloud-based Hadoop solutions shoulder the most difficult parts of Big Data (the infrastructure, software management, and operations) so that customers can focus on getting greater business value from their data than ever before. I predict that, in 2016, we will see the cloud model for Hadoop gain critical traction and recognition in the market.

Data Governance Rises

With Hadoop adoption growing, Big Data is now recognized as increasingly mainstream. As new inflows of data appear, we will see increasing demands to better understand where the data comes from, what it is, and how best to use it.

Jumping the “Skills Gap” with Fully Managed Cloud Services

Growth in Hadoop demand will exceed the growth in the talent pool. According to a recent AtScale survey, 61 percent of respondents view the limited talent pool as the biggest challenge to adoption. To bypass the need to hire more data scientists and Hadoop admins from a highly competitive field, organizations will choose fully managed cloud services that include operational support. This frees up existing data science teams to focus their talents on analysis instead of spending valuable time wrangling complex Hadoop clusters.

Spark Ignites

Hadoop deployments will increasingly leverage Apache Spark as a programming framework, due to its versatility and performance benefits. This year, Spark began to move from a topic of conversation to actual usage. In 2016 we will see businesses create real Spark and Hadoop use cases. As Spark continues to rapidly evolve and mature, organizations tapping the technology will need to keep pace with updates and upgrades. On-premises Spark deployments make this difficult, requiring time, resources, and the ability to adapt to feature leaps. Cloud deployments, on the other hand, will absorb these updates and upgrades, so users don't need to worry about falling behind.
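
To make the "programming framework" point concrete, here is a minimal sketch of a Spark job over data stored in HDFS, written with Spark's Python API. The HDFS paths and application name are hypothetical, and this is only an illustration of the programming style, not a production configuration.

```python
from pyspark import SparkContext

# Start a Spark context; on a Hadoop cluster this job would typically
# be launched with spark-submit and run under YARN.
sc = SparkContext(appName="WordCountSketch")

# Read text files from HDFS (hypothetical input path).
lines = sc.textFile("hdfs:///data/logs/*.txt")

# Classic word count: split lines into words, emit (word, 1) pairs,
# and sum the counts for each word across the cluster.
counts = (lines.flatMap(lambda line: line.split())
               .map(lambda word: (word, 1))
               .reduceByKey(lambda a, b: a + b))

# Write the results back to HDFS (hypothetical output path).
counts.saveAsTextFile("hdfs:///data/output/word_counts")
sc.stop()
```

The same few lines express a distributed computation that would otherwise require a hand-written MapReduce job, which is the versatility and performance argument in a nutshell.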

Standards Emerge and Rule

In 2016, enterprises will start reaping the benefits of Hadoop ecosystem standards. For example, 2015 saw the launch of the Open Data Platform Initiative (ODPi), now overseen by the Linux Foundation, which established standards for how key projects in the Big Data ecosystem can work together. Driven by the need for standards, ODPi doubled its membership over the course of the past year, and more growth and recognition are widely expected in 2016. With this growth, new technologies and applications will meet the Hadoop ecosystem standards being established by the ODPi.

Datameer CEO Stefan Groschupf plainly expressed the benefits of standards: “the Hadoop market needs more standardization so that whatever software a company develops or buys, they can be confident that it will work on the platform year after year.” Applications adhering to ODPi standards will run easily on any Hadoop distribution that meets ODPi specifications.

As the enterprise re-architects itself for the modern world, the demonstrated value of Big Data projects that we saw in 2015 will inspire organizations to accelerate and expand Big Data projects in the coming year. 2016 will be the year that Big Data goes Big Time, with much of this expansion moving to the Cloud.

Altiscale
