
Executive Viewpoint 2017 Prediction: MapR Technologies

The Long-standing Wall Between Real-time Operations and Fast Analytics Is About to Disappear Forever

In this article, I would like to propose that trends shaping up over the past few years are about to create a perfect storm, one that will break the rules held sacred by enterprise software developers and enterprise software users. A new era of enterprise replatforming is about to begin. This creates a once-in-30-years opportunity for established businesses to radically disrupt themselves, or face the all-too-real prospect of being disrupted.

We saw the early weather pattern of this storm forming over the past few years. Airbnb took nine years to become a $30B company, Uber took seven years to reach $68B, and Snapchat took a mere five years to reach $25B. Never before in the history of the high-technology industry have we seen such a colossal rate of value creation.

These companies cracked the code by defying conventional wisdom, embracing the bleeding edge, and managing the chaos that comes with it. To blaze the trail, they had to assemble a complex smorgasbord of components, and attract and retain the best and brightest with visions of hot pre-IPO stock options.

However, this do-it-yourself approach results in a crisis of complexity that makes it nearly impossible for the rest of the world to go beyond a handful of successful use cases.

The do-it-yourself approach is driving a crisis of complexity

The good news is that technology trends in compute, storage, and networking, combined with the state of the art in distributed systems software and AI techniques, are about to democratize this secret sauce. It is about to become much easier, and potentially even more disruptive systems will become available to help challenge these new incumbents. The figure below shows the requirements that such a “New Stack” would have to provide.

The New Stack

Is it possible to build such a new stack? Before we answer that question, let’s take a small detour to understand the trends in Network, Storage and Compute.

Network

Processing bottlenecks caused by network limitations are shrinking as distributed data processing systems (such as Apache Spark, Apache Flink, and Apache Apex) move compute to the data, and as leaf-spine architectures continue to offer cost-effective 10G/25G/40G/100G systems.
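
To make the compute-to-data idea concrete, here is a minimal PySpark sketch. It assumes a Spark cluster with access to HDFS; the paths and the "event_date" field are hypothetical. Spark’s scheduler prefers to run each task on a node holding the corresponding HDFS block, so only the small aggregated result, not the raw data, crosses the network.

    # Minimal PySpark sketch: the aggregation runs where the data lives.
    # The HDFS paths and the "event_date" field are illustrative assumptions.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("data-local-agg").getOrCreate()

    # Spark asks HDFS for block locations and schedules node-local tasks,
    # so the full dataset never traverses the network.
    events = spark.read.json("hdfs:///data/events")

    # Only this small aggregate is shuffled and written back out.
    daily_counts = events.groupBy("event_date").count()
    daily_counts.write.parquet("hdfs:///data/daily_counts")

    spark.stop()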

Storage

Today the largest-capacity storage drive is an SSD (Samsung ships 16TB; Seagate has announced 60TB), not a spinning disk drive (the largest is 10TB). This removes the hardware storage bottleneck. Media trends also portend that SSDs will beat out disks on total cost of ownership in 2017.

Compute

Modern data center CPUs and GPUs are evolving to offer systems optimized for complex cryptography, security, and deep learning. The die is cast for the next few years of silicon innovation, and the engineers who understand it and write software to leverage it are the only ones who will survive.

Decades of enterprise software wisdom and vendor folklore have trained IT professionals to hold the following as self-evident truths:

  1. Fast, transactional, real-time operational systems must use relational databases. If your scale exceeds the capabilities of such systems, you must keep your ambitions in check and wait for the next silicon cycle, which may provide incrementally bigger but significantly more expensive systems.
  2. Large-scale analytics can at best be provided on a daily basis, but most commonly every few days.

Historic Rationale: Software systems were optimized either for analytics (high throughput, scan patterns, high latency) or for operations (low latency, random access patterns, lower throughput) – but never both.

Today’s Reality: With new hardware and sophisticated scale-out designs, systems can now be built that support large-scale analytics AND large-scale operations in real time on the same system. These new platforms are generally called converged data platforms. The analyst firm Gartner anticipated this need and coined its own term for it: HTAP (Hybrid Transactional/Analytical Processing).
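
To illustrate the access-pattern contrast that converged platforms collapse, here is a deliberately tiny sketch using Python’s built-in sqlite3 purely as a stand-in: the same table serves a low-latency operational point lookup and a full-scan analytical aggregate. A real converged data platform runs both workloads concurrently at cluster scale; nothing below is specific to any vendor’s product.

    # Toy illustration (not a converged platform): one table, two access patterns.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute(
        "CREATE TABLE orders (id INTEGER PRIMARY KEY, region TEXT, amount REAL)"
    )
    conn.executemany(
        "INSERT INTO orders (region, amount) VALUES (?, ?)",
        [("west", 10.0), ("east", 25.5), ("west", 7.25)],
    )

    # Operational pattern: low-latency random access by primary key.
    print(conn.execute("SELECT region, amount FROM orders WHERE id = ?", (2,)).fetchone())

    # Analytical pattern: high-throughput scan and aggregate over the same data.
    for region, total in conn.execute(
        "SELECT region, SUM(amount) FROM orders GROUP BY region"
    ):
        print(region, total)

    conn.close()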

  3. Interconnecting operational systems to exchange data and make joint decisions requires expensive, custom message-passing systems that are limited in scale and throughput.

Historic Rationale: Store-and-forward messaging systems are slow, so specialized messaging systems are needed to provide high-speed message queues (but these are classically point-to-point and low in throughput).

Today’s Reality: With advances in memory and networking, the new generation of store-and-forward event streaming systems can address the large majority of legacy use cases and offer a robust platform for multi-site, multi-consumer consumption of large-scale data exhaust. MapR Streams and Apache Kafka are two examples of such systems.
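
A minimal sketch of the publish/subscribe pattern these systems provide, written against the open-source kafka-python client; the broker address and the topic name "events" are assumptions. MapR Streams exposes a Kafka-compatible API, so equivalent code applies there.

    # Minimal pub/sub sketch with kafka-python.
    # Broker address and topic name are illustrative assumptions.
    from kafka import KafkaProducer, KafkaConsumer

    producer = KafkaProducer(bootstrap_servers="localhost:9092")
    producer.send("events", b'{"user": 42, "action": "login"}')
    producer.flush()  # ensure the message reaches the broker

    # Any number of consumers, across sites or consumer groups,
    # can replay the same stream independently.
    consumer = KafkaConsumer(
        "events",
        bootstrap_servers="localhost:9092",
        auto_offset_reset="earliest",
        consumer_timeout_ms=5000,  # stop iterating after 5s of silence
    )
    for message in consumer:
        print(message.topic, message.offset, message.value)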

  4. If you change your business operations and schema, be prepared for a lengthy, complex process to incorporate these modifications into your analytical systems. Until then, either the analytics will break or they won’t provide visibility into the impact of the new changes.

Historic Rationale: Relational, schema-based systems are the best way to optimize for flexibility and access, but they demand significant IT intervention to create and maintain schemas. An operational schema is unsuited for analytical use cases, and vice versa. This forces schema transformation and ETL, which in turn causes delays.

Today’s Reality: Tools like Apache Drill, along with a whole new data-wrangling industry, are making interactive data exploration a reality. Advanced schema-discovery and metadata-caching techniques in tools like Apache Drill allow for agility and schema changes on the fly.
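
For instance, Drill can run SQL directly over raw JSON files, inferring the schema at read time, with no table definition or ETL step. The sketch below uses Drill’s REST endpoint; the host, port, file path, and field names are assumptions for illustration.

    # Sketch: schema-on-read SQL over raw JSON files via Drill's REST API.
    # Host, port, path, and field names are illustrative assumptions.
    import requests

    query = (
        "SELECT t.`user`.`id` AS user_id, t.`action` "
        "FROM dfs.`/data/clickstream/2017` t "
        "LIMIT 10"
    )
    resp = requests.post(
        "http://localhost:8047/query.json",
        json={"queryType": "SQL", "query": query},
        timeout=30,
    )
    resp.raise_for_status()
    for row in resp.json().get("rows", []):
        print(row)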

  5. Data has gravity, and therefore you must choose between on-premises and cloud; you cannot have it both ways.

Historic Rationale: Moving big data is painful and lengthy, and there are no robust tools to move it in real time. It is hard enough to ingest real-time data into one system; it is impossible to do so across geographies. Even if you succeed in moving the data, applications are hard to port from one environment to the other.

Today’s Reality: Amazon’s Snowball and the recently announced Snowmobile now provide ways to move existing big data to the cloud. Coupled with data platforms that offer a wide menu of data movement and synchronization services, from scheduled periodic transfers, to master/slave near-real-time replication, to multi-master real-time replication, this brings unprecedented flexibility in moving all or subsets of data and makes the data gravity argument weak. Application portability is now a solved problem thanks to containerized microservices architectures.

Never before have compute, storage, networking, and distributed-systems paradigms all evolved so radically at the same time. It is time to embrace this golden era of technology innovation and disrupt your industry of choice.
