Sunday, May 5, 2024

Executive Viewpoint 2019 Prediction: Scality – Storage Must Adapt to the Edge and for AI

One constant with storage is that it’s always growing – and it must meet ever-higher standards for 24/7 business demands across all types of data in hybrid multi-cloud environments. Storage in turn must adapt to meet new demands for infinite scalability, availability and performance while delivering meaningful returns on infrastructure investments. Here are our predictions for 2019:

AI is storage-hungry, but you can’t use what you can’t find

The ability to aggregate and orchestrate the data generated by artificial intelligence and machine learning across a variety of sources will be critical. Meeting that need, robust scale-out storage solutions will fuel the momentum of AI and ML by paving the way for storing and processing the vast amounts of unstructured data they generate. Metadata search spanning a single, navigable namespace and automatic metadata tagging will emerge as cornerstone capabilities.
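To make the idea concrete, here is a minimal sketch of automatic metadata tagging and search over a single namespace. The class, object keys and tag fields are illustrative assumptions, not any product's API; a production system would index metadata in a searchable store rather than an in-memory dict.

```python
class MetadataIndex:
    """Toy single-namespace index: auto-tags objects and searches by metadata."""

    def __init__(self):
        self.index = {}  # object key -> metadata dict

    def put(self, key, **metadata):
        # Derive tags automatically from the key itself (extension, top-level
        # prefix), then merge in any caller-supplied metadata.
        auto = {
            "extension": key.rsplit(".", 1)[-1] if "." in key else "",
            "prefix": key.split("/", 1)[0] if "/" in key else "",
        }
        self.index[key] = {**auto, **metadata}

    def search(self, **criteria):
        # Return every key whose metadata matches all given field=value pairs.
        return [key for key, md in self.index.items()
                if all(md.get(field) == value for field, value in criteria.items())]
```

For example, `idx.search(extension="jpg")` finds every image across the namespace regardless of which application wrote it – the kind of query that turns a data lake from write-only storage into a usable AI/ML source.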

The ‘Edge’ continues to grow—in size, and importance

Consolidation, IoT and the growing work-from-home trend all point to the growth of the ‘edge’. Storage and protection of data that’s used, modified and/or generated in edge/ROBO environments or other distributed locations require careful planning: as remote data collection and generation become more prevalent, centralized, secure data storage and high-performance networks become ever more critical as data is pulled from remote sites to central datacenters, whether those are public cloud, private cloud or traditional on-premises datacenters.

In 2019, geo-distributed data storage with efficient resiliency models – such as cross-geo erasure coding – will make data safer and more readily available whenever and wherever authorized users need it. Additionally, centralized backup and recoverability will require solid network infrastructure along with resilient, efficient central storage that makes performing backups and retrieving backed-up data painless for remote users and central administrators alike.
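A minimal sketch of the resiliency idea, assuming the simplest possible scheme: split an object into k data shards plus one XOR parity shard, so any single lost shard (say, a failed site) can be rebuilt from the survivors. Production geo-distributed systems use Reed-Solomon codes that tolerate multiple simultaneous losses, but the recovery principle is the same.

```python
def encode(data: bytes, k: int) -> list:
    """Split data into k data shards and append one XOR parity shard."""
    shard_len = -(-len(data) // k)                 # ceiling division
    padded = data.ljust(shard_len * k, b"\x00")    # zero-pad the last shard
    shards = [padded[i * shard_len:(i + 1) * shard_len] for i in range(k)]
    parity = bytearray(shard_len)
    for shard in shards:                            # parity = XOR of all shards
        for i, b in enumerate(shard):
            parity[i] ^= b
    return shards + [bytes(parity)]

def rebuild(shards: list) -> list:
    """Recover at most one missing shard (marked None) by XOR-ing the rest."""
    missing = [i for i, s in enumerate(shards) if s is None]
    if len(missing) > 1:
        raise ValueError("XOR parity can rebuild only one lost shard")
    if missing:
        survivors = [s for s in shards if s is not None]
        rebuilt = bytearray(len(survivors[0]))
        for s in survivors:
            for i, b in enumerate(s):
                rebuilt[i] ^= b
        shards[missing[0]] = bytes(rebuilt)
    return shards
```

Placing each shard in a different region means a whole-site outage costs exactly one shard, which the remaining sites can regenerate without any full replica of the data existing anywhere. (A real system would also record the original object length rather than relying on stripping zero padding.)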

Economies of OPEX as well as CAPEX must be strong at the data collection, retrieval and long-term retention stages for valuable data. Centralized IT teams will look for low-overhead distributed solutions. Overall business value will continue to be key.

‘Cloud’ becomes ‘Clouds’

Multi-cloud and hybrid cloud are the new ‘cloud’ as awareness of data as a major enterprise asset continues to increase. Today’s common model is a hybrid IT approach: corporations selectively and consciously use their on-premises IT resources for some workloads while moving others, and their data, to one or more public clouds. Realizing that no single cloud offers the best in every category, enterprises will continue to use multiple public cloud services across their applications and workloads. Vertical specialization of cloud services and regional offerings will make it commonplace for enterprises to have workloads and data managed across multiple clouds, both public and private. This will require solutions that simplify and automate data workflows across the multi-cloud environment.
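One way to picture such automation is a policy table that decides where a workload’s data should land. The cloud names, workload fields and rules below are illustrative assumptions, not any vendor’s API – just a sketch of first-match-wins placement.

```python
# Policy-driven placement across clouds: first matching rule wins.
PLACEMENT_POLICY = [
    (lambda w: w.get("sensitive"), "on-prem-private"),        # keep regulated data home
    (lambda w: w.get("type") == "ml-training", "public-cloud-a"),  # burst compute
    (lambda w: True, "public-cloud-b"),                       # default tier
]

def place(workload: dict) -> str:
    """Return the cloud this workload's data should be stored in."""
    for predicate, cloud in PLACEMENT_POLICY:
        if predicate(workload):
            return cloud
```

The point of centralizing rules like these, rather than hard-coding a destination per application, is that adding a new cloud or changing a compliance rule becomes a one-line policy edit instead of a per-workload migration project.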

Open Source is on the Enterprise requirements list

Enterprises have been embracing open source because of two key virtues it brings: it encourages and accelerates innovation, and it provides an escape hatch from vendor lock-in. Open source gained even more enterprise credibility in Q4 2018, with IBM’s acquisition of Red Hat being a shining example. Enterprises will continue to evaluate products of all kinds on features, function, support and roadmap, but open source will move from fringe to RFP as enterprise development teams look to collaborate with developers – and to be able to take the reins if need be – for mission-critical applications. There will likely be more acquisitions as vendors strive to add open source to their competencies and to back their revenue-producing offerings.

With Red Hat swallowed and Ubuntu looking for outside investment, the Linux desktop will finally cease to be a topic of discussion.

Scality

Giorgio Regni
As Scality’s co-founder and Chief Technology Officer, Giorgio Regni oversees the company’s development, research, and product management. He is a recognized expert in distributed infrastructure software at web scale and has authored multiple US patents for distributed systems. Prior to Scality, Giorgio was a co-founder and VP of Engineering at Bizanga, where he developed anti-abuse software that still protects hundreds of millions of mailboxes across the world. Giorgio holds an engineering degree in computer science from INSA (Institut National des Sciences Appliquées) in Toulouse, France. He is also an accomplished hacker and developer. In his spare time, Giorgio has created mobile phone applications that are currently in use by an installed base of more than 2 million people. On an artistic note, Giorgio is a skilled electric guitar player, drawing his inspiration from guitar legends like Joe Satriani and Steve Vai.
