
As Complexity Rises, Software Simplifies by Connecting Different Storage Systems

The past few years have brought unprecedented innovation and consolidation in the storage industry. Flash memory and NVMe technologies are rewriting performance standards, while cloud and Infrastructure-as-a-Service offerings deliver greater cost savings for numerous companies. As enterprises adopt flash and cloud storage alongside their SAN and NAS systems, what is really needed now is a way to unite these different systems and automatically place data on the storage type that best meets changing business needs.

A recent survey of IT professionals underscores just how much complexity storage administrators are managing today. Over half of the respondents manage 10 or more different types of storage, and one-third reported overseeing more than 20 different storage resources. While 36 percent of those surveyed reported having one or two storage vendors, nearly two-thirds reported using storage systems from three to nine different providers.


All of these different technologies are being adopted to serve the demands of different types of data. Mission-critical and virtualized applications benefit greatly from the fast response times of low-latency flash. Business-critical application needs often vary, making the automated tiering and consolidation benefits of hyperconverged systems attractive. Scale-out systems cost-effectively expand capacity for the growing data sets that modern analytics applications consume. Cloud storage services reduce the amount of hardware IT has to manage, lowering datacenter CapEx and OpEx. In addition to these new storage systems, many legacy applications still require the services offered by traditional SAN and NAS storage systems.

The problem is that each of these storage systems is a new silo that traps data. Because IT has not been able to move data easily, administrators must take a bottom-up approach to provisioning, mapping applications to fixed pools of capacity in an effort to identify the best fit for business requirements. To avoid delays once data's storage is provisioned, both IT and application owners commonly provision at least twice the capacity they expect to need. The result is inefficient storage consumption, a large datacenter footprint with costs to match, and a sprawl of storage systems, many of which require specialized skills to maintain.
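To make the cost of that habit concrete, here is a back-of-the-envelope sketch; the application sizes below are hypothetical, and only the two-times overprovisioning factor comes from the scenario above.

# Back-of-the-envelope look at 2x overprovisioning across several applications.
# All capacities are hypothetical and only illustrate the arithmetic.
expected_need_tb = [50, 120, 30, 200, 75]   # what each application actually uses
overprovision_factor = 2.0                  # "at least twice its expected need"

provisioned_tb = [need * overprovision_factor for need in expected_need_tb]
idle_tb = sum(provisioned_tb) - sum(expected_need_tb)

print(f"Provisioned:        {sum(provisioned_tb):.0f} TB")   # 950 TB
print(f"Actually used:      {sum(expected_need_tb):.0f} TB") # 475 TB
print(f"Purchased but idle: {idle_tb:.0f} TB")               # 475 TB sitting unused

In this illustration, half of every terabyte purchased sits idle by design, which is exactly the inefficient consumption and oversized footprint described above.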

As storage types proliferate, so does the need for more storage capacity. In a recent Worldwide Enterprise Storage Systems Forecast, IDC predicted that storage capacity will grow at 41 percent per year. Much of the demand behind this growth can be attributed to the recent increase in personal devices and the content they produce and consume, data from the Internet of Things (IoT), and Big Data analytics.
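To put that rate in perspective, here is a quick compounding sketch; the 1 PB starting point is hypothetical, and only the 41 percent annual growth rate comes from the IDC forecast cited above.

# Compound capacity growth at 41 percent per year.
capacity_pb = 1.0   # hypothetical starting capacity
for year in range(1, 6):
    capacity_pb *= 1.41
    print(f"Year {year}: {capacity_pb:.2f} PB")
# Year 2 is ~1.99 PB and year 5 is ~5.57 PB: at this rate, capacity roughly
# doubles every two years and more than quintuples in five.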

There is certainly a lot of talk about data growth rates, but the real problem isn't how to store all this data. It's how to control the cost and complexity of the datacenter while gaining the agility to proactively meet evolving business needs. Until recently, however, there hasn't been an easy way to integrate the diverse capabilities of these different storage investments.

Overcome Storage Silos with Data Virtualization

Data virtualization enables enterprises to integrate existing and new storage solutions into a single global dataspace to overcome storage silos. This reduces the workload and cost of maintaining separate storage systems, while enabling software to automatically and non-disruptively move data as business needs change. As data is aligned to the resource that serves its actual needs, enterprises no longer need to excessively overprovision to protect performance and capacity when applications get hot. They also no longer need to overspend on this performance and capacity for multiple separate storage systems.

Data virtualization brings to storage the same efficiencies that server virtualization brought to compute, simplifying management and reducing costs. When the physical location of data is abstracted from the underlying hardware, storage types with different access protocols across file, block, and object storage can be united within a single global dataspace. This storage-agnostic approach lets software automatically and non-disruptively place data on the lowest-cost storage that meets business objectives for performance, price, and protection.
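The placement logic described here is easiest to see in a small sketch: each data set carries objectives, each storage tier advertises capabilities and cost, and software picks the cheapest tier that satisfies the objectives. The following is a minimal illustration of that idea, not any vendor's actual implementation; the tier names, objectives, and numbers are all hypothetical.

from dataclasses import dataclass

@dataclass
class Tier:
    name: str
    max_latency_ms: float   # typical read latency the tier can deliver
    durable_copies: int     # protection level the tier provides
    cost_per_gb: float      # relative monthly cost

@dataclass
class Objectives:
    max_latency_ms: float
    min_copies: int

TIERS = [
    Tier("nvme-flash", 0.2, 2, 0.30),
    Tier("hybrid-nas", 5.0, 2, 0.10),
    Tier("cloud-object", 50.0, 3, 0.02),
]

def place(objectives: Objectives) -> Tier:
    """Return the lowest-cost tier that meets the stated objectives."""
    candidates = [t for t in TIERS
                  if t.max_latency_ms <= objectives.max_latency_ms
                  and t.durable_copies >= objectives.min_copies]
    if not candidates:
        raise ValueError("no tier satisfies these objectives")
    return min(candidates, key=lambda t: t.cost_per_gb)

# A latency-sensitive database lands on flash; a cold archive lands in the cloud.
print(place(Objectives(max_latency_ms=1.0, min_copies=2)).name)    # nvme-flash
print(place(Objectives(max_latency_ms=100.0, min_copies=3)).name)  # cloud-object

In a production system the software would also re-evaluate placement as objectives or tier load change and migrate data non-disruptively, which is the automatic data mobility described in the next paragraph.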

Data virtualization also offers global visibility into all available storage resources. Combined with automatic data mobility, this helps IT maximize the use of existing hardware and reduce purchasing and maintenance costs. And because data virtualization pools storage resources, IT can add capacity as needed to scale with growth, in real time, rather than overspending years in advance of projected need.

Enterprises have numerous storage options; what they really need now is a platform that connects the unique capabilities of their different and existing storage types. Data virtualization delivers the agility, speed, and cost-effective services required for today's rapidly evolving business environment. I believe it represents the next leap forward in storage simplification and automation for the IT industry, and the next big step toward the software-defined datacenter. Amid the countless storage options available today, the ability to connect different storage types into a single ecosystem provides unprecedented control and choice for enterprise IT teams. As businesses automatically align storage supply with data demands, storage teams finally gain insight into the needs of the data on their storage, and can ensure the right data is in the right place at the right time.

Primary Data
