Data Orchestration Simplifies Enterprise Storage
Billions of dollars are invested in storage every year to keep up with our appetites for data on demand. Historically, it has been difficult to move data, so companies just keep adding more storage silos. The result is significant storage sprawl and management complexity. While enterprises end up with a lot of storage diversity they could theoretically put to good use, the changing value of data over time means that there is a lot of data on the wrong type of storage for its current needs.
Data virtualization software unites flash, shared and cloud storage into a global dataspace for the first time. Once data is virtualized, applications work with its logical location rather than where it physically resides, and intelligent analytics can automatically orchestrate data placement across storage tiers according to policies that align data performance, protection or cost requirements with the demands of the business. These breakthroughs mean we can expect increasing automation and simplification in datacenters in the months to come.
The Post-Performance Era
Reliable, low-latency storage performance is critical to business operations today. But as flash, nonvolatile memory and other performance storage technologies become standards rather than exciting new resources, trend watchers will increasingly look to solutions that deliver simpler scalability and grow seamlessly with ever-increasing data demands. Software that automates data placement will improve datacenter efficiency and increase agility. Indeed, Gartner reports that software-defined storage management solutions are moving from emerging technologies into its customer “Advantage” category, while most of the hardware it tracks is moving toward the commoditization area of the 2016 IT Market Clock for Storage. Expect to read less about speeds and feeds and more about how to streamline operations for management simplicity in the year ahead.
Infrastructure Gains Intelligence
Adding intelligent monitoring of data across different storage systems opens up a multitude of new operational efficiencies for IT. A single data orchestration system can now ensure that only newly created and hot data is stored on flash within application servers, eliminating the need in many instances for separate caching solutions or hops over the network to shared arrays. Meanwhile, cooler data (typically up to 80% of the average company’s data) can be automatically placed on lower cost shared or cloud storage.
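To make the idea concrete, a tiering policy of this kind can be sketched in a few lines. The tier names and age thresholds below are illustrative assumptions, not the behavior of any particular product:

```python
from datetime import datetime, timedelta

# Hypothetical thresholds; real policies would be IT-defined.
HOT_THRESHOLD = timedelta(days=7)     # touched within a week -> flash
WARM_THRESHOLD = timedelta(days=90)   # touched within 90 days -> shared array

def choose_tier(last_accessed: datetime, now: datetime) -> str:
    """Pick a storage tier from how recently the data was accessed."""
    age = now - last_accessed
    if age <= HOT_THRESHOLD:
        return "flash"    # hot data stays on server-side flash
    if age <= WARM_THRESHOLD:
        return "shared"   # warm data moves to shared arrays
    return "cloud"        # cold data lands on low-cost cloud storage
```

An orchestrator would run a rule like this periodically over its metadata and move files whose current location no longer matches the chosen tier.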
In addition, data orchestration technologies make it possible for enterprises to consider storage leasing models that further optimize datacenter cost structures, because data orchestration makes data migration a problem of the past. Unlike a data migration project, data orchestration automatically ensures the right data is in the right place at the right time, across different types of storage. Moves can happen on the fly, or be scheduled for when equipment becomes obsolete or warranties and leases run out. Lastly, intelligent data placement can improve scale-out storage solutions by creating pools of storage that are load-balanced across a number of storage silos, allowing truly parallel access within a logically clustered storage pool.
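The load-balancing idea above can be sketched as a simple least-loaded placement rule; the silo names and byte counts are hypothetical:

```python
# Bytes currently placed on each silo in the pool (hypothetical names).
silos = {"array-a": 0, "array-b": 0, "array-c": 0}

def place(file_size: int) -> str:
    """Place a file on the least-loaded silo and return its name."""
    target = min(silos, key=silos.get)
    silos[target] += file_size
    return target

for size in [100, 50, 75, 25]:
    place(size)
# silos is now {"array-a": 100, "array-b": 75, "array-c": 75}
```

Because every silo is part of one logical pool, reads and writes can proceed against different silos in parallel instead of queuing behind a single array.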
Each of these datacenter advances will improve service levels by keeping unnecessary I/O operations off the network and making room for performance-sensitive data transactions. Intelligent, automated infrastructure also delivers significant savings, as companies no longer have to overprovision expensive capacity years in advance. Instead, enterprises can add resources according to real needs, rather than rely on well-educated guesses about future business requirements.
Automation Increases, Reducing IT Emergencies
As they say, an ounce of prevention is worth a pound of cure, and in the case of Delta’s 2016 outage, a datacenter misstep cost the company $150 million. While usually less critical than the power outage that led to Delta’s problem, storage performance problems can significantly slow down profits. As high-performance storage fills up with data that doesn’t need premium speeds, impatient customers can click elsewhere. Innovative enterprises can now take control by automatically placing data that needs peak performance on fast storage and cooler data on slower, lower-cost storage, according to IT-defined policies. In addition, data virtualization finally makes it easy to scale out by adding more performance or capacity to the global dataspace whenever needed. As a new storage resource is added, data that aligns with the features of that system will automatically move to it.
No IT department likes surprises, and in 2017, they will have to tolerate fewer of them as storage management platforms finally deliver insight into what data is hot, what is cold, and how to optimize data placement automatically to align resources to demand. This will bring more of the efficiency of just-in-time manufacturing to enterprise storage, giving innovators a competitive edge while saving costs.
Metadata Makes a Management Move
Given that the amount of data in the world doubles every two years, data needs to be managed much more effectively on the enterprise end of the data consumption chain. Metadata is the data about our data, such as when a file was created, when it was last opened, and what application uses it. This is similar to how books are logged in a library system – the metadata records the title, author, publication date and so on – and that record helps you find the book. In enterprise storage, new data management systems will make better use of metadata to streamline storage efficiency. As IT finally becomes able to see exactly what data is cold and what is hot, it will get much easier to align data with the different storage resources available to enterprises today. The result will be much less overspending and much more optimization for companies that begin to manage by the intelligence available in their metadata.
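As a rough illustration of managing by metadata, the sketch below models a per-file metadata record and computes how much capacity is cold. The field names and the 90-day threshold are assumptions for the example, not any product’s schema:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class FileMetadata:
    """A hypothetical metadata record, like a library catalog entry."""
    path: str
    created: datetime
    last_opened: datetime
    owning_app: str
    size_bytes: int

def cold_fraction(files, now, cold_after_days=90):
    """Fraction of total capacity not opened within `cold_after_days`."""
    total = sum(f.size_bytes for f in files)
    cold = sum(f.size_bytes for f in files
               if (now - f.last_opened).days > cold_after_days)
    return cold / total if total else 0.0
```

A report like this is what lets IT see, rather than guess, how much of its premium storage is holding data that could live on a cheaper tier.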