
Executive Viewpoint 2016 Prediction: Load DynamiX

Flash storage becomes mainstream for Tier 1 applications, and Software Defined Storage finds its place. Both technologies can be highly cost-effective, depending upon the use case. In any scenario, understanding your current production workloads' I/O profiles, and how those workloads can benefit from the new technologies, will be key to storage infrastructure evolution decisions.

In 2015, Flash storage (SSDs) moved from the evaluation phase into actual production. In 2016, we'll see significant growth as it gains wide adoption. Flash adoption has been slow due to the relatively high cost of the technology compared to spinning media. Throughout 2015 and into 2016, storage professionals will need to make informed decisions on where to employ flash in their storage infrastructure. Every data center would love to deploy flash storage instead of hard disks, but it simply is not yet cost-justified for every workload. The challenge is characterizing workloads to know which of them require flash and which should remain on HDDs or hybrids, in order to balance cost and budget against the return on that investment.
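To make that characterization concrete, here is a minimal sketch in Python of the kind of classification rule involved. The metric names and thresholds are illustrative assumptions, not any vendor's methodology; the general idea is that random, read-heavy, latency-sensitive workloads tend to benefit most from flash.

```python
# A minimal sketch of workload characterization, assuming per-workload I/O
# metrics have already been collected (e.g., from iostat, blktrace, or a
# workload-analysis product). Thresholds are illustrative, not vendor guidance.
from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    read_pct: float        # fraction of I/Os that are reads
    random_pct: float      # fraction of I/Os that are random
    avg_latency_ms: float  # observed average latency on current storage

def flash_candidate(w: Workload) -> bool:
    """Heuristic: random-read-heavy, latency-sensitive workloads benefit most
    from flash; sequential, throughput-oriented ones often do fine on HDDs."""
    return w.random_pct > 0.7 and w.read_pct > 0.5 and w.avg_latency_ms > 5.0

workloads = [
    Workload("oltp-db", read_pct=0.8, random_pct=0.9, avg_latency_ms=12.0),
    Workload("backup", read_pct=0.1, random_pct=0.05, avg_latency_ms=3.0),
]

for w in workloads:
    tier = "flash" if flash_candidate(w) else "HDD/hybrid"
    print(f"{w.name}: {tier}")
```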

TLC flash will be a major new technology due to its low cost per GB. TLC (triple-level cell) flash is a type of solid-state NAND flash memory that stores three bits of data per cell of flash media. TLC flash is less expensive than single-level cell (SLC) and multi-level cell (MLC) flash memory, which makes it very appealing. The reliability of these devices is still a challenge, but it will improve dramatically in 2016 and have a major impact in lowering the cost of deploying solid-state storage systems. TLC will shrink the significant cost/GB gap with spinning media for many application workloads and accelerate the penetration of flash storage for Tier 1 and even Tier 2 workloads.
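As a back-of-the-envelope illustration of why bits per cell matter, the sketch below computes idealized relative cost per GB. The baseline price is a hypothetical normalization, not a market figure, and real pricing also reflects endurance, controller overprovisioning, and manufacturing yield.

```python
# Illustrative math only: on the same die, storing more bits per cell
# multiplies capacity, so idealized cost per GB falls roughly in proportion.
bits_per_cell = {"SLC": 1, "MLC": 2, "TLC": 3}
slc_cost_per_gb = 1.00  # hypothetical normalized baseline, not a real price

for name, bits in bits_per_cell.items():
    # Capacity scales with bits per cell; idealized cost/GB scales inversely.
    print(f"{name}: ~{slc_cost_per_gb / bits:.2f}x SLC cost per GB")
```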

For service providers, and to support Tier 2 enterprise workloads, Software Defined Storage will move from the evaluation phase of 2015 to the deployment phase in 2016, at least for non-mission-critical applications. SDS offers great value in larger-scale Tier 2 deployments and will continue to grow. However, SDS presents a big challenge: when a company chooses to deploy it, the end user becomes the storage integrator. In the past, companies could rely on NetApp, EMC, or IBM to do the integration and scalability testing of their storage products, but that burden now shifts to the end user, who must ensure that all of the pieces fit together and that everything performs as desired once the SDS solution is deployed into production.
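One way an end user might shoulder that integration-testing burden is to script synthetic load tests against the assembled stack before going to production. The sketch below drives the open-source fio tool from Python and checks a latency objective; the volume path and SLO value are hypothetical, and the JSON field layout can vary across fio versions.

```python
# A minimal pre-production validation sketch for an SDS deployment, assuming
# fio is installed and its JSON output matches the fields used below
# (recent fio versions report completion latency percentiles in clat_ns).
import json
import subprocess

SLO_P99_READ_LAT_US = 2_000  # example service-level objective: 2 ms p99

def run_fio(target: str) -> dict:
    """Run a short random-read test against the SDS volume, return fio's JSON."""
    cmd = [
        "fio", "--name=sds-validate", f"--filename={target}",
        "--rw=randread", "--bs=4k", "--iodepth=32",
        "--runtime=60", "--time_based", "--direct=1",
        "--output-format=json",
    ]
    out = subprocess.run(cmd, capture_output=True, text=True, check=True)
    return json.loads(out.stdout)

def p99_read_latency_us(result: dict) -> float:
    # Percentile keys are strings like "99.000000" in fio's JSON output.
    pctiles = result["jobs"][0]["read"]["clat_ns"]["percentile"]
    return pctiles["99.000000"] / 1_000.0  # ns -> us

if __name__ == "__main__":
    result = run_fio("/dev/mapper/sds-test-volume")  # hypothetical test volume
    p99 = p99_read_latency_us(result)
    status = "PASS" if p99 <= SLO_P99_READ_LAT_US else "FAIL"
    print(f"p99 read latency: {p99:.0f} us -> {status}")
```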

The key to future deployments will be aligning production application workload performance requirements to storage purchase, configuration, and deployment decisions. Without this knowledge, massive over- and under-provisioning of storage will continue. With SDS, combining components from multiple vendors introduces a whole gamut of unknowns. Storage architects and engineers must learn to analyze the I/O profiles of their production storage environments and have the tools necessary to adequately evaluate the multitude of new storage technologies.
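As a sketch of what extracting an I/O profile can look like, the following Python reads a hypothetical per-I/O CSV trace (such as one derived from blktrace or a vendor analytics tool) and derives the read mix, randomness, IOPS, and average I/O size. The column names and trace file are assumptions for illustration.

```python
# A minimal sketch of deriving an I/O profile from a block trace, assuming a
# non-empty CSV with one I/O per row and the columns:
#   timestamp_s, op ("R"/"W"), offset_bytes, size_bytes
import csv
from collections import Counter

def profile(path: str) -> dict:
    ops = Counter()
    total = total_bytes = seq = 0
    prev_end = None
    first_ts = last_ts = None
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            ts = float(row["timestamp_s"])
            off = int(row["offset_bytes"])
            size = int(row["size_bytes"])
            if first_ts is None:
                first_ts = ts
            last_ts = ts
            ops[row["op"]] += 1
            total_bytes += size
            # Count an I/O as sequential if it starts where the last one ended.
            if prev_end is not None and off == prev_end:
                seq += 1
            prev_end = off + size
            total += 1
    duration = max(last_ts - first_ts, 1e-9)
    return {
        "read_pct": ops["R"] / total,
        "random_pct": 1 - seq / total,
        "iops": total / duration,
        "avg_io_size_kb": total_bytes / total / 1024,
    }

print(profile("workload_trace.csv"))  # hypothetical trace file
```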

Workload analysis and modeling will be essential to understanding production storage environments. It will enable architects and IT managers to successfully evolve their storage systems. In 2016, they will have better tools that offer deeper insight into their production environments, empowering them to cost-effectively take advantage of the new storage technologies that are evolving and to cut their storage costs by 50% or more. This is where storage performance analytics solutions like Load DynamiX offer tremendous value, as they provide the workload insight that has been so lacking.

