The CapEx and OpEx benefits of consolidating infrastructure onto hypervisors have peaked in many IT shops. Now, as more virtual machines (VMs) are deployed, organizations are finding that their systems have become harder to manage, application performance has degraded, and IT expenses have risen along with headcounts and licensing fees.
In addition, tuning concurrency demands, data volumes, and analytic workloads becomes more complicated as VMs add entangled variables to load-balancing and system-performance equations. Consequently, chronic performance issues abound, and IT spends more time addressing the problems created by the virtual machines that have taken over the data center.
To be clear, virtualization of the data center is not the problem. Virtualization is about making the most of IT software and hardware infrastructure. It has a number of advantages, such as reducing dependencies on additional hardware and new software licenses and centralizing network and application management. The problem many data centers are grappling with today is the unbounded, haphazard growth of virtual machines and the over-allocation of resources to them. VM sprawl is a very real problem that costs IT shops real time and serious money.
In essence, if your organization is using more virtual machines than it actually needs, then it is defeating the purpose of virtualization. But dealing with VM sprawl is challenging because it’s not just a technical problem with clearly defined symptoms that can be resolved by changing some settings or by applying a patch. It’s also not always an obvious problem since it usually builds up over time.
Typically, when virtualization is implemented, existing physical servers are converted into virtual machines as part of a consolidation plan. Mindsets change once the VM implementation is complete since purchasing new hardware is no longer necessary when new servers are needed. With just a few clicks of the mouse, a new server – a virtual machine – can be created within minutes.
As a result, requests for new VMs are quickly fulfilled with little regard for the resources they consume or for the possibility that physical hosts will be overwhelmed as they fill up with more and more virtual machines. Hosts that started out with 15:1 consolidation ratios can end up running twice that number of VMs. The result is degraded performance throughout the data center, with no clear path to fixing the intertwined causes.
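A quick way to see how drift like this overwhelms a host is to compare the vCPUs and RAM allocated to its VMs against what the hardware actually provides. The sketch below uses entirely hypothetical host specs and VM sizes, and the field names (`vcpus`, `ram_gb`) are assumptions for illustration:

```python
# Hypothetical sketch: estimate vCPU and memory overcommit on a single host.
# All numbers below are illustrative, not measurements from a real data center.

def overcommit_ratios(host_pcpus, host_ram_gb, vms):
    """Return (vCPU:pCPU ratio, allocated:physical RAM ratio)."""
    total_vcpus = sum(vm["vcpus"] for vm in vms)
    total_ram_gb = sum(vm["ram_gb"] for vm in vms)
    return total_vcpus / host_pcpus, total_ram_gb / host_ram_gb

# A host originally sized for a modest consolidation ratio...
host_pcpus, host_ram_gb = 32, 512
# ...that has drifted to 30 VMs of 4 vCPUs / 16 GB each.
vms = [{"vcpus": 4, "ram_gb": 16} for _ in range(30)]

cpu_ratio, ram_ratio = overcommit_ratios(host_pcpus, host_ram_gb, vms)
print(f"vCPU overcommit: {cpu_ratio:.2f}:1, RAM allocated: {ram_ratio:.2f}x physical")
```

Tracking these two ratios per host over time makes the otherwise gradual build-up of sprawl visible before performance degrades.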
Additionally, physical servers that host virtual machines typically have multiple multi-core CPUs, large amounts of RAM, numerous network adapters and expensive Fibre Channel adapters, and large amounts of storage. The hardware for a virtual host therefore costs much more than that of a single-use physical server, because it must scale to support many virtual machines. On top of the hardware costs, the hypervisor software and the management products needed to operate it add further expense. And as if that weren't bad enough, unchecked VM proliferation puts IT shops at risk of breaching software license compliance.
Unfortunately, hunting for over-sized, stale, or zombie VMs is time-consuming and difficult, especially in data centers running hundreds of virtual machines or more. Consequently, many IT shops are addressing VM sprawl and getting ahead of the problems it generates by deploying performance-enhancing infrastructure optimization software.
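The manual part of that hunt can at least be narrowed with a simple inventory filter. This is a minimal sketch, assuming an inventory export with hypothetical fields (`name`, `avg_cpu_pct`, `last_login`); in practice the data would come from your hypervisor's API or monitoring tooling:

```python
from datetime import datetime, timedelta

# Hypothetical sketch: flag candidate "zombie" VMs from an inventory export.
# A VM is flagged when its average CPU is negligible AND nobody has logged
# in for a long time. Thresholds are illustrative defaults.

def flag_zombies(inventory, now, max_cpu_pct=2.0, idle_days=90):
    cutoff = now - timedelta(days=idle_days)
    return [vm["name"] for vm in inventory
            if vm["avg_cpu_pct"] < max_cpu_pct and vm["last_login"] < cutoff]

now = datetime(2024, 6, 1)
inventory = [
    {"name": "web-01",   "avg_cpu_pct": 35.0, "last_login": datetime(2024, 5, 30)},
    {"name": "test-old", "avg_cpu_pct": 0.4,  "last_login": datetime(2023, 11, 2)},
]
print(flag_zombies(inventory, now))  # only the idle, long-untouched VM is flagged
```

A flagged VM is a candidate for review, not automatic deletion; the point is to shrink a list of hundreds down to the handful worth a human look.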
These optimization services leverage a physical host's memory tiers – DRAM and storage – to increase the workload densities the host can handle while reducing the number of VMs required for those workloads. This in turn yields faster applications, allowing IT to consolidate workloads from multiple applications into a single virtualized server instance. The performance gains come from optimizing the software stack at the storage and memory levels.
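The tiering idea itself can be illustrated in miniature: keep hot data in a small, fast DRAM tier and demote cold data to a slower flash/storage tier, promoting it back on access. The class below is a toy model of that principle, not how any particular product implements it; real optimization software does this transparently beneath the application.

```python
from collections import OrderedDict

# Toy sketch of DRAM/storage tiering: a bounded LRU "DRAM" tier that
# spills its coldest entries to a dict standing in for the flash tier.

class TwoTierStore:
    def __init__(self, dram_capacity):
        self.dram = OrderedDict()   # hot tier, LRU-ordered
        self.flash = {}             # cold tier (stand-in for flash/storage)
        self.capacity = dram_capacity

    def put(self, key, value):
        self.dram[key] = value
        self.dram.move_to_end(key)              # mark as most recently used
        while len(self.dram) > self.capacity:
            cold_key, cold_val = self.dram.popitem(last=False)
            self.flash[cold_key] = cold_val     # demote the coldest item

    def get(self, key):
        if key in self.dram:
            self.dram.move_to_end(key)
            return self.dram[key]
        value = self.flash.pop(key)             # promote on access
        self.put(key, value)
        return value

store = TwoTierStore(dram_capacity=2)
for k in ("a", "b", "c"):
    store.put(k, k.upper())
print(sorted(store.dram), sorted(store.flash))  # "a" has been demoted to flash
```

The payoff in the real systems described above is that the working set stays in DRAM while capacity scales with cheaper flash, which is what lets a single host absorb higher workload densities.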
While there are a variety of ways to improve the performance of mission-critical, data-intensive applications, many come with their own challenges. The most common approaches address performance at the hardware level – faster storage, more spindles, higher-powered CPUs, and more memory. What these solutions have in common is that they alleviate performance concerns only temporarily, and at a high cost. Many software approaches require changes to the application, to the data, or, in the worst case, to both, leading to lengthy, complex deployments with a higher risk of downtime or data loss. With application datasets growing and workloads diversifying, organizations are thinking outside traditional paradigms to meet their performance and scalability requirements while staying within strict budgets. Those who succeed at reducing their VM footprints and costs are seeking out solutions that:
- Are designed to integrate easily with their existing infrastructure – whether it is on premises, in the cloud or both.
- Are quick and easy to implement. Think minutes vs. hours.
- Don’t require changes to existing applications or their data. Requiring no new coding or reconfiguration is key to long-term success.
- Are able to consolidate by running multiple applications within a single instance.
As organizations continue to modernize their application infrastructure, the goal of creating a complete, operationally efficient IT infrastructure has never been more important. Effective optimization software enables organizations to get more out of their existing resources while continuing to meet strict performance SLAs. By combining the consolidation capabilities of virtualization with the performance of memory and flash storage, organizations gain technology that unifies memory and storage into a highly efficient software stack that commonly exceeds the needs of existing application workloads. As a result, organizations get more out of what they have already paid for while reducing the need to purchase more hardware to meet the growing capacity and performance demands of their mission-critical applications.