
Three Keys to Virtualization Optimization

With cloud migration starting to look like an “Amazing Race,” many industry players have begun to assume that the sun is setting on virtualization. On the contrary: although the technology has evolved considerably alongside its cloudier cousin, virtualization is alive and well. But that’s not to say situations and services aren’t changing.

To start, the traditional virtualization landscape has been disrupted by adjacent technologies, such as containers, microservices, and cloud services, which are forcing us, as IT professionals, to broaden our knowledge bases considerably. It is no longer sufficient for virtualization administrators to operate only within the boundaries of a single vendor’s stack, whether that of VMware or, increasingly, Microsoft.

At the same time, VMware, which has held the top market spot since virtualization first entered the data center, has been increasingly challenged by Microsoft’s Hyper-V as Microsoft continues to drive toward parity on key features and functionality previously exclusive to VMware vSphere. For example, vSphere’s ability to overcommit the memory of virtual machines (VMs) on a single host (such as running VMs with a total of 32GB of virtual RAM on a physical server that has only 16GB of physical RAM) is now supported by Hyper-V as well. This and other points of feature parity, along with the way Hyper-V is bundled into Microsoft’s software licensing, are encouraging longtime VMware customers to abandon the market leader for Microsoft.

It is also worth mentioning that, on the whole, we work within a much more saturated technology market these days. Digital transformation has introduced a plethora of mobile applications and hosted digital experiences, available through web portals and other channels, that have become critical to the success of the modern business. Not only must today’s organizations be able to pivot and evolve with new technologies, but IT departments must also be in lockstep with business stakeholders to enable and ensure the quality delivery of these services. The success of the business depends on it.

Previously, we (especially those among us who consider ourselves virtualization administrators) were primarily focused on optimizing infrastructure elements, the operations side of IT. Now, we need to think about how we are optimizing the end-user experience, which is a much bigger task when you consider the rate and amplitude of change today.

So, to help address these new challenges and ensure our environments are best prepared to scale and perform alongside workloads in the cloud, here are three keys to modern virtualization optimization and how to begin implementing each.

1. Know Where You’ve Been

The first step in any optimization strategy is to have a fundamental knowledge of historical data. To make any kind of technical improvement or adjustment, you need to know what your baseline, or normal, environment looks like. This should include not only a real-time look at your systems’ health and performance, but also an inventory of all the available virtual resources in your data center and how efficiently and effectively they are being utilized.
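As a concrete illustration, the sketch below captures a simple inventory-plus-performance snapshot of a vSphere environment with the open-source pyVmomi SDK and appends it to a CSV file. The hostname, credentials, and output path are placeholders, and Hyper-V shops would use a different API, but the idea of periodically recording what “normal” looks like is the same.

```python
# Minimal baseline snapshot for a vSphere environment, assuming pyVmomi.
# Connection details and the output path are hypothetical placeholders.
import csv
import ssl
from datetime import datetime, timezone

from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

context = ssl._create_unverified_context()  # lab use only; verify certs in production
si = SmartConnect(host="vcenter.example.local", user="readonly@vsphere.local",
                  pwd="********", sslContext=context)
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)

with open("vm_baseline.csv", "a", newline="") as f:
    writer = csv.writer(f)
    for vm in view.view:
        s = vm.summary
        writer.writerow([
            datetime.now(timezone.utc).isoformat(),  # snapshot timestamp
            s.config.name,
            s.runtime.powerState,
            s.config.numCpu,
            s.config.memorySizeMB,                   # allocated memory
            s.quickStats.overallCpuUsage,            # MHz in use
            s.quickStats.guestMemoryUsage,           # MB actively used
        ])

view.Destroy()
Disconnect(si)
```

Run on a schedule (via cron every 15 minutes, say), a file like this becomes the historical baseline that the rest of this article builds on.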

Why is this baseline key to ensuring that your virtual data center is as modernized and efficient as possible?

First, as you look to add new services and applications alongside newer hardware, whether built on newer chipsets or acting as hardware-assisted accelerators, the ability to reference an established health and performance baseline will make it much smoother to scale in support of these added workloads.

Second, and perhaps most importantly, having a firm grasp of how far your virtual environment extends is key to protecting your organization’s data, and ensuring the most effective use of available resources. VM sprawl, a common challenge inherent to virtualization, stems from VMs that were ultimately lost in the daily IT operations shuffle, rather than properly retired. Sprawl creates both the potential for wasted or lost resources and serious security threats. A lost or forgotten VM in need of a security patch is an easy opportunity for attackers.
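Building on the hypothetical baseline file from the earlier sketch, a minimal sprawl check might flag any VM that has not been powered on in the last 30 days as a candidate for review, retirement, or at the very least patching:

```python
# Hypothetical sprawl check over the vm_baseline.csv records written above:
# flag VMs with no powered-on snapshot in the last 30 days.
import csv
from collections import defaultdict
from datetime import datetime, timedelta, timezone

cutoff = datetime.now(timezone.utc) - timedelta(days=30)
seen_on = defaultdict(bool)  # VM name -> powered on at least once since cutoff

with open("vm_baseline.csv", newline="") as f:
    for ts, name, power, *_ in csv.reader(f):
        if datetime.fromisoformat(ts) >= cutoff:
            seen_on[name] |= (power == "poweredOn")

for name, was_on in sorted(seen_on.items()):
    if not was_on:
        print(f"Review candidate (no power-on in 30 days): {name}")
```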

This is especially important because, with so many alternatives to traditional virtualization available on the market today, basic mistakes and administrator oversights, like incorrectly sizing or otherwise mismanaging virtual resources, can lead business units to turn elsewhere to meet their needs, including public cloud services and IT outsourcing. Enter shadow IT. For instance, 71 percent of North American IT professionals who responded to a recent SolarWinds survey estimated that their end-users use non-IT-sanctioned cloud-based applications.

Consequently, you should leverage technology that helps you capture and maintain records of virtual environment performance. Such a tool will ideally also provide analysis as virtual environments change, allowing more reliable forecasting in terms of resource allocation, budgets, and performance. Finally, any such tool you implement should monitor the entire virtual environment; knowing which resources are currently in use, and the health of each, will allow you to regain confidence that your data center is efficient and effective.
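One simple way such a tool can surface change, sketched below under the same CSV assumption as before: compare each VM’s latest CPU reading against its own historical mean and flag large deviations. The two-sigma threshold and the minimum history length are illustrative choices, not product recommendations.

```python
# Illustrative change detection over the recorded baseline: flag a VM whose
# latest CPU reading deviates more than two standard deviations from its mean.
import csv
import statistics
from collections import defaultdict

history = defaultdict(list)  # VM name -> ordered CPU readings (MHz)
with open("vm_baseline.csv", newline="") as f:
    for _ts, name, _power, _cpus, _mem, cpu_mhz, _mem_used in csv.reader(f):
        history[name].append(float(cpu_mhz))

for name, readings in history.items():
    if len(readings) < 10:
        continue  # not enough history for a meaningful baseline
    mean = statistics.mean(readings[:-1])
    stdev = statistics.pstdev(readings[:-1]) or 1.0  # avoid divide-by-zero noise
    latest = readings[-1]
    if abs(latest - mean) > 2 * stdev:
        print(f"{name}: latest CPU {latest:.0f} MHz deviates from baseline {mean:.0f} MHz")
```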

2. Automate to Remediate

After establishing a reliable baseline for your environment, you should then look to automation to relieve yourself of much of the manual work involved. Truly optimizing a virtualized environment starts with instrumenting it with an automated management technology that gathers data, analyzes performance, and provides automatic alerts, forming the basis for more advanced automation and orchestration.
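The pattern underneath most of this tooling is simple: declarative rules that map a metric condition to an action, starting with an alert and graduating to automated remediation as trust grows. The sketch below is a bare-bones, hypothetical version of that pattern; the metrics snapshot and remediation hooks stand in for whatever your management stack exposes.

```python
# A minimal sketch of the alert-then-remediate pattern: rules map a condition
# over a metrics snapshot to an action. All names here are hypothetical.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    name: str
    triggered: Callable[[dict], bool]   # condition over a metrics snapshot
    remediate: Callable[[dict], None]   # action taken when the rule fires

def alert(metrics: dict) -> None:
    # Stand-in for email/ticketing; automated actions would plug in here too.
    print(f"ALERT: {metrics['vm']} memory at {metrics['mem_pct']:.0f}%")

rules = [
    Rule("memory pressure",
         lambda m: m["mem_pct"] > 90,
         alert),  # start with alerting; graduate to automated actions later
]

snapshot = {"vm": "app-db-01", "mem_pct": 94.0}  # invented sample reading
for rule in rules:
    if rule.triggered(snapshot):
        rule.remediate(snapshot)
```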

Of course, as the data center continues to consolidate and roles become much less siloed, many IT professionals are becoming accidental virtualization administrators who may lack the fundamental expertise of trained specialists. You may fall into this camp. That compounds an already difficult situation when it comes to remediating virtual environment performance issues. In fact, according to the 2016 State of Data Center Architecture and Monitoring report by ActualTech Media, IT administrators report needing anywhere from an hour to a full day, on average, to accurately identify the root cause of a virtualization performance problem, and even longer to remediate it.

But the challenge is not insurmountable. The learning curve can be drastically reduced with virtualization management that cuts through the noise to quickly surface the root cause of a performance problem and enables near-immediate remediation through recommended, and even automated, actions formed from analysis of data unique to your virtual data center.

Such tooling will also offer relief from manual troubleshooting exercises across key constructs like compute, memory, storage, and the network by automatically analyzing an environment’s historical data, reporting on how it has grown or been utilized over time, and then predicting how it will look in the future based on algorithms that factor in today’s utilization patterns, historical growth spurts, and so on. Instead, you can dedicate more time to investigating relevant, adjacent technologies and honing your modern data center skillsets.
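To make the prediction idea concrete, the sketch below fits a linear trend to twelve months of invented storage-consumption figures and extrapolates it forward. Real tools weigh utilization patterns and growth spurts far more carefully; this shows only the shape of the technique.

```python
# Rough growth projection: fit a linear trend to monthly storage usage and
# extrapolate. The usage figures are invented for illustration.
import numpy as np

months = np.arange(12)  # last 12 months, oldest first
used_tb = np.array([4.0, 4.2, 4.3, 4.6, 4.8, 5.1,
                    5.2, 5.6, 5.9, 6.1, 6.4, 6.8])

slope, intercept = np.polyfit(months, used_tb, 1)  # TB per month, offset
for ahead in (3, 6, 12):
    projected = slope * (months[-1] + ahead) + intercept
    print(f"In {ahead:2d} months: ~{projected:.1f} TB "
          f"(trend: +{slope:.2f} TB/month)")
```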

Overall, organizations that successfully transition to greater automation within their virtual environments will create a more resilient and responsive virtualized infrastructure that truly unleashes the benefits of virtualization, including enhanced speed, greater cost savings, and simplified end-user servicing.

3. The Report Card

The last critical component of any optimization strategy is reporting. If for no other reason, invest in reporting because it is the best way for you to translate data into dollars, bridging the gap between the IT department and business management.

You should look to establish a showback system, in which reports on resource consumption, and possibly even theoretical costs, can be shared with business decision makers and end-users. Such reporting helps surface the benefits of an efficiently managed virtual environment, such as reclaiming 25 percent of virtual resources through optimization processes. Reclamation on that scale could mean an organization no longer needs to invest significant capital in new servers; instead, new VMs can be provisioned within the existing virtual environment.
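A showback report does not need to be elaborate to be persuasive. The sketch below rolls allocated resources up by business unit and attaches a theoretical monthly cost; the rate card and the VM records are invented placeholders, and a real implementation would pull allocations from the inventory baseline described earlier.

```python
# Minimal showback sketch: theoretical monthly cost per business unit.
# Rates and VM allocation records are assumed values for illustration.
from collections import defaultdict

RATES = {"vcpu": 15.0, "ram_gb": 5.0, "disk_gb": 0.10}  # $/month, assumed

vms = [  # hypothetical allocation records, e.g. exported from inventory
    {"owner": "Marketing", "vcpu": 4, "ram_gb": 16, "disk_gb": 200},
    {"owner": "Marketing", "vcpu": 2, "ram_gb": 8,  "disk_gb": 100},
    {"owner": "Finance",   "vcpu": 8, "ram_gb": 32, "disk_gb": 500},
]

costs = defaultdict(float)
for vm in vms:
    costs[vm["owner"]] += sum(vm[k] * RATES[k] for k in RATES)

for owner, cost in sorted(costs.items()):
    print(f"{owner:<12} theoretical cost: ${cost:,.2f}/month")
```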

In addition, and especially in consideration of end-users, who are often eager to take advantage of what appears to be an endless pool of virtual resources, a reporting mechanism is a useful way to demonstrate that virtual resources are not free, and thus to curb the potential for sprawl.

It is true that a majority of organizations are increasingly looking to the cloud for benefits like cost savings, speedy application performance, and less infrastructure management. However, that same majority will continue to maintain an on-premises data center in which virtualization is, and will remain, king. Thus, as a virtualization administrator, whether by intent or by accident, you should look to these three keys to optimization in order to best capitalize on the new technologies introduced by the cloud and hybrid IT, such as Functions as a Service (serverless), containers, microservices, and public cloud services, and to manage your virtual environment in a way that more effectively and efficiently delivers a quality experience to your end-users.

SolarWinds
