3 Steps to Accurately Provisioning for Compute Resources When Migrating to the Cloud

Before moving to the cloud, it’s critical to conduct a thorough workload performance analysis and to understand the various cloud options for your workloads. If you don’t have a clear understanding of your projected needs before you move, you’ll likely choose the wrong instances, resulting in cost and performance issues.

When you’re migrating to the cloud, keep in mind that each of the virtual machines you will be tapping into has a finite amount of compute power, represented by processors and RAM. With the cloud, you will get full control over these compute resources with the ability to scale up and down based on your needs. The hardest work is understanding and defining those needs efficiently, so you can ensure optimal performance of your workloads at the lowest possible cost.

In the simplest terms, your process for optimally provisioning for compute resources includes the following three steps:


1. Measure Your Workloads: A performance analysis of your workloads is essential to identifying instances whose capacity matches your usage and therefore meets your performance requirements. Key metrics to consider include peak CPU utilization, allocated and peak RAM usage, and usage patterns over time.

Peak CPU Utilization

Be sure to examine the peaks in your workload's CPU usage. Many companies mistakenly size their cloud environments based on averages instead of basing their assessments on the full range of peaks and valleys in CPU usage. An average-based approach paints a less accurate picture of your requirements: your infrastructure will suffer serious performance degradation when you hit peaks, and you will incur unnecessary costs during slow periods. The tighter the intervals at which you measure peaks, the more accurate your analysis will be. For some workloads you may want to consider usage at 1-hour intervals; for others, intervals as short as 5 minutes.
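For illustration, here's a minimal Python sketch, assuming you've exported raw CPU utilization samples from your monitoring tool to a CSV file (the file name and column names are hypothetical). It shows how the peak you size against changes with the analysis interval:

```python
# A minimal sketch, not Cloudamize's methodology: compare the peak CPU
# utilization you'd see at different analysis intervals.
import pandas as pd

# cpu_samples.csv is assumed to contain: timestamp,cpu_pct (e.g., one sample per minute)
samples = pd.read_csv("cpu_samples.csv", parse_dates=["timestamp"], index_col="timestamp")

for interval in ["5min", "60min"]:
    # Average utilization within each interval, then take the busiest interval.
    peak = samples["cpu_pct"].resample(interval).mean().max()
    print(f"Peak CPU utilization at {interval} granularity: {peak:.1f}%")

# Sizing to a coarse average hides the short peaks that actually determine
# whether the workload slows down.
```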

Allocated and Peak RAM Usage

To determine your memory requirements, ensure that you are provisioned for your highest possible usage. Memory peaks are unforgiving: if you reach 95% CPU usage, everything slows down; if you reach that same level of memory usage, you can expect a crash.

In the following graph, we tracked the observed memory usage of a virtual machine against the memory available on a particular AWS instance. You can see that the projected AWS instance provides 30GB of memory for the workload, which is sufficient to cover its occasional peaks of 25GB.

[Chart: Observed memory usage on the virtual machine vs. memory available on the projected AWS instance]
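As a rough illustration of that headroom check, here's a minimal Python sketch; the 15% safety margin and the figures below are assumptions, not a Cloudamize rule:

```python
# A minimal sketch: require the candidate instance to cover the observed
# memory peak plus a safety margin, because hitting the memory ceiling
# tends to crash the workload rather than just slow it down.
def has_memory_headroom(peak_gb, instance_mem_gb, headroom=0.15):
    return instance_mem_gb >= peak_gb * (1 + headroom)

observed_peak_gb = 25           # occasional peaks seen in monitoring
candidate_instance_mem_gb = 30  # memory offered by the projected instance

print(has_memory_headroom(observed_peak_gb, candidate_instance_mem_gb))  # True: 30 >= 25 * 1.15
```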

2. Predict the Workloads: After you determine your typical peaks for CPU and memory, you can provision for them in the cloud with appropriate headroom. However, migrating your workload to AWS requires an understanding of the available instance classes, as well as which instance will provide the best performance at the lowest cost.

For example, if your workload is CPU-intensive, with a peak CPU utilization target of 25%, would it be better to use m3.large or c3.large? Or is there another instance among the 60+ types AWS offers that may be a better fit? And how will your workload perform once your application is in the cloud? Without the right analysis, this is all guesswork until you migrate your application and measure its performance on AWS.

At Cloudamize, our platform shows you both your observed CPU usage and the projected CPU usage on a particular instance. With a chart comparing observed CPU usage on the virtual machine to projected CPU usage on an optimal EC2 instance, you can see your own data and how it will look on AWS. If you have a different performance target in mind, you can easily see which EC2 instance you will require to meet it.

As an example, we analyzed a workload on a virtual machine with 6 vCPUs and 32 GB of RAM. Based on this, our platform recommended the AWS EC2 instance size r3.2xlarge at an estimated cost of $5,825 annually. Our projected performance analysis showed this company what their CPU usage would look like on the recommended instance and what it would cost.
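To make the sizing arithmetic concrete, here's a minimal Python sketch, not the Cloudamize algorithm: it picks the smallest instance from a tiny, illustrative catalog that covers a target peak utilization and an observed memory peak. The vCPU and memory figures are AWS's published specs for those instance types; the workload numbers are hypothetical.

```python
# A tiny illustrative subset of EC2 instance types (published vCPU/memory specs).
CATALOG = {
    "m3.large":   {"vcpus": 2, "mem_gb": 7.5},
    "c3.large":   {"vcpus": 2, "mem_gb": 3.75},
    "r3.xlarge":  {"vcpus": 4, "mem_gb": 30.5},
    "r3.2xlarge": {"vcpus": 8, "mem_gb": 61},
}

def required_vcpus(observed_peak_vcpus, target_utilization):
    # To keep peak utilization at or below the target, the observed peak
    # should consume only `target_utilization` of the available vCPUs.
    return observed_peak_vcpus / target_utilization

def smallest_fit(peak_vcpus_used, peak_mem_gb, target_utilization=0.5):
    need_vcpus = required_vcpus(peak_vcpus_used, target_utilization)
    fits = [
        (name, spec) for name, spec in CATALOG.items()
        if spec["vcpus"] >= need_vcpus and spec["mem_gb"] >= peak_mem_gb
    ]
    # Pick the smallest instance (by vCPU count) that satisfies both constraints.
    return min(fits, key=lambda item: item[1]["vcpus"]) if fits else None

# Hypothetical workload: peak usage equivalent to 3 busy vCPUs and 25 GB of RAM.
print(smallest_fit(peak_vcpus_used=3, peak_mem_gb=25))
```

With a 50% utilization target, the sketch rules out the smaller instances and lands on r3.2xlarge; a real analysis would also weigh price and the full instance catalog.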

3. Calculate Cloud TCO: Once you've measured your workloads and predicted your optimal instances, it's time to calculate the total cost of ownership (TCO) of your cloud instances so that you know your costs before you migrate. You may find that certain workloads cost more to run in the cloud than to host on-premises, and you can make the strategic decision to keep those workloads on-premises. Others will be more cost-effective in the cloud, and you'll be able to calculate the TCO of moving each of those workloads onto its optimal AWS instance. Most importantly, you'll know that you're making the right choices and that your TCO estimates are accurate, because they will be based on your performance and usage analysis.
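Here's a minimal sketch of that comparison in Python; the on-premises figure, the storage-and-transfer estimate, and the $0.665/hour rate (roughly the $5,825 annual figure above) are placeholders you'd replace with your own numbers:

```python
# A minimal TCO sketch with placeholder figures -- real estimates should come
# from your own performance analysis and current pricing.
def annual_cloud_cost(hourly_instance_rate, hours_per_year=8760, storage_and_transfer=0.0):
    return hourly_instance_rate * hours_per_year + storage_and_transfer

def compare_tco(on_prem_annual, hourly_rate, storage_and_transfer=0.0):
    cloud = annual_cloud_cost(hourly_rate, storage_and_transfer=storage_and_transfer)
    return {
        "on_prem_annual": on_prem_annual,
        "cloud_annual": round(cloud, 2),
        "cheaper": "cloud" if cloud < on_prem_annual else "on-premises",
    }

# Hypothetical workload: $9,000/year on-premises vs. an instance at $0.665/hour
# plus $500/year of storage and data transfer.
print(compare_tco(on_prem_annual=9000, hourly_rate=0.665, storage_and_transfer=500))
```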

Boiling It All Down

As you’re getting ready to migrate, optimally provisioning your workloads comes down to finding the right compute resources to move onto, based on an in-depth performance analysis. By following these steps, your projected performance analysis will prepare you to migrate each workload to its optimal instance while also giving you a clear idea of what to expect after you migrate. Skipping any of these steps puts your organization at risk of under- or over-provisioning your workloads and flying blind into the cloud.
