How Right-Sizing Reduced One Company's TCO Calculation by 74% and Accelerated Its Adoption of AWS

Despite the increasingly common success stories of companies migrating to the cloud, cloud adoption doesn't happen quickly or easily. Many companies are uncertain about what their cost and performance will be in the cloud, so systems integrators and cloud service providers encounter hesitation and friction around adoption, especially when it comes to presenting a total cost of ownership (TCO) calculation.

Many companies see their TCO calculation and decide that it’s too expensive. Others may suspect that the calculation they’ve been provided is inaccurate, or they may be curious about what the analysis was based upon.

The challenge is that many TCO calculations don't account for the changes that systems will undergo upon migration. On-premises infrastructure is typically over-provisioned, so calculations that aren't based on a company's precise right-sized cloud footprint come out more expensive than they should be.

To accelerate cloud adoption and overcome “TCO friction,” systems integrators and cloud service providers should base TCO calculations on a company’s predicted right-sized cloud. This should include identifying both the right-sized compute and storage options for each machine in the company’s environment. Doing this correctly, however, relies on collecting and analyzing several data points. Take a look at the following steps for a guide on how to calculate TCO correctly, and see a real example of how this helped a company confidently build the business case for moving to the cloud.

Step 1: Conduct a Performance Analysis of Compute and Storage Resources

A performance analysis of your workloads is essential to identifying the compute and storage options in the cloud that will match capacity to usage and therefore meet performance requirements.

Essential metrics for a performance analysis of storage resources include required peak IOPS, available maximum IOPS, disk capacity, disk occupancy, required peak throughput, available maximum throughput, and operation size.
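
As a minimal sketch of how these metrics feed a storage decision, the following compares observed peaks against a volume catalog. The volume names, limits, and prices below are illustrative placeholders, not quoted provider figures:

```python
from dataclasses import dataclass

@dataclass
class VolumeOption:
    name: str
    max_iops: int              # available maximum IOPS
    max_throughput_mbs: int    # available maximum throughput (MB/s)
    max_size_gib: int
    price_per_gib_month: float

# Placeholder catalog; substitute your provider's real volume specs.
CATALOG = [
    VolumeOption("standard-ssd", 16_000, 1_000, 16_384, 0.08),
    VolumeOption("high-perf-ssd", 64_000, 4_000, 16_384, 0.125),
]

def fits(vol: VolumeOption, peak_iops: int, peak_mbs: int, used_gib: int) -> bool:
    """A volume fits if observed peaks and occupied capacity stay within its limits."""
    return (peak_iops <= vol.max_iops
            and peak_mbs <= vol.max_throughput_mbs
            and used_gib <= vol.max_size_gib)

def cheapest_volume(peak_iops: int, peak_mbs: int, used_gib: int) -> VolumeOption:
    candidates = [v for v in CATALOG if fits(v, peak_iops, peak_mbs, used_gib)]
    return min(candidates, key=lambda v: v.price_per_gib_month * used_gib)

# Example: a disk with 9,000 peak IOPS, 300 MB/s peak throughput, 500 GiB occupied
print(cheapest_volume(9_000, 300, 500).name)  # -> standard-ssd
```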

Key metrics to analyze for understanding CPU requirements in the cloud include peak CPU utilization, allocated and peak RAM usage, CPU type, number of CPU cores, and usage patterns.

Peak CPU Utilization

Be sure to examine the peaks in your workload CPU usage. Many companies mistakenly size their cloud environments based on averages rather than on the full range of peaks and valleys in CPU usage. If you size your cloud environment based on averages, your infrastructure will suffer serious performance degradation when you hit peaks, and you will incur unnecessary costs during slow periods. The tighter the intervals at which you measure peaks, the more accurate your analysis will be. For some workloads, you may want to consider usage at 1-hour intervals; for others, you may want to consider intervals as short as 5 minutes.
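
As a minimal sketch of why interval size matters, the following compares the overall average with the peak of hourly and 5-minute averages, assuming a CSV of per-minute utilization samples (the file and column names are placeholders):

```python
import pandas as pd

# cpu: a Series of CPU utilization percentages indexed by timestamp
cpu = pd.read_csv("cpu_samples.csv", index_col="timestamp",
                  parse_dates=True)["cpu_pct"]

overall_avg = cpu.mean()                           # a misleading basis for sizing
hourly_peak = cpu.resample("1h").mean().max()      # peak of 1-hour averages
five_min_peak = cpu.resample("5min").mean().max()  # peak of 5-minute averages

print(f"average: {overall_avg:.0f}%, 1-hour peak: {hourly_peak:.0f}%, "
      f"5-minute peak: {five_min_peak:.0f}%")
```

The shorter the resampling interval, the less averaging hides short bursts; for bursty workloads, the 5-minute peak will typically sit well above the hourly peak.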

Allocated and Peak RAM Usage

To determine memory requirements, you want to ensure that you are provisioned for your highest possible usage requirements. Memory peaks are unforgiving. If you reach 95% CPU usage, you will see everything slow down. If you reach that same level in memory usage, you can expect a crash.
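
A minimal sketch of that rule, assuming an illustrative 85% ceiling (the ceiling is our assumption, not a vendor guideline):

```python
def required_memory_gib(peak_ram_gib: float, ceiling: float = 0.85) -> float:
    """Provision enough memory that the observed peak stays under the ceiling,
    since exceeding physical memory crashes the workload rather than slowing it."""
    return peak_ram_gib / ceiling

# A machine that peaked at 12.8 GiB needs at least ~15.1 GiB provisioned
print(f"{required_memory_gib(12.8):.1f} GiB")
```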

Usage Patterns

Your compute and storage performance analysis should also include usage patterns. Identify idle compute resources and unused storage volumes for each machine so that in the cloud you can turn off what you're not using. Keep track of all compute resources: how many times each instance is turned on and off, how often it is accessed, and when during the day it is accessed the most and least. You will be able to see whether each resource is idle most of the time or actively used, and from there you can determine whether it can be turned off at certain times or retired altogether.
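
As a minimal sketch of this classification, the following flags each instance as a retirement candidate, a scheduling candidate, or always-on, based on how often it sits below an idle threshold. The 5% threshold, file name, and column names are placeholder assumptions:

```python
import pandas as pd

samples = pd.read_csv("instance_samples.csv", parse_dates=["timestamp"])

def classify(group: pd.DataFrame, idle_pct: float = 5.0) -> str:
    idle_share = (group["cpu_pct"] < idle_pct).mean()  # fraction of samples idle
    busiest_hour = (group.groupby(group["timestamp"].dt.hour)["cpu_pct"]
                    .mean().idxmax())
    if idle_share > 0.95:
        return "candidate for retirement"
    if idle_share > 0.50:
        return f"schedule off-hours shutdown (busiest hour: {busiest_hour}:00)"
    return "keep always-on"

for instance_id, group in samples.groupby("instance_id"):
    print(instance_id, "->", classify(group))
```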

Step 2: Right-Size Compute and Storage

Cloud service providers offer multiple compute and storage options to accommodate different use cases. These options combine CPU, memory, storage, and networking capacity in different ways to provide the optimal resources for each application, and there are millions of potential configurations. Once you've conducted a performance analysis, the next step is to project the workload characteristics onto every available compute and storage option to identify the configuration for each workload that will deliver the performance you're seeking. This task involves data analysis, performance benchmarking, and predictive analytics. Without automation, it's incredibly complex and time-consuming.

Once you find the compute and storage options for each workload that match performance requirements, you can choose the least expensive of them. Doing this for every workload ensures you meet performance requirements at the lowest possible cost, and therefore provides your clients with their most precise right-sized cloud.
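
A minimal sketch of that matching step, using a first-order projection in which total busy core-time stays constant across instance sizes. The instance names, specs, and prices are illustrative placeholders, not AWS figures, and real projections should also account for per-core performance differences between CPU generations:

```python
from dataclasses import dataclass

@dataclass
class InstanceOption:
    name: str
    vcpus: int
    memory_gib: float
    hourly_price: float

# Placeholder catalog; substitute real instance types and prices.
CATALOG = [
    InstanceOption("small-4", 4, 16.0, 0.20),
    InstanceOption("medium-8", 8, 32.0, 0.40),
    InstanceOption("large-16", 16, 64.0, 0.80),
]

def projected_cpu(peak_util_pct: float, current_cores: int,
                  candidate_cores: int) -> float:
    """First-order projection: total busy core-time stays constant across sizes."""
    return peak_util_pct * current_cores / candidate_cores

def right_size(peak_util_pct: float, current_cores: int, peak_ram_gib: float,
               target_pct: float = 60.0) -> InstanceOption:
    candidates = [
        i for i in CATALOG
        if projected_cpu(peak_util_pct, current_cores, i.vcpus) <= target_pct
        and peak_ram_gib <= i.memory_gib
    ]
    return min(candidates, key=lambda i: i.hourly_price)

# An 8-core box peaking at 25% CPU with 14 GiB peak RAM right-sizes to
# "small-4": projected CPU = 25 * 8 / 4 = 50%, under the 60% target.
print(right_size(25.0, 8, 14.0).name)
```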

As an example, in the following chart, the target CPU threshold is 60%. The dark blue line is the observed CPU utilization on the on-premises box, which has 8 cores. If the company purchases an instance with 8 cores, CPU utilization sits at 25%, significantly over-provisioned relative to the 60% target.

[Figure: like-to-like compute comparison]

As shown in the following chart, the company can reduce the core count with a smaller instance size, bringing CPU utilization up to 45%. Reducing the cores any further would push utilization past the performance target. In this case, the m2.xlarge instance gives them 33% cost savings. Their CPU and memory are just the right size, and they can purchase more capacity when needed in the future.

[Figure: right-sized compute selection]

Step 3: Find the Best Pricing Plan

Armed with an in-depth performance profile, you will be able to choose the pricing plan best matched to your client's specific needs, which can cut costs significantly. For example, AWS offers an on-demand pricing plan and different types of reservation plans. Reserved Instance (RI) plans range from 1-year no-upfront to 3-year all-upfront commitments and can provide savings from 15% all the way up to 75% relative to on-demand pricing.
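
A minimal sketch of the comparison, with placeholder discount rates chosen within the 15-75% range above (real RI discounts vary by instance type, region, and term):

```python
# Placeholder discount rates relative to on-demand; not quoted AWS prices.
PLANS = {
    "on-demand": 0.00,
    "1yr-no-upfront": 0.15,
    "3yr-all-upfront": 0.60,
}

HOURS_PER_YEAR = 8760

def annual_cost(hourly_on_demand: float, plan: str) -> float:
    """Effective annual cost under a plan's discount, assuming steady usage."""
    return hourly_on_demand * HOURS_PER_YEAR * (1 - PLANS[plan])

for plan in PLANS:
    print(f"{plan}: ${annual_cost(0.40, plan):,.0f}/yr")
```

Note that reservations pay off only for steady workloads; a workload that can be shut down most of the day may be cheaper on-demand.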

Step 4: Calculate TCO

After identifying the compute and storage configuration for each workload that meets performance requirements at the lowest cost, sum those costs to calculate the TCO of moving to the cloud. This TCO will be the most accurate, and the least expensive, given that it's based on your client's right-sized cloud configuration. You can then have your TCO account for the best pricing plan option to lower costs further.
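
Putting the earlier sketches together, the roll-up might look like the following. right_size(), cheapest_volume(), and annual_cost() are the hypothetical helpers from Steps 1-3, and the per-workload dictionary keys are placeholders:

```python
def workload_annual_cost(w: dict) -> float:
    """Cheapest qualifying compute + storage cost for one workload, per year."""
    instance = right_size(w["peak_cpu_pct"], w["cores"], w["peak_ram_gib"])
    volume = cheapest_volume(w["peak_iops"], w["peak_mbs"], w["used_gib"])
    compute = annual_cost(instance.hourly_price, w["pricing_plan"])
    storage = volume.price_per_gib_month * w["used_gib"] * 12
    return compute + storage

def cloud_tco(workloads: list[dict]) -> float:
    """Right-sized TCO: the sum of the cheapest qualifying cost per workload."""
    return sum(workload_annual_cost(w) for w in workloads)
```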

Case Study: Right-Sizing = Big Cost Savings in TCO Calculation

Precise, right-sized TCO calculations can accelerate cloud adoption, and here's a great example.

For one company, right-sizing its cloud using a comprehensive performance analysis and usage patterns produced a TCO calculation nearly 40% lower than one based on simply forklifting its on-premises environment into the cloud without changing any hardware requirements.

This large asset management company was planning to migrate 840 servers and 180 applications to AWS. If the company were to forklift its environment into the cloud without changing any hardware requirements, its annual cloud costs would be $4.2 million.

[Figure: like-to-like migration cost]

If the company were to right-size its compute and storage resources in the cloud based on its workload performance profiles, its annual cloud cost would be $2.6 million, a 38% cost savings.

[Figure: workload-optimized cost]

If the company then purchased 3-year RIs to optimize costs further, its cost would be $1.7 million, a 60% cost savings.

[Figure: cost with 3-year RI purchase]

If the company further optimized its environment for cloud elasticity, such as turning off instances when not in use or employing autoscaling, it could realize 74% cost savings.

[Figure: cost with improved elasticity]

In the case of this company, its on-premises cost was $5 million. Simply moving its infrastructure to the cloud without any modifications would already have reduced its infrastructure costs. But the right metrics and an accurate TCO calculation showed that, relative to that forklift estimate, it could save an additional 38-74% annually.
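
As a quick check of the arithmetic, with all savings measured against the $4.2 million forklift estimate:

```python
forklift = 4.2e6  # annual cost of a like-for-like forklift migration

for label, cost in [("right-sized", 2.6e6), ("with 3-year RIs", 1.7e6)]:
    print(f"{label}: {1 - cost / forklift:.0%} savings")
# right-sized: 38% savings; with 3-year RIs: 60% savings

# At 74% savings, the elasticity-optimized cost is roughly $1.1M:
print(f"elasticity-optimized: ${forklift * (1 - 0.74) / 1e6:.1f}M")
```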

Today’s Companies Want Systems Integrators and Sales Professionals to “Show Their Work”

One of the biggest challenges with high-level calculations is that companies migrating their infrastructure may struggle to trust a calculation when they can't tell how it was derived. In addition to producing inflated figures, many high-level calculators don't explain exactly why the TCO calculation is what it is. Recommendations sourced from a third-party application can lend additional credibility to cloud calculations.

Ultimately, systems integrators and sales professionals can accelerate cloud adoption with the detailed, precise data that gives clients the confidence and insights they need to move forward.

See a Sample Analytics Report Generated From the Cloudamize Platform
