Cost is frequently the deciding factor in IT decisions, rather than pure functionality or performance. Cloud IaaS and PaaS solutions are still relatively new, and it can be difficult for IT leaders to understand what these solutions actually cost as adoption grows.
First, some brief IT history:
Let's deploy a mainframe, and have many users share it. That'll be cost-efficient!
x86 servers are the way to go. Much cheaper than those mainframe dinosaurs!
Server virtualization is the way to fly. We can use all of that compute capacity in one physical server, and save money!
Migrate everything to the cloud. Pay for only what you use and save money!
Despite each generation's breathless claims, every great shift in enterprise IT has brought both new efficiencies and new costs.
So what typically drives the cost side of the coin in public cloud IaaS deployments?
The pay-per-use model of IaaS platforms allows us to do some great things (automatic scaling, rapid deployment of services, advanced application functionality without deep infrastructure expertise, and more). The "catch," however, is that these compute, storage, and network resources carry a per-use cost, charged in small fractions that can quickly add up if not monitored and controlled.
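To make "small fractions add up" concrete, here is a minimal Python sketch that projects a monthly bill from per-hour rates. Every role, count, and rate in it is hypothetical, not any provider's actual pricing.

```python
# Minimal sketch: how small per-hour charges accumulate across a fleet.
# Every role, count, and rate below is hypothetical -- not real provider pricing.

HOURS_PER_MONTH = 730  # average hours in a month

fleet = {
    "web":    (8, 0.096),   # (instance_count, hourly_rate_usd)
    "worker": (4, 0.192),
    "db":     (2, 0.384),
}

total = 0.0
for role, (count, rate) in fleet.items():
    monthly = count * rate * HOURS_PER_MONTH
    total += monthly
    print(f"{role:>6}: {count} x ${rate:.3f}/hr = ${monthly:,.2f}/month")

print(f" total: ${total:,.2f}/month")
```

A dime or two per hour looks trivial on a rate card, but left running around the clock across a modest fleet, it compounds into a four-figure monthly line item.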
The human IT tendency to deploy server instances in response to a new resource request starts the costing clock. The old physical server sprawl became virtual machine sprawl, which can quickly become public cloud usage sprawl. Unfortunately…
The next incremental virtual machine deployed in your traditional private cloud infrastructure typically has a near-zero immediate incremental cost. Assuming some headroom in available resources, you can usually deploy a number of additional virtual machines before triggering the next server or storage upgrade. And when that additional capacity is required, the cost isn't attributed to the specific workload that created the need, since those investments also account for future capacity. Additionally, unless you have a system of resource monitoring and chargebacks for your private infrastructure, many resources, including:
Power and cooling
Storage TB and IOPS
Data networking out of your server subnets
Differing levels of hardware redundancy
Varying levels of data restoration capabilities
System monitoring
Message queuing, caching applications, etc.
...are generally charged as "overhead," rather than allocated to a specific application or business group.
Each of the above certainly has hard dollar costs, but they're generally spread across the enterprise IT budget rather than tracked on a per-server or per-application basis. More importantly, these costs typically hit the IT department budget in a "chunky" fashion, rather than in small, real-time increments.
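A toy model makes the contrast concrete. The Python sketch below compares the two cost patterns; every figure in it is made up for illustration, not drawn from real pricing or hardware costs.

```python
# Minimal sketch of "chunky" private-infrastructure cost vs. incremental
# pay-per-use cost. All figures are hypothetical and purely illustrative.
import math

HOURS_PER_MONTH = 730

def private_monthly_cost(vm_count, vms_per_host=20, host_monthly_cost=1500.0):
    """Capacity arrives in whole hosts, so cost steps up every vms_per_host VMs."""
    return math.ceil(vm_count / vms_per_host) * host_monthly_cost

def cloud_monthly_cost(vm_count, hourly_rate=0.10):
    """Pay-per-use: every VM adds cost immediately, in small increments."""
    return vm_count * hourly_rate * HOURS_PER_MONTH

for vms in (1, 10, 20, 21, 40, 41):
    print(f"{vms:>3} VMs: private ${private_monthly_cost(vms):>8,.2f}"
          f"  vs. cloud ${cloud_monthly_cost(vms):>8,.2f}")
```

The private cost is flat until a capacity threshold is crossed, then jumps; the cloud cost climbs a little with every VM, every hour. Neither is inherently cheaper, but they demand very different monitoring habits.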
Unfortunately, when we look at a typical client's internal IT management tool belt, we rarely see the capability to track, analyze, allocate, and prevent the incremental costs mentioned above. When clients make their initial public IaaS deployments, they're presented with monthly billing that can be:
Difficult to decipher
Reflecting what happened, rather than what's going to happen
Not tied to an objective measure of whether the costs are truly necessary and authorized
A lack of real-time information, combined with hard-to-understand historical reporting, can erode the business's trust that IT can control its budget and manage spend efficiently.
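One practical countermeasure is consistent resource tagging plus a simple allocation pass over the bill. The Python sketch below assumes a hypothetical line-item format (real provider billing exports differ) and shows the basic idea: roll spend up by owner tag and flag anything no one has claimed.

```python
# Minimal sketch: allocate billing line items to business groups by tag and
# flag unclaimed spend. The line-item format is hypothetical -- real provider
# billing exports differ, but the approach is the same.
from collections import defaultdict

line_items = [
    {"service": "compute", "cost": 412.50, "tags": {"team": "web"}},
    {"service": "storage", "cost": 180.25, "tags": {"team": "analytics"}},
    {"service": "egress",  "cost":  96.40, "tags": {}},  # untagged!
    {"service": "compute", "cost": 233.10, "tags": {"team": "web"}},
]

by_team = defaultdict(float)
for item in line_items:
    team = item["tags"].get("team", "UNALLOCATED")
    by_team[team] += item["cost"]

for team, cost in sorted(by_team.items(), key=lambda kv: -kv[1]):
    flag = "  <-- no owner, investigate" if team == "UNALLOCATED" else ""
    print(f"{team:>12}: ${cost:,.2f}{flag}")
```

Run on a regular schedule, even a rollup this simple turns an indecipherable monthly bill into a conversation with specific business owners about specific spend.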
Planning for and controlling public cloud IaaS costs can be challenging. Costing cloud solutions is different from the way IT departments typically control spend and budget future costs. It pays in the long run to understand what you're getting into before migrating services to the cloud.
West Monroe works with our clients on important decisions around cloud hosting. Contact us for more information.