Lays out the case for how virtualization can save very significant amounts of energy, especially in large data centers. Breaks down resource requirements in terms of RAM, storage, and ultimately cooling for a standalone server configuration and a comparable virtualized configuration, and builds a case for why and how virtualization can significantly reduce a data center's energy usage.

by Steve Carl, Senior Technologist and Green IT Spokesperson for BMC Software. Read his blog, Green IT. Follow him on Twitter @stevecarl.

In all the news and (let us admit it) hype around virtualization, one thing about it is generally accepted as true: Virtualization saves power. If you spend less on power, you therefore also save money, and emit less CO2.

It's intuitively obvious. But based on our current technology, what does that actually look like? Is it also measurably obvious? I have talked here before about some of the false paths intuition can lead one down… How much are we really saving? What is the ROI?

I want to build this out carefully, and show the assumptions and factors being used, so that the approach can be carried over to other situations and platforms. This is a bit dry, and has math and stuff, but there is a pot of gold at the other end!

First of all: What is being replaced?

The Source

The root cause of why virtualization works is that computers are fantastically underused most of the time. My Linux laptop here, the one I am writing this on, has 4 processors and 8 GB of RAM. Writing this blog, with email up, Firefox up, a virtual machine running MS Windows, and a few things like weather widgets and inbox monitors, uses about 8% CPU on average. 58% of the memory is in use, and most of that is dedicated to the virtual machine.

If the VM were not up, I would be using far less of everything. This laptop has a 130 watt power supply. If I were replacing my Linux PC over there (see it? The one on the left…), it has 6 GB of RAM that is 20% in use (the VM is not up) and the CPU is averaging about 5%. It has a 305 watt power supply rated at 76% efficiency. The older PCs we have in some of the R&D data centers, acting as servers, have up to 450 watt power supplies.

Figuring out how much you are using, and of which resources, is always the first problem in virtualization if you have to do the ROI ahead of buying the infrastructure, which of course you do. You cannot just jump up and down with money in your hand and sing about how virtualizing will save money. They will not believe you. They might lock you up.

If you have no idea at all how much power your systems might be using, you can add up the wattages of all the power supplies and multiply that by 0.6 to get in the ballpark. I talk about this diversity factor in my “By the Numbers” post.

Here is where having a good performance and capacity planning strategy pays off: If you know how utilized these systems are, and how much power they use, you can figure all this out in the tool, or at least in a spreadsheet using data from the tool. Money people like spreadsheets.
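To make that concrete, here is a minimal sketch of the ballpark estimate; the inventory of nameplate wattages is made up for illustration, so substitute your own:

```python
# Ballpark power estimate: sum of nameplate wattages times a 0.6
# diversity factor, since servers rarely draw their full rating.
DIVERSITY_FACTOR = 0.6

# Hypothetical inventory of power supply ratings, in watts.
nameplate_watts = [450, 305, 305, 130]

ballpark_watts = sum(nameplate_watts) * DIVERSITY_FACTOR
print(f"Estimated draw: {ballpark_watts:.0f} watts")  # 714 watts
```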

The Target

You know what you have; now you need to decide where you want to go, both hypervisor- and hardware-wise. It matters which chipsets you use: for example, in x86 space, the latest ones from AMD and Intel have many features that help bring virtual machine performance to nearly native levels of execution. One or two generations back, AMD was ahead in the virtualization assist department. POWER7 is better at virtualization than POWER6. The T3 is better than the T2 because it has more threads and memory slots. Etc.

For x86 we use a mix of hypervisors (KVM, Xen, VMware, Hyper-V, etc.) and servers (Dell, Cisco UCS, Sun X series, etc.) here, because we do R&D and we support a wide range of platforms. Almost any virtualization you do will end up saving power. Your exact numbers depend on your choices, and on whether you are able to convince people that fewer, larger systems use less power than more, smaller ones, and that the up-front acquisition costs have short or at least medium term cost recovery in them.

For ease of internal pricing and provisioning, we classify our virtual machines into several categories:

  • Small: 1 VCPU, 2 GB RAM, 40 GB of disk space
  • Medium: 1 VCPU, 4 GB RAM, 60 GB of disk space
  • Large: 2 VCPUs, 8 GB RAM, 100 GB of disk space
  • X-Large: 4 VCPUs, 8 GB RAM, 180 GB of disk space

We can also do custom versions but these are the standard sizes. Each one has a different cost allocation, so R&D can look at their budget and then pick what sizes they need from there.

What is key here is knowing how many of any given size of these we can fit on a VM server.

The Target: RAM It

Most of the time, unless you are doing something obviously CPU intensive like analyzing seismic data or crunching SETI results, the key is RAM. Buy as much of it as you can, then buy some more. For this example, we’ll use the Dell R810 with 256GB of RAM.

The R810 is a nice green server. Two redundant 1100 watt power supplies. 2U of rack space. It can go up to 512 GB of RAM, although that means using very expensive DIMMs, so 256 GB is a good compromise between price and density (Please, future people reading this: this was state of the art in 2011. Try not to laugh at our puny memory configs. We know that you’ll have that in your Android phones soon…).

Memory is always our limiting factor. On average, across our 10,000+ R&D VMs, the CPU will be at about 50% and the memory over 80% utilized. That makes it easy to figure out how many VMs of any given size will fit on a server (see the sketch after this list). For our example R810:

  • Small: 124
  • Medium: 62
  • Large: 31
  • X-Large: 31
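
Here is a sketch of that fit calculation. The 8 GB held back for the hypervisor is my assumption to make the arithmetic land on the numbers above; tune it for your hypervisor of choice:

```python
# VMs per server when RAM is the limiting factor.
SERVER_RAM_GB = 256
HYPERVISOR_RESERVE_GB = 8  # assumed overhead for the hypervisor itself

SIZES_RAM_GB = {"Small": 2, "Medium": 4, "Large": 8, "X-Large": 8}

def vms_that_fit(vm_ram_gb: float) -> int:
    usable_gb = SERVER_RAM_GB - HYPERVISOR_RESERVE_GB
    return int(usable_gb / vm_ram_gb)

for size, ram_gb in SIZES_RAM_GB.items():
    print(f"{size}: {vms_that_fit(ram_gb)}")
# Small: 124, Medium: 62, Large: 31, X-Large: 31
```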

Knowing this, I need one other data point to figure out my first pass at watts per VM: what will the average virtual machine size be? Not everyone will buy the same size VM. It depends on what they are doing, what OS they are running, whether there is an RDB inside there, and all sorts of similarly unpredictable things.

Again, here is where that capacity planning pays off.

Our numbers of interest here are 1.4 virtual CPUs and 3.3 GB of RAM on average per VM. Allowing some RAM for the hypervisor, that means I can run 75 VMs of average size on our target R810.
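
The same RAM math with the fleet average, as a one-line sketch (again assuming the 8 GB hypervisor reserve):

```python
print(int((256 - 8) / 3.3))  # 75 average-size VMs per R810
```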

Taking into account the diversity factor, the standalone computers here consume on average 314 watts each (523 watts × 0.6 diversity factor), or a total of 23,550 watts for 75 of them. Even if the R810 were using all 1,100 watts (which it isn’t), it is clear that the power reductions look promising. Intuition may be right after all.
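
A sketch of that comparison, using the figures above:

```python
# 75 standalone servers at diversity-adjusted draw vs. one R810 at full rating.
AVG_NAMEPLATE_W = 523   # average standalone power supply rating, from our data
DIVERSITY_FACTOR = 0.6
SERVER_COUNT = 75
R810_RATING_W = 1_100

per_server_w = round(AVG_NAMEPLATE_W * DIVERSITY_FACTOR)    # 314 watts
print(f"Standalone fleet: {per_server_w * SERVER_COUNT} W") # 23550 W
print(f"One R810, full rating: {R810_RATING_W} W")          # 1100 W
```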

The Net of it

This also means about 5 Ethernet ports per R810 rather than 75: 4 regular Ethernet ports, plus a management port. We’ll use one to go off into a private network for the iSCSI storage, one for VC management, and two for general VM traffic. If you were using 24 port switches, this drops you from 4 switches down to 2: one for the public network, and one for private iSCSI traffic. A network switch only uses about 100 watts though, so that reduction is only about 400 watts down to 200. Not huge. Of course the second R810, and the third, and the fourth don’t need new network switches either.

The power reductions here are not dramatic, but the capital outlay reduction is. I bring it up only to drop it from the discussion, since I am also not going to look at the floor space reductions, the DC size reductions, the fewer lights it will take inside the new smaller DC, etc.

Related post: “Is Virtualization a Valid ‘Green’ Technology for Emerging Companies?” looks at how virtualization is a green technology that is suitable for small and medium-sized businesses to adopt.

Storage Story

A standalone server powers not just its memory and CPU with its internal power supply, but its internal disk as well. We could put disks in our example R810: easily enough to hold 75 VMs, but that does not scale out. A real VM deployment of any size is going to need external, sharable disks.

I need a watts-per-GB figure, and an average number of GB per VM, to get to the next step of this story. Since we are talking about iSCSI and Dell gear here, I’ll keep it in that range and figure out the watts per GB for EqualLogic. Our standard config for that is:

  • PS6000X: quantity 1, for faster storage: 511 watts (computed from max BTU per hour rating)
  • PS6000E: quantity 2, for less-accessed data: 456 watts × 2 = 912 watts (computed from max BTU per hour rating)

Total GB in RAID 50 with hot spares: 31,200. Total watts: 1,423. Watts per GB: 0.046.

I can now use the standard and average sizes to compute that (see the sketch after this list):

  • Small: 40 GB: 1.84 watts
  • Medium: 60 GB: 2.76 watts
  • Large: 100 GB: 4.6 watts
  • X-Large: 180 GB: 8.3 watts
  • Average VM GB allocation: 58.3 GB: 2.7 watts
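
A sketch of the storage arithmetic, using the array figures above (rounding watts per GB to 0.046, as in the text):

```python
# Watts per GB for the shared iSCSI arrays, then per-VM storage power.
TOTAL_GB = 31_200        # RAID 50 with hot spares
TOTAL_W = 511 + 456 * 2  # one PS6000X + two PS6000E = 1,423 watts

watts_per_gb = round(TOTAL_W / TOTAL_GB, 3)  # 0.046

DISK_GB = {"Small": 40, "Medium": 60, "Large": 100,
           "X-Large": 180, "Average": 58.3}
for size, gb in DISK_GB.items():
    print(f"{size}: {gb * watts_per_gb:.2f} watts")
# Small: 1.84, Medium: 2.76, Large: 4.60, X-Large: 8.28, Average: 2.68
```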

A reminder at this point: when I use averages here, these are our averages, derived from our performance data. And a caution: these VM sizes are ones we picked based on our studies of what our internal customers needed. Your sizes and mileage may vary, but the technique for figuring out this ROI stays the same.

Keeping it cool

We were being fair to the standalone server in the last section, because its power supply had to power its storage. That seemed only right, since this section will not be a happy one for the standalone server.

HVAC

The standalone machine would more than likely prefer we forgot about it.

A watt of power generates about 3.4 BTU per hour of heat. The 23,550 watts for the standalone servers is 80,070 BTU per hour that needs to be cooled back out of the room. The R810 running 75 VMs plus storage is going to be about 1,200 watts, or 4,080 BTU per hour.
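
That conversion, sketched:

```python
BTU_HR_PER_WATT = 3.4  # rounded from 3.412 BTU per hour per watt

def watts_to_btu_hr(watts: float) -> float:
    return watts * BTU_HR_PER_WATT

print(watts_to_btu_hr(23_550))  # 80070.0 BTU/hr for the standalone fleet
print(watts_to_btu_hr(1_200))   # 4080.0 BTU/hr for the R810 plus storage
```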

So, what is a good number for how many watts of HVAC are needed per BTU to be dealt with?

It varies. A lot.

Are you doing hot and cold aisles? Are your HVAC units maintained? How new are they? Do you have any option to use outside air to cool your DC? Is free cooling an option?

On the other hand, these standalone servers and this virtual host are sitting in the same DC, and whichever number we find will be used for both, so it stays fair even if it is not 100% accurate for any given situation.

Common wisdom is that data center HVAC takes an additional 50-60% of whatever the power draw of the DC is. If the DC is using 100 KW, then the HVAC is using another 50-60 KW. Let’s go with the lower number for this, to assume that the data center is slightly more modern and is using more efficient HVAC. I tried to get a look at the power nameplates for our 10 and 20 ton units here, but they are hidden away against the wall, or we would have a real number to use.

50% makes for easy math though (see the sketch after this list):

  • 75 standalone servers: 23,550 watts + 11,775 HVAC watts = 35,325 watts (about 35 KW)
  • 1 R810 with 75 VMs + storage: 1,200 watts + 600 HVAC watts = 1,800 watts (about 2 KW)
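
A sketch of that overhead math; note the R810 figure is its 1,200 watt draw from above, not the BTU number:

```python
HVAC_OVERHEAD = 0.5  # HVAC adds roughly 50% on top of the IT load

def total_with_hvac(it_watts: float) -> float:
    return it_watts * (1 + HVAC_OVERHEAD)

print(total_with_hvac(23_550))  # 35325.0 watts for the standalone fleet
print(total_with_hvac(1_200))   # 1800.0 watts for the R810 plus storage
```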

Round Numbers and the Pot of Gold

The standalone servers are using nearly twenty times as much power, and that directly maps to nearly twenty times as much money and CO2. This does not even count that 75 servers would need three or four racks, versus a 2U slot plus 12U for the shared storage. What does that look like in terms of money and CO2?

Money today: rates vary from country to country and coast to coast. Here in the US they range from 8.2 to 16 cents per kilowatt-hour across our offices. Park your DC next to a hydroelectric dam like Google did and you can probably do better.

Non-leap years have 8,760 hours in them, and we’ll look at a three year lifecycle (26,280 hours), so:

| 3 Years Gold… err.. ROI | 8.2 cents per kWh | 16 cents per kWh |
| --- | --- | --- |
| Per 1 KW | $2,155.00 | $4,205.00 |
| Standalone power price (35 KW) | $75,425.00 | $147,175.00 |
| VM power price (2 KW) | $4,310.00 | $8,410.00 |
| Cost savings | $71,115.00 | $138,765.00 |

Note that I used the *full* rating of the virtualization server’s power for this, and applied a diversity factor to the physical servers, giving another slight advantage to the physical over the virtual… and still it came out this way. This is only one server, so it does not matter that much, but think about this same math applied across 10,000 real machines that later became virtual machines. Multiply the above numbers by about 133… big power reductions. Big 3 year ROI.
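
The whole table reduces to one multiplication; here is a sketch (it computes from the raw rates, so the results differ from the rounded table by a dollar or two):

```python
HOURS = 3 * 8_760  # 26,280 hours in three non-leap years

def three_year_cost(kw: float, rate_per_kwh: float) -> float:
    return kw * rate_per_kwh * HOURS

for rate in (0.082, 0.16):
    standalone = three_year_cost(35, rate)
    vm = three_year_cost(2, rate)
    print(f"{rate * 100:g} cents: standalone ${standalone:,.0f}, "
          f"VM ${vm:,.0f}, savings ${standalone - vm:,.0f}")
# 8.2 cents: standalone $75,424, VM $4,310, savings $71,114
# 16 cents: standalone $147,168, VM $8,410, savings $138,758
```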

Also note that I used an all-Dell example here: the math applies to Cisco UCS or IBM X series or HP DLs… You can even use the same approach for LDOMs and LPARs and IVMs.

You just have to plug in the wattages and figure out how many VMs of a typical size can run on a given model.

Yeah: just that. OK: did I mention yet that having a capacity planning capability is key here?

Check out our related post, “Top 10 Things Data Centers Forget About PUE,” which points out ten areas that are not currently captured by Power Usage Effectiveness (PUE) metrics, some of which have significant implications for PUE measurements.


Author: Steve Carl

Steve Carl has been working in the IT industry in a wide variety of roles for three decades. Currently he is Senior Technologist and Green IT Spokesperson for BMC Software, where he has worked for the last 22 years. His work spans data center design and implementation, technology solutions for R&D, and performance and capacity planning (including not just traditional performance metrics but also thermal and power performance). He is the author of two customer-facing blogs for BMC: “Adventures in Linux,” where he chronicles his work in both data center and desktop Linux use in the enterprise, and “Green IT,” which covers his work in BMC’s green data center efforts. Follow him on Twitter @stevecarl.