I started reading Eric Siebert's book this afternoon - Maximum vSphere: Tips, How-Tos, and Best Practices for Working with VMware vSphere 4 (which is a great book - highly recommended!!), and for some reason, during the part where he was discussing licensing and features, a thought crossed my mind.
I put out a feeler this evening on Twitter with this question:
Quick twitter poll - what is your average CPU usage on your ESX hosts? <25% - <50% - <75% - >75% - Interested to hear...— Maish Saidel-Keesing (@maishsk) October 30, 2010
All the answers I received pointed to the same conclusion.
The constraint that almost everyone hits first is RAM, not CPU. Some admins cannot expand the amount of RAM in their servers, because the cost of the larger DIMMs is too high and there are not enough slots left in the server. That leaves them with servers that are nearing memory capacity, but nowhere close to utilizing the CPU power of the server.
Many people purchase dual-socket servers for redundancy, or out of fear that the server will not perform well enough otherwise.
From my own environment I can say that my hosts are utilizing around 30% of their CPU, with two quad-core CPUs. And from the answers I got tonight to my question above, the results are pretty much the same.
Now perhaps a sacrilegious thought: what would happen if we used only one physical processor in a server?
Today we are talking about six- or eight-core processors, and this number is rising. A single such socket offers more or less the same number of cores as what the majority of people are using today: 8 cores from 2 x quad-core processors.
Now you might say: but then I lose the redundancy. That could be true, but how many of you have actually lost a CPU to a malfunction in a server? I personally have not. Ever. I would also suppose that if a physical CPU barfs on you during a production workload, it will not be pretty. The VMs that were running on that processor will obviously keel over and die, but I suspect the rest of the host will not be happy either. From my experience with faulty memory, you are more likely to crash the whole host with a PSOD than to have the host carry on with one DIMM less. I would guess that with a CPU it will probably be the same. So having redundant CPUs does not really cover it. I could be wrong here, and if so I would appreciate your feedback with more information.
Now I am sure there are other implications here, regarding the spread of memory and load across the memory channels of both processors, and I am also sure there are other internal ESX performance implications as well. So it is not a simple matter.
How would this change the game, though? Well, it would cut costs in two ways.
- Licensing. ESX licenses are counted per processor, not per pair of processors. Removing one processor will cut ESX host licensing costs in half.
- Server hardware. With one processor less, you cut the cost of each server as well.
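To make the two savings above concrete, here is a back-of-the-envelope sketch. All of the prices (license cost per socket, CPU cost, base server cost) are hypothetical placeholders I made up for illustration, not real quotes:

```python
# Back-of-the-envelope cost comparison: dual-socket vs. single-socket host.
# All prices below are hypothetical placeholders, not real vendor quotes.

LICENSE_PER_SOCKET = 3000   # assumed ESX license cost per processor
CPU_COST = 1200             # assumed cost of one quad-core processor
BASE_SERVER_COST = 5000     # assumed chassis + RAM + NICs, same in both cases

def host_cost(sockets):
    """Total cost of a host with the given number of populated sockets."""
    return BASE_SERVER_COST + sockets * (CPU_COST + LICENSE_PER_SOCKET)

dual = host_cost(2)
single = host_cost(1)
print(f"Dual-socket host:   ${dual}")
print(f"Single-socket host: ${single}")
print(f"Savings per host:   ${dual - single}")
```

Whatever the real numbers are in your shop, the shape of the result is the same: each socket you drop saves you one license plus one CPU, while the base cost of the server stays fixed.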
So are we destined to run only 1-socket ESX hosts? I would be interested in hearing your thoughts and insights on this one.