It is less than a month to VMworld 2009 at the Moscone Center in San Francisco (August 31-September 3, 2009). A great conference to start with, and it is picking up more and more momentum as we draw closer.
This is the main virtualization event of the year, and the only one for another 14 months, seeing that VMware has decided to “consolidate” the European and US events back-to-back - which I personally think is not a good idea (but enough has been said in the blogosphere about this subject).
What I would like to discuss is what we can expect at this conference. vSphere was released less than 3 months ago, and the latest family of products, including AppSpeed, Lab Manager, and Chargeback, was announced just 3 weeks ago.
Speculation from Sven Huisman about releasing vCenter for free was an interesting idea that came up not long ago.
I will add a speculation of my own about one of the features that I think will be announced at the upcoming conference.
I am talking about Storage DRS.
Storage vendors today are incorporating SSDs (Solid State Disks) into their products. The benefits of SSDs are no small thing. In a presentation I saw last week from EMC, I was shown a graph comparing what was needed to achieve a given level of disk I/O from a storage array. The ratio of Solid State to conventional disks required for the same performance was approximately 1:10. True, at the moment you would not save much on pricing, because SSDs are much more expensive, but on power, cooling, space, and other overhead, I am sure you can see where the savings will come from.
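To make that 1:10 ratio concrete, here is a back-of-the-envelope calculation. The per-drive IOPS and wattage figures below are my own assumptions for illustration (roughly early-2009-class hardware), not numbers from the EMC presentation:

```python
import math

# Assumed per-drive figures - illustrative only, not vendor data
HDD_IOPS, HDD_WATTS = 180, 15     # a 15K spindle
SSD_IOPS, SSD_WATTS = 1_800, 6    # an enterprise SSD, ~10x the IOPS

TARGET_IOPS = 25_000              # the level of disk I/O we want from the array

hdd_count = math.ceil(TARGET_IOPS / HDD_IOPS)
ssd_count = math.ceil(TARGET_IOPS / SSD_IOPS)

print(f"Conventional: {hdd_count} drives, ~{hdd_count * HDD_WATTS} W")
print(f"SSD:          {ssd_count} drives, ~{ssd_count * SSD_WATTS} W")
```

Under these assumptions you need roughly ten times as many spindles as SSDs, and the power draw (before even counting cooling and rack space) falls accordingly.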
DRS is a built-in feature (now available only from the Enterprise Plus edition and up) that balances your virtual machines according to the load on the ESX hosts, allowing for overall better performance for all your VMs and your clusters. The benefits and increase in performance (15%-47%) can be seen, for example, in this article on VROOM. The only variables taken into account are RAM and CPU.
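The balancing idea is simple to sketch. The toy heuristic below is in the spirit of DRS but is not VMware's actual algorithm: while the spread between the busiest and idlest host exceeds a threshold, migrate the smallest VM off the busiest host (here only CPU is modeled; real DRS also weighs RAM):

```python
from dataclasses import dataclass, field

@dataclass
class Host:
    name: str
    cpu_mhz: int = 0                         # total CPU demand currently on the host
    vms: list = field(default_factory=list)  # (vm_name, cpu_demand_mhz) tuples

def rebalance(hosts, threshold=1000):
    """Migrate VMs until the busiest/idlest spread is within the threshold."""
    moves = []
    while True:
        busiest = max(hosts, key=lambda h: h.cpu_mhz)
        idlest = min(hosts, key=lambda h: h.cpu_mhz)
        gap = busiest.cpu_mhz - idlest.cpu_mhz
        if gap <= threshold or not busiest.vms:
            return moves
        vm = min(busiest.vms, key=lambda v: v[1])  # smallest VM on the busiest host
        if vm[1] >= gap:                           # moving it would not narrow the gap
            return moves
        busiest.vms.remove(vm)
        busiest.cpu_mhz -= vm[1]
        idlest.vms.append(vm)
        idlest.cpu_mhz += vm[1]
        moves.append((vm[0], busiest.name, idlest.name))

# Example: one overloaded host, one nearly idle host
esx1 = Host("esx1", 5000, [("web01", 2000), ("db01", 2500), ("tools", 500)])
esx2 = Host("esx2", 500, [("mgmt", 500)])
rebalance([esx1, esx2])  # after this, the two hosts are within the threshold
```

Each move strictly reduces the spread between the two hosts involved, so the loop always terminates.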
Back to Storage DRS: what if you could, on a defined schedule or policy, Storage vMotion your VMs to faster storage to allow for better performance during peak times, and move them back to slower storage when the peak is over?
Let us take the following scenario as an example. I have a storage array with 3 different kinds of disks: 7200 RPM SATA, Fibre Channel disks, and Solid State drives. These drives are layered in 3 different tiers: Budget, Standard, and Premium (SATA, Fibre Channel, and SSD, accordingly).
VM1 runs a front-end application that needs x amount of I/O. Your client comes and tells you that during the day, between the peak hours of 12:00-16:00, the application is slow. After testing and monitoring the performance, you see that during those peak hours the amount of disk I/O that this VM needs increases by 200%, and your lower-end storage is the bottleneck.
You now have two options:
- Move the VM to faster storage - thereby solving your bottleneck problem, but in doing so you are allocating the VM faster storage that it does not need for 20 out of the 24 hours in the day.
- Use the built-in tools - to set a policy that Storage vMotions the VM to the Premium tier when disk I/O becomes the bottleneck, and moves it back to the Budget tier when things calm down.
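The second option could be sketched as a simple policy engine. Everything below is hypothetical - the peak window, the IOPS threshold, and the `storage_vmotion` callback are stand-ins for whatever a real vCenter plug-in would expose:

```python
from datetime import time

# Assumed policy parameters - illustrative only
PEAK_START, PEAK_END = time(12, 0), time(16, 0)
IOPS_THRESHOLD = 500

def target_tier(now, observed_iops):
    """Decide which storage tier the VM should live on right now."""
    in_peak = PEAK_START <= now <= PEAK_END
    if in_peak or observed_iops > IOPS_THRESHOLD:
        return "Premium"
    return "Budget"

def enforce(vm, now, observed_iops, storage_vmotion):
    """Relocate the VM if its current tier does not match the policy."""
    tier = target_tier(now, observed_iops)
    if vm["tier"] != tier:
        storage_vmotion(vm, tier)  # hypothetical migration call
        vm["tier"] = tier
    return vm["tier"]

# Usage: at 13:00 with I/O well above the threshold, VM1 is moved to Premium
migrations = []
vm1 = {"name": "VM1", "tier": "Budget"}
enforce(vm1, time(13, 0), 1200, lambda v, t: migrations.append(t))
```

Run on a schedule (say, every few minutes), this would keep the VM on Premium only for the hours it actually needs, which is exactly the behavior described above.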
In the EMC presentation I saw, this concept was presented as a feature that is soon to be released. I received an additional hint confirming this from Vaughn Stewart of NetApp on the last VMTN Communities Roundtable podcast last Wednesday night. It will most probably come as a plug-in to vCenter, which will allow proper integration.
This is most probably one feature/announcement that we should expect during VMworld 2009 later this month. I truly hope that vendors will continue to develop tools like this, which enable us to provide better performance, better control, and better management of the resources in our virtual infrastructures.
Please keep up the good work.
What other products/features do you think will be announced during the conference?
I was also pointed to this post by Daniel Eason, which discusses the same subject. Thanks, Daniel!
Your comments are always welcome.