One of the few drawbacks of the Intel NUC, HP MicroServer and most other small form factor motherboards is that they typically have only two memory DIMM slots. With current technology, this leaves you with a maximum of 2x8GB DIMMs, or 16GB of RAM in total. A couple of months ago the following tweet piqued my interest in trying to maximise the memory available in my Nanolab environment, which consists of 2 NUC boxes with a total of 32GB of RAM in the cluster.
New blog post: Large Memory Pages and Shrinking Consolidation Ratios bit.ly/ZXmS15
— Jason Boche (@jasonboche) March 19, 2013
One of the ways in which VMware maximises VM consolidation ratios when virtualising is through a technology called Transparent Page Sharing (or TPS for short), which scans for identical 4KB memory pages across running VMs and collapses them into a single shared copy in physical memory.
I was only running a relatively small number of VMs in my environment, but I was starting to run out of RAM and was disappointed with the levels of memory sharing I was seeing. What I had completely forgotten was that large memory page (2MB) support is enabled by default, and that these large pages are only broken back down into sharable 4KB pages once the cluster actually comes under memory contention.
I decided that, as I am not running any particularly high-performance apps in my lab, I would rather have visibility of how much RAM I actually still had available than gain maximum application performance through large page support. Disabling it was very simple: just change the advanced host setting Mem.AllocGuestLargePage to 0 and wait for TPS to kick in later that day.
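If you prefer to script the change across all of your hosts rather than click through the vSphere Client, a minimal PowerCLI sketch along the following lines should do the job (the vCenter name is a placeholder for your own lab, and as always, test it yourself before trusting it):

```powershell
# Connect to vCenter (hostname is a placeholder for your own environment)
Connect-VIServer -Server vcenter.lab.local

# Set Mem.AllocGuestLargePage to 0 on every host, which stops ESXi
# backing guest memory with 2MB large pages
Get-VMHost | ForEach-Object {
    Get-AdvancedSetting -Entity $_ -Name Mem.AllocGuestLargePage |
        Set-AdvancedSetting -Value 0 -Confirm:$false
}

# Verify the new value on each host
Get-VMHost | Get-AdvancedSetting -Name Mem.AllocGuestLargePage |
    Select-Object Entity, Name, Value
```

On a single host you should be able to achieve the same directly with esxcli: `esxcli system settings advanced set -o /Mem/AllocGuestLargePage -i 0`.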
My memory utilisation before was as follows:
Afterwards it was as follows (not only are the RAM savings now visible, but I had also added several more VMs between the two screenshots):
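If you would rather pull the numbers than eyeball the vSphere Client, a rough PowerCLI sketch like the one below should report the host-level shared memory counters (these are the standard vCenter mem.shared.average and mem.sharedcommon.average statistics; treat it as an illustration rather than a polished report):

```powershell
# Rough sketch: report per-host TPS savings from the realtime performance counters.
# mem.shared.average        = guest memory currently backed by shared pages
# mem.sharedcommon.average  = machine memory actually consumed by those shared pages
Get-VMHost | ForEach-Object {
    $shared = (Get-Stat -Entity $_ -Stat mem.shared.average -Realtime -MaxSamples 1).Value
    $common = (Get-Stat -Entity $_ -Stat mem.sharedcommon.average -Realtime -MaxSamples 1).Value
    [PSCustomObject]@{
        Host     = $_.Name
        SharedMB = [math]::Round($shared / 1KB, 1)              # counters are reported in KB
        SavedMB  = [math]::Round(($shared - $common) / 1KB, 1)  # actual RAM reclaimed by TPS
    }
}
```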
Assuming you have a large number of similar VMs in your home lab, disabling large memory page support gives you easy visibility of your maximum memory savings and your actual available RAM. Implementing this in your production environments may not be ideal depending on your specific workloads; however, if your production policy is to be reasonably aggressive with memory overcommitment, I recommend you highlight this issue to your capacity management team to ensure they don't go out buying extra servers or RAM unnecessarily early!
Further reading:
- http://www.gabesvirtualworld.com/large-pages-transparent-page-sharing-and-how-they-influence-the-consolidation-ratio/
- http://www.gabesvirtualworld.com/memory-overcommit-in-production-yes-yes-yes/
- http://www.gabesvirtualworld.com/memory-management-and-compression-in-vsphere-4-1/
- http://www.yellow-bricks.com/2010/11/07/how-many-pages-can-be-shared-if-large-pages-are-broken-up/
- http://www.vreference.com/?p=1065
- http://frankdenneman.nl/2011/01/25/re-impact-of-large-pages-on-consolidation-ratios/
- http://www.yellow-bricks.com/2011/01/26/re-large-pages-gabvirtualworld-frankdenneman-forbesguthrie/