Maximising Perceived Memory Utilisation in vSphere

One of the few drawbacks of the Intel NUC, HP Microserver and most other small form factor systems is that they have only two memory DIMM slots. With current technology, that leaves you with a maximum of 2x 8GB DIMMs, or 16GB of RAM in total. A couple of months ago the following tweet piqued my interest in trying to maximise the memory availability in my NanoLab environment, which consists of two NUC boxes with a total of 32GB of RAM in the cluster.

[Embedded tweet]

One of the ways in which VMware maximises VM consolidation ratios is through a technology called Transparent Page Sharing (or TPS for short), which de-duplicates identical 4KB pages of memory across the VMs on a host, so content that similar guests have in common is stored in physical RAM only once.
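If you want to see how much memory TPS is actually reclaiming on a host, the quickest place to look is the memory view in esxtop. A minimal sketch of the steps, run from an SSH session to the ESXi host:

    esxtop          # then press 'm' to switch to the memory view
    # The PSHARE/MB line in the header shows shared, common and saving figures;
    # "saving" is the amount of physical RAM that page sharing is currently reclaiming.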

I was only running a relatively small number of VMs in my environment, but I was starting to run out of RAM and was disappointed with the levels of memory sharing I was seeing. What I had completely forgotten was that large memory page support is enabled by default, and that these large pages are only broken back down into 4KB pages, which TPS can then share, when the host comes under memory contention.

I decided that, as I am not running any particularly high-performance apps in my lab, I would prefer visibility of how much RAM I actually still had available over gaining maximum application performance through large page support. Making the change was very simple: set the advanced setting Mem.AllocGuestLargePage to 0 on each host and wait for TPS to kick in, which it did later that day.
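For reference, the same change can be made from the command line. A minimal sketch using esxcli, run on each host in the cluster (SSH access to the hosts assumed):

    # Check the current value (1 = guest large pages enabled, the default)
    esxcli system settings advanced list -o /Mem/AllocGuestLargePage

    # Set it to 0 so guest memory is backed by 4KB pages that TPS can share
    esxcli system settings advanced set -o /Mem/AllocGuestLargePage -i 0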

My memory utilisation before was as follows:

Large Page Support Enabled

Afterwards it was as follows (you can see not only the RAM savings, but also that I added several more VMs between the two screenshots):

Large Page Support Disabled

Assuming you have a large number of similar VMs in your home lab, disabling large memory page support gives you easy visibility of your maximum memory savings and of the RAM you actually have available. Implementing this in production may not be ideal, depending on your specific workloads; however, if your production policy is to be reasonably aggressive with memory overcommitment, I recommend you highlight this to your capacity management team so they don't go out and buy extra servers or RAM unnecessarily early!

Further reading:

Intel NUC, NanoLab, VMware