While doing some lab testing and benchmarking for my upcoming VCAP-DCD exam I came across some interesting results when messing about with NFS and iSCSI using FreeNAS. I plan to re-run the same set of tests soon using the EMC Celerra simulator once I have it set up.
The results are from very simplistic testing, using a buffered read test only (it would be reasonable to expect writes to be the same or slower; this was just a quick test for my own info). For this I used the following sample hdparm command in some Ubuntu VMs:
sudo hdparm -t /dev/sda
A sample output from this command would be:
/dev/sda: Timing buffered disk reads: 142 MB in 3.11 seconds = 45.59 MB/sec
As this was a quick performance comparison I only repeated the test three times per storage and protocol type, but even with this simplistic testing the results were fairly conclusive.
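If you want to repeat this yourself, the runs are easy to script. The sketch below is my own (not part of the original test): it repeats the `hdparm -t` run and averages the MB/sec figures from the output lines shown above. The device and run count are assumptions, so adjust them for your own lab.

```shell
#!/bin/sh
# Sketch: repeat `hdparm -t` and report the average MB/sec.
# avg_mbs parses hdparm's "... = 45.59 MB/sec" output lines.
avg_mbs() {
  awk '/MB\/sec/ { sum += $(NF-1); n++ }
       END { if (n) printf "average over %d runs: %.2f MB/sec\n", n, sum/n }'
}

# bench <device> <runs> - e.g. bench /dev/sda 3
bench() {
  dev=${1:-/dev/sda}
  runs=${2:-3}
  i=0
  while [ "$i" -lt "$runs" ]; do
    sudo hdparm -t "$dev"
    i=$((i + 1))
  done | avg_mbs
}
```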
Test HW was based on my 24GB RAM ESX and ESXi cluster in a box solution [hence some results will be faster than you can achieve over a gig network as this is all running on one host] running under Windows 7 64-bit with VMware Workstation 8. Components are:
- 4x ESX/ESXi 4.1 hosts running in Workstation 8 with an SSD datastore. 4GB RAM and 2x vCPUs each.
- 1x FreeNAS 0.7.2 instance running in Workstation 8 with an SSD datastore and a SATA datastore. I use this over FreeNAS 8 as it has a significantly smaller memory footprint (512MB instead of 2GB). 1x vCPU and 512MB RAM.
- 64-bit Ubuntu Linux VMs running nested under the ESX(i) virtual hosts. 1vCPU and 512 MB RAM each.
Storage components are:
- SATA 2 onboard ICH10R controller
- 1x Crucial M4 128GB SSD (500MB/sec Read, 175MB/Sec Write)
- 1x Seagate 250GB 7200RPM SATA
The results of the testing are as follows:
[Results table: FreeNAS 0.7.2 w/ NFS (two configurations) and FreeNAS 0.7.2 w/ iSCSI (two configurations); only the row labels survived, the MB/sec figures were lost in extraction]
As you can see, FreeNAS with NFS does not play nicely with ESX(i) 4. I have seen stats and posts confirming that these issues are not apparent in the real world with NetApp FAS or Oracle Unified Storage (which is apparently excellent on NFS), but for your home lab the answer is clear:
For best VM performance using FreeNAS 7, stick to iSCSI!
If, like me, you originally implemented some of your VMs in VMware Server (or Workstation) as thick provisioned, and have subsequently changed your mind, the process for converting them back to thin is very simple. In my case, the driver was the purchase of a couple of new SSDs, which obviously have far less space than my old 7200RPM SATA disks, but significantly better performance.
Before shrinking your VM, be sure to zero out all of the unused space (left over from file deletes). This is especially important if you have been using the VM for some time, as deleting files does not actually zero the space (it just removes the pointers). I recommend using SDelete from SysInternals. Simply run “sdelete -c” to zero out all the deleted file space (experiences may vary!).
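If the guest is Linux rather than Windows, there is no SDelete, but the same effect can be had by filling free space with zeros and then deleting the fill file. This sketch is my own assumption, not from the original post; the size cap is only there as a safety net for demonstration, and you would drop it to zero all free space before a real shrink.

```shell
#!/bin/sh
# Sketch of an SDelete-style zero pass for a Linux guest (assumption:
# not part of the original post). Writes zeros into free space, then
# deletes the file, leaving the freed blocks zeroed for the shrink.
zero_free_space() {
  dir=$1
  max_mb=${2:-1024}   # safety cap for this sketch; remove count= below to fill everything
  dd if=/dev/zero of="$dir/zerofill" bs=1M count="$max_mb" 2>/dev/null
  sync
  rm -f "$dir/zerofill"
}

# e.g. zero_free_space /mnt 512
```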
Once you have cleaned up (and ideally backed up) your VM, the process for migrating to a new datastore and safely shrinking/converting the VMDK files to thin provisioned is as follows:
- Create your new datastore directory on the new drive and specify the location in VMware Server (if required), e.g.
- Create a new directory for the VM to be migrated, e.g.
- Run the following command:
"<path-to-vmware-install>\vmware-vdiskmanager.exe" -r "<old-ds-path>\<VM-Name>.vmdk" -t 0 "<new-ds-path>\<VM-Name>.vmdk"
e.g.
"C:\Program Files (x86)\VMware\VMware Server\vmware-vdiskmanager.exe" -r "C:\VMs\TestVM\TestVM.vmdk" -t 0 "D:\VMs\TestVM\TestVM.vmdk"
- The VM will be converted and copied to the new location with no risk to the original file.
- Copy the remaining files from the original datastore location (minus the .vmdk / -flat.vmdk files, of course).
- Remove the old VM from your VMware Server/Workstation inventory (don’t delete the originals until you have tested the new VM!).
- Add the VM back into VMware Server / Workstation using the new datastore location, and start it up, specifying “I moved it” when prompted.
- Sit back and enjoy the extra space! 🙂
Note that the same process will work for converting VMDK files between all disk types; simply replace -t 0 with your preferred option from the list below:
0 : single growable virtual disk
1 : growable virtual disk split in 2GB files
2 : preallocated virtual disk
3 : preallocated virtual disk split in 2GB files
4 : preallocated ESX-type virtual disk
5 : compressed disk optimized for streaming
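If you end up scripting several conversions, a small helper keeps the numeric codes readable. This is just a sketch of my own (the readable names are my shorthand; the codes come from the list above):

```shell
#!/bin/sh
# Sketch: map a readable disk-type name to vmware-vdiskmanager's -t code
# (codes are from the list above; the names are my own shorthand).
vdisk_type() {
  case "$1" in
    growable)           echo 0 ;;  # single growable (thin)
    growable-split)     echo 1 ;;  # growable, split into 2GB files
    preallocated)       echo 2 ;;  # single preallocated (thick)
    preallocated-split) echo 3 ;;  # preallocated, split into 2GB files
    esx)                echo 4 ;;  # preallocated ESX-type
    stream)             echo 5 ;;  # compressed, stream-optimized
    *) echo "unknown disk type: $1" >&2; return 1 ;;
  esac
}

# e.g. vmware-vdiskmanager -r old.vmdk -t "$(vdisk_type growable)" new.vmdk
```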
NOTE: You can also run "<path-to-vmware-install>\vmware-vdiskmanager.exe" -k "<ds-path>\<VM-Name>.vmdk" to shrink your VMDK in place. However, you will need enough spare space on the same drive to do this (as much as the current VMDK file size if you don't gain much), you lose the ability to roll back to your original file, and this won't work on thick provisioned disks.