Tag Archive for SATA

Quick Tip: Install a VIB into an Existing vSphere 5.5 ESXi Host

The following will likely work in other versions of vSphere, but I used it in vSphere 5.5 a while ago, then forgot to hit publish on this post!

In that case I had installed a new ESXi host without the custom VIB containing the drivers for the SATA card. I did this deliberately, as I thought I would have no need for the local HBA at the time. What I forgot is that the host profiles I had created from other hosts included a local HBA, so the host profiles would not remediate without one. Annoying! So I used the following steps to manually add the specific VIB I needed (in this case sata-xahci-1.10-1.x86_64.vib).

SSH to your ESXi host (having enabled the SSH server from the vSphere Client):

# ssh root@<hostip>
# cd /tmp


Download the zipped VIB file onto the host (in my case I had it stored on my web server, but you could equally use any other standard method to get the file onto the host; an scp alternative is shown below):

# wget http://www.tekhead.org/wp-uploads/www.tekhead.org/sata-xahci-1.10-1.x86_64.zip
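
If the host has no direct web access, copying the file over SSH works just as well. A minimal sketch, run from your workstation rather than the host, assuming the zip sits in your current directory:

# scp sata-xahci-1.10-1.x86_64.zip root@<hostip>:/tmp/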


Unzip the vib file:

# unzip sata-xahci-1.10-1.x86_64.zip


Install the vib:

# esxcli software vib install -v file:/tmp/sata-xahci-1.10-1.x86_64.vib
Installation Result
Message: The update completed successfully, but the system needs to be rebooted for the changes to be effective.
Reboot Required: true
VIBs Installed: VFrontDe_bootbank_sata-xahci_1.10-1
VIBs Removed:
VIBs Skipped:
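
Side note: this VIB is CommunitySupported (as the listing further down shows), so if the install complains about the host acceptance level, lowering it first should sort it; esxcli also has a dry-run flag if you want to rehearse the install first. Both of the following are standard esxcli options:

# esxcli software acceptance set --level=CommunitySupported
# esxcli software vib install -v file:/tmp/sata-xahci-1.10-1.x86_64.vib --dry-run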


Check that the VIB is installed (grep for part of its name, "ahci" in my case):

# esxcli software vib list | grep -i ahci
sata-xahci   1.10-1   VFrontDe   CommunitySupported   2014-10-31


Remove the old files (no longer needed):

# rm sata-xahci-1.10-1.x86_64.*


Finally, reboot your ESXi host, job done!
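
If you would rather trigger the reboot from the same SSH session instead of the client, something like this should do it (esxcli expects the host to be in maintenance mode first, so evacuate any running VMs beforehand):

# esxcli system maintenanceMode set --enable true
# esxcli system shutdown reboot -r "sata-xahci VIB install"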


FreeNAS 0.7.2 NFS and iSCSI Performance in a vSphere 4.1 Lab

While doing some lab testing and benchmarking for my upcoming VCAP-DCD exam I came across some interesting results when messing about with NFS and iSCSI using FreeNAS. I plan to re-run the same set of tests soon using the EMC Celerra simulator once I have it set up.

The results are from very simplistic testing, using a buffered read test only (it would be reasonable to expect write times to be the same or slower, and this is just a quick test for my own info). For this I used the following sample hdparm command in some Ubuntu VMs:

sudo hdparm -t /dev/sda

A sample output from this command would be:

/dev/sda:  Timing buffered disk reads: 142 MB in  3.11 seconds =  45.59 MB/sec

As this was a quick performance comparison I only repeated the test three times per storage and protocol type (a one-liner for this is below), but even with this simplistic testing the results were fairly conclusive.
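
To repeat the runs without retyping, a trivial shell loop does the trick (assuming the guest's disk is /dev/sda, as above):

for i in 1 2 3; do sudo hdparm -t /dev/sda; done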

Test HW was based on my 24GB RAM ESX and ESXi cluster-in-a-box solution (hence some results will be faster than you could achieve over a gigabit network, as this is all running on one host), running under Windows 7 64-bit with VMware Workstation 8. Components are:

  • 4x ESX/ESXi 4.1 hosts running in Workstation 8 with an SSD datastore. 4GB RAM and 2x vCPUs each.
  • 1x FreeNAS 0.7.2 instance running in Workstation 8 with an SSD datastore and a SATA datastore. I use this over FreeNAS 8 as it has a significantly smaller memory footprint (512MB instead of 2GB). 1x vCPU and 512 MB RAM.
  • 64-bit Ubuntu Linux VMs running nested under the ESX(i) virtual hosts. 1x vCPU and 512 MB RAM each.

Storage components are:

  • SATA 2 onboard ICH10R controller
  • 1x Crucial M4 128GB SSD (500MB/sec read, 175MB/sec write)
  • 1x Seagate 250GB 7200RPM SATA

The results of the testing are as follows:

Protocol                 Storage Type   Read MB/sec
Local VMFS               SSD            383
Local VMFS               SATA           88
FreeNAS 0.7.2 w/ NFS     SSD            11
FreeNAS 0.7.2 w/ NFS     SATA           5
FreeNAS 0.7.2 w/ iSCSI   SSD            175
FreeNAS 0.7.2 w/ iSCSI   SATA           49

As you can see, FreeNAS with NFS does not play nicely with ESX(i) 4. I have seen stats and posts confirming these issues are not apparent in the real world with NetApp FAS or Oracle unified storage (which is apparently awesome on NFS), but for your home lab the answer is clear:

For best VM performance using FreeNAS 7, stick to iSCSI!

Safely Shrinking VMware Server / Workstation .vmdk Files

If, like me, you originally implemented some of your VMs in VMware Server (or Workstation) as thick provisioned and have subsequently changed your mind, the process for converting them back to thin is very simple. In my case the driver was the purchase of a couple of new SSDs, which obviously have a lot less space than my old 7200RPM SATA disks, but significantly better performance.

Before shrinking your VM, be sure to zero out all of the unused space left over from file deletes. This is especially important if you have been using the VM for some time, as deleting a file does not actually zero its space (it just removes the pointers). I recommend using SDelete from Sysinternals. Simply run "sdelete -c" to zero out all the deleted file space (experiences may vary!!!).
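
One caveat worth flagging: in more recent SDelete releases the switches changed, and "-z" is the one documented as zeroing free space (good for virtual disk optimisation), so on a current version the equivalent command (drive letter assumed) would be:

sdelete -z c: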

Once you have cleaned up (and ideally backed up) your VM, the process of migrating it to a new datastore and safely shrinking / converting the VMDK files to thin provisioned is as follows:

  1. Create your new datastore directory on the new drive and specify the location in VMware Server (if required), e.g.
    D:\VMs
  2. Create a new directory for the VM to be migrated, e.g.
    D:\VMs\TestVM
  3. Run the following command:
    "<path-to-vmware-install>\vmware-vdiskmanager.exe" -r "<old-ds-path>\<VM-Name>.vmdk" -t 0 "<new-ds-path>\<VM-Name>.vmdk"
    e.g.
    "C:\Program Files (x86)\VMware\VMware Server\vmware-vdiskmanager.exe" -r "C:\VMs\TestVM\TestVM.vmdk" -t 0 "D:\VMs\TestVM\TestVM.vmdk"
  4. The VM will be converted and copied to the new location with no risk to the original file.
  5. Copy the remaining files from the original datastore location (minus the VMDK / vmdk-flat of course).
  6. Remove the old VM from your VMware Server/Workstation inventory (don’t delete the originals until you have tested the new VM!).
  7. Add the VM back into VMware Server / Workstation using the new datastore location, and start it up, specifying “I moved it” when prompted.
  8. Sit back and enjoy the extra space! 🙂

Note that the same process will work for converting VMDK files between all of the disk types below, by simply replacing -t 0 with your preferred option (an example follows the list):

0                   : single growable virtual disk
1                   : growable virtual disk split in 2GB files
2                   : preallocated virtual disk
3                   : preallocated virtual disk split in 2GB files
4                   : preallocated ESX-type virtual disk
5                   : compressed disk optimized for streaming
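
For example, converting the same hypothetical VM from earlier to a preallocated ESX-type disk instead of a thin one would just swap the type flag:

"C:\Program Files (x86)\VMware\VMware Server\vmware-vdiskmanager.exe" -r "C:\VMs\TestVM\TestVM.vmdk" -t 4 "D:\VMs\TestVM\TestVM.vmdk"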

NOTE: You can also run "<path-to-vmware-install>\vmware-vdiskmanager.exe" -k "<ds-path>\<VM-Name>.vmdk" to shrink your VMDK in place (a concrete example follows), but there are caveats: you need enough spare space on the same drive (potentially as much as the current VMDK file size if you don't reclaim much), you lose the ability to roll back to your original file, and it won't work on thick provisioned disks.
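
A concrete version of that in-place shrink, using the same hypothetical paths as above:

"C:\Program Files (x86)\VMware\VMware Server\vmware-vdiskmanager.exe" -k "D:\VMs\TestVM\TestVM.vmdk"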
