Tag Archive for ESXi

VMware vSphere NanoLab – Part 4 – Network and Storage Choices

Over the past few posts I have gone into detail on configuring a high-WAF (Wife Acceptance Factor) vSphere NanoLab, mainly from the compute perspective. In my case this consists of two Intel NUC nodes, each running a dual-core 1.8GHz Core i3 processor and 16GB of RAM. The main question people have been asking me since I published the series is: what do I use for networking and storage?

Prior to the NanoLab, I had always gone for a vInception type of setup, i.e. everything running inside a single powerful workstation with plenty of RAM. This limits your options a bit; in my case it meant simply using local SSD and SATA storage, presented either as iSCSI from my Windows 2008 R2 server or from a nested FreeNAS 7 VM. For a bit of extra capacity I also had a couple of spare disks in an HP MicroServer N36L, presented via another FreeNAS 7 VM under ESXi.

The most frustrating thing about running your VMFS storage from a Windows host is the monthly patching and reboots, which mean you have to take down your entire environment every time. In my case that includes this blog, which is hosted as a VM in the same environment, so moving forward I wanted something a little more secure, flexible and robust, which still adhered to the cost, noise and size requirements you might expect of a NanoLab.

Storage

The speed of your storage can make or break your experience and productivity when running a home lab. My requirements for a storage device / NAS were:

  • Minimal size
  • Silent or as near silent as possible
  • Low power consumption
  • Minimum 4 disk slots and ability to do RAID 5 (to minimise disk cost and provide flexibility for later growth)
  • Reasonable price

Optionally:

  • VAAI support
  • Decent warranty (if not a home build)
  • Reasonable component redundancy
  • USB3 support in case I want to add any external drives later for some speedy additional storage / backup

After going back and forth between a home-made solution based on another HP Microserver, or a pre-configured NAS, I decided that the additional features available in the Synology “Plus” line were too good to pass up. These include:

  • VAAI support for Hardware Assisted Locking (ATS), Block Zero, Full Copy, Thin Provisioning
  • iSCSI snapshot and backup
  • Link aggregation support for the dual gigabit NICs
  • 2-3 year warranty depending on the model
  • iSCSI or NFS (VAAI on iSCSI volumes only)

They were also recommended by a number of vExperts such as Jason Nash, Chris Wahl and Julian Wood, which is always a good justification to go for one! 🙂

The 1512+ was very tempting, but in the end I chose the DS412+ due to its near-silent sub-20dB operation (thanks to an external power brick and two hot-swap silent cooling fans), low power consumption (max 44W under heavy load), minimal footprint and reduced cost. I was tempted to wait and see if a DS413+ comes out any time soon, but Synology are being cagey and I needed the lab upgrades done and dusted in a short period. I already have a DS413j which I use for backups, so I can confirm they are very well built little machines, and the noise level claims are indeed accurate!

 

Into the DS412+ I have loaded a pair of 240GB SanDisk Extreme SSDs using SHR (Synology Hybrid RAID). This is effectively just RAID1 mirroring while only two drives are installed, but it gives me the ability to expand out to a RAID5 equivalent as I need more space and the price of SSDs (inevitably) comes down. Eventually the box will have ~720GB or more of usable SSD storage, more than enough for a decent bunch of lab VMs! Another alternative would be a pair of SSDs for VM boot partitions / config files, and a pair of SATA drives for VM data partitions.
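
For anyone sanity-checking the capacity figure, the SHR arithmetic with equal-sized drives is straightforward; here is a quick sketch using the 240GB drives mentioned above:

```python
# Rough usable-capacity maths for SHR with equal-sized drives:
# behaves like a RAID1 mirror at two drives, RAID5-equivalent from three up.
DRIVE_GB = 240   # the SanDisk Extreme SSDs above

def usable_gb(drives, size_gb=DRIVE_GB):
    if drives < 2:
        return drives * size_gb       # no redundancy possible
    if drives == 2:
        return size_gb                # two-drive mirror
    return (drives - 1) * size_gb     # one drive's worth of capacity lost to parity

for n in range(2, 5):
    print("%d x %dGB -> ~%dGB usable" % (n, DRIVE_GB, usable_gb(n)))
# 2 x 240GB -> ~240GB usable
# 3 x 240GB -> ~480GB usable
# 4 x 240GB -> ~720GB usable
```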

Networking

Although you can easily build a great home lab on a flat network with any old cheap switch, the ability to experiment with more advanced features is highly desirable. My requirements for a managed switch were:

  • Minimal size
  • Passive cooling (for silent operation)
  • Low power consumption
  • Minimum of 8x 1 gigabit ports (or preferably more)
  • Link aggregation
  • QoS
  • Security – VLANs, PVLANs, ACLs, & Layer 3 switching
  • SSH access for command line management

Optionally:

  • I am studying for a few Cisco exams over the next year, so a Cisco-branded switch would be preferable
  • Decent warranty

After a great suggestion from Jasper, and reading an article by Vladan, I ended up going for the ten-port Cisco SG300-10.


This 10-port switch will allow for:

  • 1-2 ports per NUC (for 2-4 NUC boxes)
  • 2 LACP enabled ports for the Synology lab storage
  • 2 ports for my personal data storage server (might replace this with a second mid-range Synology NAS later)
  • 2 uplink ports (in my case for a router and a second wireless access point)

This switch is passively cooled, uses only around 10W of power, and as an added bonus Cisco include a limited lifetime warranty. That's great if you are going to invest this much in a switch for home!

“As long as the original End User continues to own or use the Product, provided that: fan and power supply warranty is limited to five (5) years. In the event of discontinuance of product manufacture, Cisco warranty support is limited to five (5) years from the announcement of discontinuance.” http://www.cisco.com/en/US/docs/general/warranty/English/LH2DEN__.html

If I had been choosing a switch purely on cost I would probably have gone for one of the HP models, as these offer great bang for your buck, but I did want to stick to a Cisco-branded unit. I would also have loved to go for the PoE model so I could plug in a VoIP phone later, but the SG300-10P / MP cost at least 50% more, and power consumption would be higher, even at idle.

WAF

The entire NanoLab setup above, 2 NUC boxes, the DS412+ and the SG300-10, takes up about the same volume as a large shoe box, is virtually silent, and idles at a combined 50-60 watts, staying under 100 watts even under load. That's less than a couple of halogen light bulbs!

In my next post I will go through the process of configuring the network and storage, including link aggregation and suggested VLAN configuration.

Earlier parts of this article may be found here:
NanoLab – Running VMware vSphere on Intel NUC – Part 1
NanoLab – Running VMware vSphere on Intel NUC – Part 2
NanoLab – Running VMware vSphere on Intel NUC – Part 3

NanoLab – Running VMware vSphere on Intel NUC – Part 3

I have really been enjoying messing about with my NanoLab for the past few days, and it has already proved invaluable in a couple of projects I'm dealing with at work (mainly in testing some ideas I had for solutions).

These are just a couple of very quick tips for your NUC lab which I came across during the week. They will also apply to any other single-NIC configuration for a vSphere cluster (e.g. an HP MicroServer with no extra PCI card), and to booting your cluster from a USB pen drive.

The tips are both simple fixes to remove the (slightly annoying) warning messages you get on each ESXi host in your cluster after you do your initial config.

The host currently has no management network redundancy.
System logs on host <hostname> are stored on non-persistent storage.

Single Management NIC Causes Warning in vCenter

The host currently has no management network redundancy.

To get rid of this (assuming you don't plan to add further NICs), simply follow KB1004700, which is summarised as follows (a scripted alternative is also sketched after the steps):

To suppress this message on ESXi/ESX hosts in the VMware High Availability (HA) cluster, or if the warning appears for a host already configured in a cluster, set the VMware HA advanced option das.ignoreRedundantNetWarning to true and reconfigure VMware HA on that host.

To set das.ignoreRedundantNetWarning to true:

  1. From the vSphere Client, right-click the cluster and click Edit Settings.
  2. Select vSphere HA and click Advanced Options.
  3. In the Options column, enter das.ignoreRedundantNetWarning
  4. In the Value column, enter true.
    Note: Steps 3 and 4 create a new option.
  5. Click OK.
  6. Right-click the host and click Reconfigure for vSphere HA. This reconfigures HA.
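
If you would rather script the change than click through the GUI (handy if you rebuild your lab cluster regularly), the same advanced option can be set via the vSphere API. The snippet below is a minimal pyVmomi sketch rather than a polished tool; the vCenter address, credentials and cluster name are placeholders for your own.

```python
# Minimal pyVmomi sketch: set das.ignoreRedundantNetWarning on a cluster.
# The vCenter address, credentials and cluster name below are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only: skip certificate checks
si = SmartConnect(host="vcenter.lab.local", user="administrator",
                  pwd="password", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.ClusterComputeResource], True)
    cluster = next(c for c in view.view if c.name == "NanoLab")
    view.DestroyView()

    # Merge the advanced option into the cluster's existing HA (das) config.
    spec = vim.cluster.ConfigSpecEx(
        dasConfig=vim.cluster.DasConfigInfo(
            option=[vim.option.OptionValue(
                key="das.ignoreRedundantNetWarning", value="true")]))
    cluster.ReconfigureComputeResource_Task(spec, modify=True)
    # Wait on the returned task if you like, then "Reconfigure for vSphere HA"
    # on each host so the change takes effect.
finally:
    Disconnect(si)
```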


Booting from USB Pen Drive Causes Warning

System logs on host <hostname> are stored on non-persistent storage

This is caused by booting from the USB device. It is very simple to resolve by redirecting logs to a syslog server. A prime example for your home lab would be the syslog server which comes as standard with the vCenter Server Appliance, but your home NAS may well have this functionality, you could run a Linux VM to collect the logs, or alternatively you could use a great log-centralisation product called Splunk (free for up to 500MB of logs per day!).

To point your ESXi hosts at any syslog server, simply follow the steps below (a scripted alternative is sketched after them):

  1. From the vSphere Client, select the host.
  2. Select the Configuration tab, then click Advanced Settings.
  3. In the left column expand Syslog, then click global.
  4. In the right panel, in the Syslog.global.logHost box, enter the IP or hostname of your syslog server.
  5. Click OK.
Your host is now configured to forward all logs to your syslog server, and the non-persistent storage warning will be suppressed.
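
As mentioned above, the same change can also be scripted rather than clicked through. The sketch below is one way to do it, assuming SSH is enabled on the host and using the standard esxcli syslog commands; the host address, credentials and syslog target are placeholders for your own environment.

```python
# Sketch: point an ESXi host at a syslog server using esxcli over SSH.
# Assumes SSH is enabled on the host; all values below are placeholders.
import paramiko

HOST, USER, PASSWORD = "esxi01.lab.local", "root", "password"
SYSLOG_TARGET = "udp://192.168.1.10:514"   # e.g. the vCSA syslog collector

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect(HOST, username=USER, password=PASSWORD)
try:
    for cmd in (
        "esxcli system syslog config set --loghost='%s'" % SYSLOG_TARGET,
        "esxcli system syslog reload",   # pick up the new log host
    ):
        _, stdout, stderr = client.exec_command(cmd)
        print(cmd, "->", stdout.read().decode() or "OK", stderr.read().decode())
finally:
    client.close()
```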


Once you have enabled the redirection you also need to open the outbound syslog port on your ESXi hosts (thanks to Sam for the reminder). Again, a scripted version follows the steps.

  1. From the vSphere Client, select the host.
  2. Select the Configuration tab, then select Security Profile.
  3. Next to Firewall, click Properties…
  4. Scroll down to syslog and tick the check box to open ports 514/1514.
  5. Click OK.
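
The firewall change can be scripted in exactly the same way; this short sketch is a companion to the syslog snippet above, with the same placeholder host and credentials.

```python
# Sketch: enable the built-in 'syslog' firewall ruleset on an ESXi host.
# Companion to the syslog snippet above; host/credentials are placeholders.
import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect("esxi01.lab.local", username="root", password="password")
try:
    for cmd in (
        "esxcli network firewall ruleset set --ruleset-id=syslog --enabled=true",
        "esxcli network firewall refresh",
    ):
        _, stdout, stderr = client.exec_command(cmd)
        print(cmd, "->", stdout.read().decode() or "OK", stderr.read().decode())
finally:
    client.close()
```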


If anyone else comes across any useful NUC related homelab tips, please feel free to comment or mail them to me and I’ll add them to the list.

UPDATE: Duncan Epping describes the das.ignoreRedundantNetWarning fix on his blog, using the vSphere Web Client here:
http://www.yellow-bricks.com/2015/05/21/this-host-currently-has-no-network-management-redundancy/

Other parts of this article may be found here:
NanoLab – Running VMware vSphere on Intel NUC – Part 1
NanoLab – Running VMware vSphere on Intel NUC – Part 2
VMware vSphere NanoLab – Part 4 – Network and Storage Choices

NanoLab – Running VMware vSphere on Intel NUC – Part 2

As I confirmed in my recent post, it is indeed possible (and I would now say highly recommended!) to install ESXi onto an Intel NUC DC3217IYE. This article describes the process for achieving this. The method I used is one of many possible, but it is the one I found simplest with the tools I had to hand.

It's also worth mentioning at this point that most ESXi features are supported on the platform, including FT. The key features not supported are VMDirectPath I/O and DPM (due to the lack of iLO / IPMI). The NUCs do support WoL, however, so you can manually bring nodes online as required using any standard WoL tool, or with a few lines of code (see the sketch below).
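
Incidentally, if you don't have a WoL tool to hand, the magic packet is trivial to send yourself. Below is a generic sketch (not NUC-specific); the MAC address is a placeholder for your NUC's onboard NIC.

```python
# Sketch: send a Wake-on-LAN magic packet to power on a node.
# The MAC and broadcast addresses below are placeholders.
import socket

def wake(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    # Magic packet = 6 x 0xFF followed by the MAC address repeated 16 times.
    mac_hex = mac.replace(":", "").replace("-", "")
    payload = bytes.fromhex("FF" * 6 + mac_hex * 16)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(payload, (broadcast, port))

wake("00:11:22:33:44:55")   # MAC of the host's 82579V NIC
```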

I am currently investigating possible options for additional NICs, and it seems that most of the Mini-PCIe NICs are based on a Realtek chipset which is fully supported in ESXi, so happy days! I will post further updates on this subject should I go ahead and expand the NUCs with extra ports.

Requirements

  • A USB stick. This should work with anything over 1-2GB, but I am personally using 8GB PNY Micro Sleek Attache pendrives: they're tiny, so less likely to catch on anything as they stick out of the back of the NUC box, and they cost less than £5 each.
  • A copy of VMware Workstation 8 or newer.
  • ESXi-Customizer (created by Andreas Peetz)
    http://v-front.blogspot.com/p/esxi-customizer.html
  • The ESXi driver for the Intel® 82579V Gigabit Ethernet Controller (created by Chilly)
    http://dl.dropbox.com/u/27246203/E1001E.tgz

Process Overview

  • Install the RAM into your NUC (I maxed mine out with 2x8GB sticks).
  • Create a customised ISO with the additional Intel driver.
  • Install ESXi to your USB stick using VMware Workstation and the customised ISO.
  • Plug in your NUC, insert the USB stick, boot and go!

Detailed Steps
I won't go into the detail of installing the RAM; suffice to say you unscrew the four screws on the base of the unit, carefully take it apart, install the two SODIMM modules, ensuring they click firmly into place, then screw the unit back together… simples!

Part One – Create the Custom ISO

  1. Run ESXi-Customizer-v2.7.1.exe (the latest version at the time of writing).
  2. This will extract the customizer to the directory of your choosing.
  3. Navigate to the new directory.
  4. Run the ESXi-Customizer.cmd batch file. This will open up the GUI, where you can configure the following options:
    • Path to your ESXi installer
    • Path to the Intel driver downloaded previously
    • Path where you want the new ISO to be saved
  5. Ensure you tick the Create (U)EFI-bootable ISO checkbox.

This will output a new custom ESXi installer ISO called ESXi-5.x-Custom.iso or similar, in the path defined above.

Part Two – Install bootable ESXi to the USB stick.
I stress that this is just my preferred way of doing it; an alternative is simply to burn your customised ISO to a CD/DVD and boot from a USB DVD-ROM. That would however be a whole lot slower, and waste a blank CD!

  1. Plug your chosen USB stick into your PC.
  2. Open VMware Workstation (8 or above), VMware Fusion, or whatever you use, ideally supporting the Virtualize Intel VT-x/EPT or AMD-V/RVI option (allowing you to nest 64-bit VMs).
  3. Create a new VM. You can use pretty much any spec you like, as ESXi re-checks the hardware on every boot, but I created one with the same specs as my intended host, i.e. 16GB RAM, single socket, 2 vCPU cores. It does not require a virtual hard disk.
  4. Once the VM is created, and before you boot it, edit the CPU settings and tick the Virtualize Intel VT-x/EPT or AMD-V/RVI checkbox. This will reduce errors when installing ESXi (which checks to ensure it can virtualise 64-bit operating systems).
  5. Set the CD/DVD (IDE) configuration to Use ISO image file, and point this to the customised ISO created earlier.
  6. Once the above settings have been configured, power on the VM.
  7. As soon as the VM is powered on, in the bottom right of the screen, right-click the flash disk icon and click Connect (Disconnect from Host).
  8. This will mount the USB stick inside the VM and allow you to do a standard ESXi installation onto the stick. At the end of the installation, disconnect the stick, unmount it and unplug it.

Part Three – Boot and go!
This is the easy bit, assuming you don’t have any of the HDMI issues I mentioned in the previous post!

  1. Plug your newly installed USB stick into the back of the NUC.
  2. Don't forget to plug in a network cable (duh!) and a keyboard for the initial configuration. If you wish to modify any BIOS settings (optional), you will also need a mouse, as the NUC uses a Visual BIOS.
  3. Power on the NUC…
  4. Have fun!

That pretty much covers it. If anyone has any questions on the process, please don’t hesitate to ask!

References
Thanks to Ivo Beerens who originally detailed the ISO customisation process here:
http://www.ivobeerens.nl/2011/12/13/vmware-esxi-5-whitebox-nic-support/

Other parts of this article may be found here:
NanoLab – Running VMware vSphere on Intel NUC – Part 1
NanoLab – Running VMware vSphere on Intel NUC – Part 3
VMware vSphere NanoLab – Part 4 – Network and Storage Choices

NanoLab – Running VMware vSphere on Intel NUC – Part 1

Having been looking to do a home lab tech refresh of late, I have been spending quite a bit of time examining the options. My key requirements, mostly determined by their relative WAF (Wife Acceptance Factor) score, were as follows:

  1. Silent or as quiet as possible (the lab machines will sit behind the TV in our living room, where my current whitebox server sits almost silently but is glaringly large!).
  2. A minimum of 16GB RAM per node (preferably 32GB if possible).
  3. A ‘reasonable’ amount of CPU grunt, enough to run 5–10 VMs per host.
  4. Minimal cost (I haven't got the budget to spend £500+ per node; I'm trying to keep it under £300).
  5. Smallest form factor I can find to meet requirements 1–4.
  6. Optional: Remote access such as IPMI or iLO.

I have previously invested in an HP N36L, which, while great for the price (especially when the £100 cashback offer was still on), is a bit noisy, even with a quiet fan mod. It's actually also fairly big when you start looking at buying multiples and stacking them behind the telly! Even so, I was still sorely tempted by the new N54L MicroServers which have just come out (AMD dual-core 2.2GHz, max 16GB RAM) and are within my budget.

Similarly, I looked into all the Mini-ITX and Micro-ATX boards available, where the Intel desktop / small server boards seemed to be the best (the DBS1200KP, DQ77MK and DQ67EP are all very capable). Combined with an admittedly slightly expensive Intel Xeon E3-1230 V2, any of these would make a brilliant whitebox home lab, but for me they are still limited by either their size or cost.

In late November, Intel announced they were releasing a range of bare-bones mini-PCs called “Next Unit of Computing”. The early units in this range of 10cm-square chassis contain an Intel Core i3-3217U CPU (“Ivy Bridge”, 22nm, as found in numerous current ultrabooks), two SODIMM slots for up to 16GB of RAM, and two Mini-PCIe slots. It's roughly the same spec and price as an HP MicroServer, but in a virtually silent case approximately the same size as a large coffee cup!

Even better, when you compare the CPU to the latest HP N54L, it achieves a benchmark score of 2272 on cpubenchmark.net, compared to the AMD Turion II Neo N54L Dual-Core at only 1349, putting it in a different class altogether in terms of raw grunt. Not only that, but with the cashback offer from HP now over, it’s about the same price or less than a MicroServer, just £230 inc VAT per unit!

On top of the above, there is an added bonus in the extremely low power consumption of just 6-11 watts at idle, rising to ~35 watts under high load. Comparing this to the HP MicroServer, which idles at around the 35 watt mark, spiking to over 100 watts, the NUC shows a marked improvement to your “green” credentials. If you are running a two node cluster, you could conservatively save well over £30 per year from your electricity bill using NUCs instead of MicroServers. Add to that a 3-year Intel warranty and I was pretty much sold from the start!
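
For anyone who wants to check that claim, the sums are easy to reproduce. The sketch below uses the idle figures quoted above and an assumed electricity price of 13p per kWh, so substitute your own tariff:

```python
# Rough annual running-cost comparison for a two-node, mostly idle lab.
# Idle power figures are those quoted above; the unit price is an assumption.
HOURS_PER_YEAR = 24 * 365
PRICE_PER_KWH = 0.13   # GBP, assumed UK tariff

def annual_cost(idle_watts, nodes=2):
    return idle_watts * nodes * HOURS_PER_YEAR / 1000.0 * PRICE_PER_KWH

nuc, microserver = annual_cost(10), annual_cost(35)
print("2x NUC:         ~GBP %.0f/yr" % nuc)                   # ~23
print("2x MicroServer: ~GBP %.0f/yr" % microserver)           # ~80
print("Saving:         ~GBP %.0f/yr" % (microserver - nuc))   # ~57
```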

This all sounded too good to be true, and in all bar one respect it is actually perfect. The only real drawback is that the Intel 1Gbps NIC (82579V) is not in the standard driver list currently supported by ESXi. This was a slight cause for concern, as some people had tried and failed to get it working with ESXi, and it held me off purchasing until this week, when I spotted this blog post by “Stu” who confirmed it worked fine after injecting the appropriate driver into the ESXi install ISO.

I immediately went to my favourite IT vendor (scan.co.uk) and purchased the following:

Intel ICE Canyon NUC Barebone Unit – DC3217IYE
16GB Corsair Kit (2x8GB) DDR3 1333MHz CAS 9
8GB PNY Micro Sleek Attache Pendrive

Total cost: ~£299 inc VAT… bargain!

IMPORTANT: You will also need a laptop-style ‘clover leaf’ (C5) power cable, or your country's equivalent. In the box you get the power brick, but not the 3-pin mains cable. These can be picked up on eBay for next to nothing.

With very little time or effort I was able to create a new ESXi installer with the correct e1000 drivers, boot the machine and I am now happily running ESXi on my first node.

[Image: Intel NUC with ESXi 5.1]

I should add that as part of the install I discovered a bug which Intel are looking to resolve with a firmware fix soon: I was unable to press F2 to get into the BIOS (the machine just rebooted each time I pressed it). Another symptom of the same bug was ESXi getting most of the way through boot and then failing with the error “multiboot could not setup the video subsystem”. This is not a VMware fault. I resolved it by simply plugging the HDMI cable into a different port on my TV (ridiculous!); you might also try a different HDMI cable. Either way, it was not serious enough to stop me ordering a second unit the same night I got the first one running!

Disclaimer: Mileage may vary! I will not be held responsible if you buy a b0rk3d unit. 🙂

In Part 2 of this article, I will expand on the process for installing ESXi to the NUC, and my experiences with clustering two of them (second unit arrived in the post today so will be built and tested this weekend).

Other parts of this article may be found here:
NanoLab – Running VMware vSphere on Intel NUC – Part 2
NanoLab – Running VMware vSphere on Intel NUC – Part 3
VMware vSphere NanoLab – Part 4 – Network and Storage Choices
