NanoLab – Running VMware vSphere on Intel NUC – Part 3

I have really been enjoying messing about with my NanoLab for the past few days, and it has already proved invaluable in a couple of projects I’m dealing with at work (mainly in testing some ideas I had for solutions).

These are just a couple of very quick tips for your NUC lab which I came across throughout the week. They will also apply to any other single NIC configuration for a vSphere cluster (e.g. HP Microserver with no extra PCI card), and for booting your cluster from a USB pen drive.

The tips are both simple fixes to remove the (slightly annoying) warning messages you get on each ESXi host in your cluster after you do your initial config.

The host currently has no management network redundancy.
System logs on host <hostname> are stored on non-persistent storage.

Single Management NIC Causes Warning in vCenter

The host currently has no management network redundancy.

To get rid of this (assuming you don’t plan to add further NICs), simply follow KB1004700, which is summarised as follows:

To suppress this message on ESXi/ESX hosts in the VMware High Availability (HA) cluster, or if the warning appears for a host already configured in a cluster, set the VMware HA advanced option das.ignoreRedundantNetWarning to true and reconfigure VMware HA on that host.

To set das.ignoreRedundantNetWarning to true:

  1. From the VMware Infrastructure Client, right-click on the cluster and click Edit Settings.
  2. Select vSphere HA and click Advanced Options.
  3. In the Options column, enter das.ignoreRedundantNetWarning.
  4. In the Value column, enter true.
    Note: Steps 3 and 4 create a new option.
  5. Click OK.
  6. Right-click the host and click Reconfigure for vSphere HA. This reconfigures HA.
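If you prefer scripting it, the same option can be set with PowerCLI. This is only a sketch, run from a shell with PowerShell and the VMware PowerCLI module installed; the vCenter address and the cluster name "NanoLab" are placeholders for your own:

```shell
# Assumes pwsh (PowerShell) with the VMware PowerCLI module installed.
# vcenter.lab.local and the cluster name "NanoLab" are placeholders.
pwsh -Command '
  Connect-VIServer -Server vcenter.lab.local

  $cluster = Get-Cluster -Name "NanoLab"

  # Create the HA advanced option that suppresses the redundancy warning
  New-AdvancedSetting -Entity $cluster -Type ClusterHA `
      -Name "das.ignoreRedundantNetWarning" -Value "true" -Confirm:$false

  # Reconfigure HA on each host so the new option takes effect
  Get-VMHost -Location $cluster | ForEach-Object {
      $_.ExtensionData.ReconfigureHostForDAS_Task() | Out-Null
  }
'
```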


Booting from USB Pen Drive Causes Warning

System logs on host <hostname> are stored on non-persistent storage

This is caused by booting from the USB device: ESXi has nowhere persistent to write its logs. It is very simple to remove by redirecting logs to a syslog server. A prime example for your home lab is the syslog server which comes as standard with the vCenter Server Appliance, but your home NAS may well have this functionality, you could run a Linux VM to collect the logs, or alternatively you could use Splunk, a great product for centralising logs (free for up to 500 MB of logs per day!).

To point your ESXi hosts to any syslog server, simply:

  1. From the VMware Infrastructure Client, select the host.
  2. Select the Configuration tab, then click Advanced Settings.
  3. In the left column expand Syslog, then click global.
  4. In the right panel, enter the IP or hostname of your syslog server in the Syslog.global.logHost box.
  5. Click OK.
  6. Your host is now configured to forward all logs to your syslog server, and the non-persistent storage warning will be suppressed.
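The same redirection can be done from an SSH / ESXi Shell session with esxcli. A minimal sketch, where the server address 192.168.1.50 is a placeholder for your own syslog host:

```shell
# Point the host at your syslog server (192.168.1.50 is a placeholder;
# udp://, tcp:// and ssl:// prefixes are all accepted)
esxcli system syslog config set --loghost='udp://192.168.1.50:514'

# Reload the syslog agent so the new target takes effect
esxcli system syslog reload

# Confirm the new configuration
esxcli system syslog config get
```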


Once you have enabled the redirection you also need to open the outbound port on your ESXi hosts (thanks to Sam for the reminder).

  1. From the VMware Infrastructure Client, select the host.
  2. Select the Configuration tab, then select Security Profile.
  3. Next to Firewall, click Properties…
  4. Scroll down to syslog and tick the check box to open ports 514/1514.
  5. Click OK.
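Again, the firewall change can be made with esxcli instead, and the syslog agent can send a test message so you can confirm logs actually arrive at the server:

```shell
# Enable the built-in syslog ruleset (opens outbound 514/1514)
esxcli network firewall ruleset set --ruleset-id=syslog --enabled=true

# Refresh the firewall and verify the ruleset is enabled
esxcli network firewall refresh
esxcli network firewall ruleset list | grep syslog

# Send a test mark message; it should appear on your syslog server
esxcli system syslog mark --message="syslog test from ESXi"
```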


If anyone else comes across any useful NUC related homelab tips, please feel free to comment or mail them to me and I’ll add them to the list.

UPDATE: Duncan Epping describes the das.ignoreRedundantNetWarning fix on his blog, using the vSphere Web Client:

Other parts of this article may be found here:
NanoLab – Running VMware vSphere on Intel NUC – Part 1
NanoLab – Running VMware vSphere on Intel NUC – Part 2
VMware vSphere NanoLab – Part 4 – Network and Storage Choices



  2. I also got myself one with 16 GB mem and a 64 GB Crucial mSATA SSD.

    Going to test out the power usage and performance soon!

  3. I have 6 VMs running now on my Intel NUC. What I notice is that the CPU is lacking. When VMs boot, CPU normally goes to 100%, but it stays at 30% max.

    This makes my setup extremely slow. What is your experience?

    • Hi Marco, I have about half a dozen running simultaneously across 2 NUCs at the moment and haven’t experienced any issues to date. Just working on getting my AutoLab setup at the moment (as and when I have time). Your experience will vary completely depending on how many vCPUs you assign per VM (generally start with 1 and add more on the fly if you need them; this will give you much better scheduler performance on your ESXi host), what OS and apps you’re using, etc. Faster storage will also improve the performance of all of your VMs, so I would always recommend putting VM boot partitions on SSD, even in a lab (or especially in a lab, as you spend more time messing with the machines). Hope this helps?

  4. Sam says:

    I ran into trouble with the syslog setting. Setting it was no problem and it disabled the warning, but logs never turned up on the remote syslog server. After a lot of head scratching I worked out that you need to edit the ESXi firewall to allow outbound syslog packets.

  5. Dave says:

    Love this series of posts!

    Can you share with us what you are using for storage and networking?

    • Thanks very much!

      Currently I’m just using a mixture of iSCSI / NFS storage from a FreeNAS VM on another box, some iSCSI storage on a physical Windows server and a little bit of NFS from my Synology backup server. Moving forward I need to tech refresh my storage too. I was planning to purchase a Synology DS412+ or DS413+ if it’s out soon [VAAI, link aggregation, 4 slots and lots of cool new stuff in DSM 4.2], and pop 4x 240 GB SSDs in there for about 720 GB usable storage. That would use something like 30 W of power and be less than 20 dB for a nice high WAF! Alternatively I might mirror a couple of 240 GB SSDs for VM boot vmdks, then use a pair of WD Red spindles for mass storage, which are also pretty quiet.

      As regards networking, I currently have a flat network on a dumb 1 Gbps switch. In order to take advantage of link aggregation, VLANs etc, I will need a managed gigabit switch which is passively cooled [WAF again]. I’m still looking at options on this. I’m studying for some Cisco certs at the moment, so Cisco would be ideal. I would love something like a Cisco 2960CG, but they’re rather prohibitively expensive, so I’ll probably end up looking at other vendors such as HP which are much easier on the pocket.

      • Jasper says:

        Great series!

        Check Cisco’s SG300-10(P) switch. If 10 ports are sufficient, this is a great switch for this kind of setup. They are Layer 3, fanless, and offer nearly the same features as a 3560!

  6. I have built up 2 NUCs and described the complete process.
    They take about 15 W apiece now, fully running with around 10 VMs per NUC.

    They are connected to my low power NAS box, Nexenta CE.

    please read:

  7. Tony says:

    Hi Alex,

    Since your setup runs ESXi from a USB flash drive, have you tried running VMs on a VMFS volume created on another USB flash drive plugged directly into the NUC (assuming we don’t need a shared storage setup)?

    Is such setup possible? Any major drawbacks from doing so in your opinion?

  8. MartyParty says:

    Hi Alex

    Great posts. I have been actively investigating a replacement for my home lab. I currently have a single Dell T610 PowerEdge Server, which is great…Except for the fact it is so damn noisy and sucks a heap of power!

    My lab resides in our spare room, so I always have to leave the server off when I have guests over. How do you find the noise generated by the NUCs? Is it intrusive?

    Secondly, are there any issues with running vSphere with vMotion, iSCSI, VM networks etc over the single GigE? Are there any other ways to add more network ports to the NUC?


    • The NUCs are totally silent even at a fairly high temp. If you really hammer the CPU then the fan will spin up, but the rest of the time you can’t hear them at all (and I am very noise averse!). Easily quiet enough to sleep with them in the same room.

      There’s no reason not to run it all on a single NIC in a lab environment; many people do this with HP Microservers today. You can either do this on a flat network, or invest in a managed switch and split the traffic into different VLANs on the same port for added security. You can also use NetIOC. This is what I am currently working on, and I will post my setup for this soon. In the meantime I have just posted about the kit I’m using for network and storage.

      There is a potential way to add NICs to the NUC using Mini-PCIe, but I haven’t had the chance to buy one to test this yet.



  11. Frederik says:


    Nice blog post there.

    I was wondering if it would be possible to install ESXi directly onto an mSATA SSD instead of using a USB stick?

    • Thanks! 🙂

      You absolutely should be able to do this, and I believe other people have. This would give you local storage instead of using shared.



  12. czx says:

    Hey, I have a problem. I installed ESXi directly onto an mSATA SSD instead of a USB stick on an Intel NUC.
    But I can’t use the local storage. The system says there is no local storage.

  13. Johnny Hansson says:

    Hi Alex,

    I’m curious to know how many hosts you are running on your NanoLab simultaneously? I’m a developer and want to set up my own virtual environment at home to run build servers and web servers for testing purposes. Primarily I need the ability to run 2–3 virtual servers with Windows and 1–2 with Linux at the same time. I wonder if you think the Intel NUC with the i5-4250U will manage it or if the CPU is too slow?

    • Hi Johnny, I’m running about 26 simultaneous VMs on 2 i3 nodes today, mostly Windows 2008 R2 with a few appliances thrown in, and using the VCSA. If you need more CPU grunt, go for the i7, but the DC53427HYE version of the NUC (which also includes vPro) is pretty awesome. The Mrs is getting me one for Xmas so I will then have 3 nodes in my cluster (2x i3 and 1x i5).

  14. James B. says:

    Hi Alex,

    Great blog posts, was really helpful for me getting set up.

    Got a question. I’m running ESXi on a DC53427HYE NUC, and would like to install a 3G modem in one of the mini PCIe slots. I have an mSSD in the first slot, and am using a USB wireless dongle passed through to a Windows 7 VM for wireless. Would I be able to access the mini PCIe 3G modem from the Windows 7 VM? The USB passed through fine, and of course the mSSD drive works fine, but would this other mini PCIe hardware get passed into the Windows 7 VM?

    16GB ram
    256GB ssd
    53427HYE NUC

    • Hi James, thanks for the feedback!

      I would love to say definitely yes, but anything using VT-d which isn’t a simple NIC can be very variable in terms of results. If you live somewhere like the UK, order online. Distance selling rules give you 14 days to return it, so if it doesn’t work you at least have a way out. Elsewhere, look for vendors with a no-quibble returns policy.

      Sorry I couldn’t be more help!