NanoLab – Running VMware vSphere on Intel NUC – Part 1

I have been looking to do a home lab tech refresh of late, so I have spent quite a bit of time examining all the options. My key requirements, mostly determined by their relative WAF score (Wife Acceptance Factor), were as follows:

  1. Silent or as quiet as possible (the lab machines will sit behind the TV in our living room where my current whitebox server sits almost silently but glaringly large!).
  2. A minimum of 16GB RAM per node (preferably 32GB if possible).
  3. A ‘reasonable’ amount of CPU grunt, enough to run 5–10 VMs per host.
  4. Minimal cost (I haven’t got the budget to spend £500+ per node; I’m trying to keep it under £300).
  5. Smallest form factor I can find to meet requirements 1–4.
  6. Optional: Remote access such as IPMI or iLO.

I have previously invested in an HP N36L, which, while great for the price (especially when the £100 cashback offer was still on), is a bit noisy even with a quiet fan mod. It’s actually also fairly big when you start looking at buying multiples and stacking them behind the telly! Even so, I was still sorely tempted by the new N54L MicroServers which are just out (AMD dual-core 2.2GHz, max 16GB RAM) and are within my budget.

Similarly, I looked into all the Mini-ITX and Micro-ATX boards available, where the Intel desktop / small server ones seemed to be best (the DBS1200KP / DQ77MK / DQ67EP are all very capable boards). Combined with an admittedly slightly expensive Intel Xeon E3-1230 V2, any of these would make a brilliant whitebox home lab, but for me they are still limited by either their size or their cost.

In late November, Intel announced they were releasing a range of barebones mini-PCs called “Next Unit of Computing”. The early models of these roughly 10cm-square chassis contain an Intel Core i3-3217U CPU (“Ivy Bridge”, 22nm, as found in numerous current ultrabooks), two SODIMM slots for up to 16GB RAM, and two mini-PCIe slots. It’s roughly the same spec and price as an HP MicroServer, but in a virtually silent case approximately the same size as a large coffee cup!

Even better, when you compare the CPU to that of the latest HP N54L, it achieves a benchmark score of 2272, compared to just 1349 for the AMD Turion II Neo N54L dual-core, putting it in a different class altogether in terms of raw grunt. Not only that, but with the cashback offer from HP now over, it’s about the same price as a MicroServer or less, at just £230 inc. VAT per unit!

On top of the above, there is an added bonus in the extremely low power consumption of just 6–11 watts at idle, rising to ~35 watts under high load. Comparing this to the HP MicroServer, which idles at around the 35-watt mark and spikes to over 100 watts, the NUC shows a marked improvement in your “green” credentials. If you are running a two node cluster, you could conservatively save well over £30 per year from your electricity bill using NUCs instead of MicroServers. Add to that a 3-year Intel warranty and I was pretty much sold from the start!
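To put a rough figure on that claim, here is a quick back-of-envelope calculation. The wattages and tariff are assumptions for illustration (~10W NUC idle vs ~35W MicroServer idle, and a 2012-era UK tariff of roughly £0.13/kWh); plug in your own numbers:

```python
# Back-of-envelope idle power saving: two NUCs vs two HP MicroServers.
# Assumed figures (illustrative, not measured): ~10 W NUC idle,
# ~35 W MicroServer idle, UK electricity tariff of ~£0.13/kWh.
HOURS_PER_YEAR = 24 * 365
TARIFF_GBP_PER_KWH = 0.13
NODES = 2

nuc_idle_w = 10
microserver_idle_w = 35

saving_w = (microserver_idle_w - nuc_idle_w) * NODES  # 50 W saved continuously
saving_kwh = saving_w * HOURS_PER_YEAR / 1000         # ~438 kWh per year
saving_gbp = saving_kwh * TARIFF_GBP_PER_KWH          # annual saving in pounds

print(f"~£{saving_gbp:.0f} per year")
```

Even using idle figures alone that comfortably clears £30 a year, so the estimate above really is conservative.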

This all sounded too good to be true, and in all bar one respect it is actually perfect. The only real drawback is that the Intel 1Gbps NIC (82579V) is not in the standard driver list currently supported by ESXi. This was a slight cause for concern, as some people had tried and failed to get it working with ESXi, and it held me off purchasing until this week, when I spotted this blog post by “Stu”, who confirmed it worked fine after injecting the appropriate driver into the ESXi installation ISO.

I immediately went to my favourite IT vendor and purchased the following:

Intel ICE Canyon NUC Barebone Unit – DC3217IYE
16GB Corsair Kit (2x8GB) DDR3 1333MHz CAS 9
8GB PNY Micro Sleek Attache Pendrive

Total cost: ~£299 inc. VAT… bargain!

IMPORTANT: You will also need a laptop-style “cloverleaf” power cable (IEC C5) or your country’s equivalent. In the box you get the power block, but not the 3-pin mains cable. These can be picked up on eBay for next to nothing.

With very little time or effort I was able to create a new ESXi installer with the correct e1000e driver, boot the machine, and I am now happily running ESXi on my first node.
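For anyone curious ahead of Part 2, the driver injection uses VMware’s Image Builder cmdlets in PowerCLI. Here is a rough sketch of the approach; the bundle, driver, and profile names below are illustrative, so substitute the actual ESXi 5.1 offline bundle and packaged e1000e driver you download:

```powershell
# Load the stock ESXi 5.1 offline bundle and the packaged e1000e driver
Add-EsxSoftwareDepot .\VMware-ESXi-5.1.0-offline-bundle.zip
Add-EsxSoftwareDepot .\net-e1000e-offline-bundle.zip   # community-packaged driver

# Clone the standard image profile so we can modify it, and allow community VIBs
New-EsxImageProfile -CloneProfile "ESXi-5.1.0-799733-standard" -Name "ESXi-5.1.0-NUC" -Vendor "NanoLab"
Set-EsxImageProfile -ImageProfile "ESXi-5.1.0-NUC" -AcceptanceLevel CommunitySupported

# Inject the driver, then export a bootable installer ISO
Add-EsxSoftwarePackage -ImageProfile "ESXi-5.1.0-NUC" -SoftwarePackage "net-e1000e"
Export-EsxImageProfile -ImageProfile "ESXi-5.1.0-NUC" -ExportToIso -FilePath .\ESXi-5.1.0-NUC.iso
```

Tools such as ESXi-Customizer wrap much the same process in a GUI if you would rather not drive PowerCLI directly.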

Intel NUC with ESXi 5.1

I should add that during the install I discovered a bug which Intel are looking to resolve with a firmware fix soon: I was unable to press F2 to get into the BIOS (the machine just rebooted each time I pressed it). Another symptom of the same bug was ESXi getting most of the way through boot and then failing with the error “multiboot could not setup the video subsystem”. This is not a VMware fault. I resolved it by simply plugging the HDMI cable into a different port on my TV (ridiculous!). You might also try a different HDMI cable. Either way, it was not serious enough to stop me ordering a second unit the same night I got the first one running!

Disclaimer: Mileage may vary! I will not be held responsible if you buy a b0rk3d unit. 🙂

In Part 2 of this article, I will expand on the process for installing ESXi on the NUC, and my experiences with clustering two of them (the second unit arrived in the post today, so it will be built and tested this weekend).

Other parts of this article may be found here:
NanoLab – Running VMware vSphere on Intel NUC – Part 2
NanoLab – Running VMware vSphere on Intel NUC – Part 3
VMware vSphere NanoLab – Part 4 – Network and Storage Choices


  3. Chris Parker says:

    Hi Alex

    What a brilliant article on running vSphere on the NUC. I’m going down the route of clustering two of these boxes also.

    I’m going to advertise your testing and config within a Facebook group I’ve created for all things NUC. Please check it out!

  5. Tyson says:

    Hey, I am just about to press “go” to purchase one of these units, but I can’t work out what I need to be able to add an SSD to this. I presume I need something to use one of the Mini PCIe slots, but what?

    This post has been a revelation by the way… THANKS!

    • Hi Tyson, thanks for the feedback! 🙂

      If you want internal storage on the NUC then you would use an mSATA SSD. From the tech spec:

      SATA port:
      ― One internal mSATA port (PCI Express Full-Mini Card) for SSD support

      Personally I just boot from a £5 USB stick, as my VMs run on shared storage from my NAS.

      Hope that helps?

      • Tyson says:

        Hi, yep… I had eventually figured it out myself… but to be honest, my show-stopper issue with the NUC is on the networking side. Out of the box, the only way I can see to add an additional NIC is by using a mini-PCIe wireless card, and I didn’t have the confidence that ESXi would play nicely with that. In the end I’ve decided to do a proper upgrade of my full-sized lab box… but I might look at these again in the future as I love the idea of a vSphere box in such a tiny package.

  12. Andrew Hale says:

    Alex, I really like your articles on the NUC. Thank you for providing such a great resource for those looking to create a similar lab environment.

    You basically have my dream for a home lab: 2 small low-power hosts, NAS, managed switch, etc. Unfortunately, I don’t have the budget for all of it right now. I’m looking to build one NUC system, use a 256GB SSD for local storage / VM datastore, and back it up to a USB hard drive until I can get a proper shared storage solution (NAS + drives).

    Would this work, provided I understand I wouldn’t get to play with the cool clustering features? Any advice you could provide would be most appreciated.

    Thank you and keep up the great work here.


    • Thanks 🙂

      I don’t see why it wouldn’t work. The SSD would need to be an mSATA one unless you plan to access it via USB (I haven’t actually tested whether VMFS is possible on a USB disk, but it would give some interesting storage expansion options!). You could also practice clustering by nesting ESXi if you were really keen (vInception style), meaning you would have ESXi running 2 copies of ESXi, which then run your VMs! This is also achievable using VMware Workstation on any normal machine (with enough RAM).

      • Andrew Hale says:


        Thank you for the info. Glad to know that I can still play with only 1 host, even if it did turn into vInception. :)

        How many VMs are you able to run with the NUC? Also are you running any CPU heavy services like Exchange?


  13. I’m not running anything heavy at the moment, but having run enough 2008 R2 VMs to pretty much max out the 32GB RAM across the two nodes, my CPU still only averages <1GHz across the whole cluster, as it’s just a test lab so mostly idle.

    Looking into the USB side of things, as I understand it you can’t directly mount USB disks as datastores in ESXi; all you can do is pass them through to VMs, which is a shame!

  14. Dan says:

    I have not been able to customize an ESXi 5.1 ISO to include the e1000e driver. Has anyone built such an ISO they might be willing to share via Dropbox or similar, or have instructions that work for ESXi 5.1? Hoping to use this little microbox I just received as part of my test lab.

    • I’m not sure this can be done legally, as I doubt VMware would want people distributing their software from third-party sites (never mind the security implications). What seems to be the issue? Is it a NUC you’re using?

      • Dan says:

        I did ultimately get the NUC booted on ESXi 5.1.0 and applied the patches that are available. So far it’s not working overly well: I find the system hangs when there is heavy activity, but I haven’t determined the source of the hang yet. Trying to do a Storage vMotion of a VM, whether that VM is running or not, usually hangs the system. That could be due to network, disk, or something else. Overall, it looks like the NUC has some issues. I’m not sure of BIOS versions or whether there are BIOS fixes that might affect what I’m seeing.

        So far, the NUC is a lot less useful in my lab setup than I’d hoped.

        • Good to hear that you got it working! I did a bunch of Storage vMotions when I first got my setup configured and didn’t have any issues at all. I have had just one hang in the 3 months since I bought mine, which was a few days ago. The chassis do get quite warm when in operation; do you have the NUC in a reasonably uncluttered area to allow the heat to radiate away? Let us know how you get on!

  20. James P says:

    Thanks a lot!! I was stuck on the multiboot bug. The solution was ridiculously simple too: use the HDMI 1 port of the TV and everything worked!

  21. Eric says:

    Hi Alex,

    Here’s how I was thinking about running my lab:
    4th-gen i3 Brix or NUC,
    then run ESXi off an 8GB USB stick (like you),
    create a VM for NexentaStor or FreeNAS and run that from either another USB stick or mSATA, configuring passthrough to the external drive,
    create more VMs and store them on NexentaStor or FreeNAS (external drive).
    My worry is performance. How well would this perform?
    I’m trying to avoid buying a proper NAS at this stage, but if performance is going to be horrible then I don’t think I have any other options.
    Any thoughts?

    • I think it will work OK; it’s very similar to the way a vInception setup works. The only thing is making sure of your dependencies: what happens if you lose power? You need to ensure your storage VM comes up first and can see the USB passthrough (assuming USB passthrough to the VM works) or the mSATA storage. Another option (depending on how many nodes you have) could be getting the larger case version which takes SATA drives. You can then either use them as local storage or, even better, run VSAN!

  22. Greg says:

    Hi Alex,

    Have you had any luck getting an Apple Thunderbolt Ethernet adaptor working with a NUC and ESXi? I’m aware of the obvious results from Google, but they’re not working for me (it hangs on the “Loading Installer” screen), and I was wondering if you’d had any different experience.

