Having been looking to do a home lab tech refresh of late, I have spent quite a bit of time examining all the options. My key requirements, ranked largely by their WAF score (Wife Acceptance Factor), were as follows:
- Silent or as quiet as possible (the lab machines will sit behind the TV in our living room where my current whitebox server sits almost silently but glaringly large!).
- A minimum of 16GB RAM per node (preferably 32GB if possible).
- A ‘reasonable’ amount of CPU grunt, enough to run 5–10 VMs per host.
- Minimal cost (I haven't got the budget to go spending £500+ per node, so I am trying to keep it under £300).
- Smallest form factor I can find to meet requirements 1–4.
- Optional: Remote access such as IPMI or iLO.
I have previously invested in an HP N36L, which while great for the price (especially when the £100 cashback offer was still on) is a bit noisy, even with a quiet fan mod. It's actually also fairly big once you start looking at buying multiples and stacking them behind the telly! Even so, I was still sorely tempted by the new N54L MicroServers which are just out (AMD dual-core 2.2GHz, max 16GB RAM) and are within my budget.
Similarly, I looked into all the Mini-ITX and Micro-ATX boards available, where the Intel desktop / small server boards seemed to be best (the DBS1200KP / DQ77MK / DQ67EP are all very capable). Combined with an admittedly slightly expensive Intel Xeon E3-1230 V2, any of these would make a brilliant whitebox home lab, but for me they are still limited by either their size or their cost.
In late November, Intel announced they were releasing a range of bare bones mini-PCs called "Next Unit of Computing" (NUC). The early models of these 10cm-square chassis contain an Intel Core i3-3217U CPU ("Ivy Bridge", 22nm, as found in numerous current ultrabooks), two SODIMM slots for up to 16GB RAM, and two mini-PCIe slots. It's roughly the same spec and price as an HP MicroServer, but in a virtually silent case approximately the same size as a large coffee cup!
Intel bare bones mini-PCs reg.cx/1YR9 < New #VMware #vSphere #homelab? Intel Pro 1G NIC, i3 Dual 1.8Ghz (w/ vt-x). + 2x8GB SODIMM? — Alex Galbraith (@alexgalbraith) November 12, 2012
Even better, when you compare the CPU to that of the latest HP N54L, the i3-3217U achieves a benchmark score of 2272 on cpubenchmark.net, against only 1349 for the AMD Turion II Neo N54L dual-core, putting it in a different class altogether in terms of raw grunt. Not only that, but with the cashback offer from HP now over, it's about the same price as a MicroServer or less, at just £230 inc VAT per unit!
On top of the above, there is an added bonus in the extremely low power consumption of just 6-11 watts at idle, rising to ~35 watts under high load. Comparing this to the HP MicroServer, which idles at around the 35 watt mark, spiking to over 100 watts, the NUC shows a marked improvement to your “green” credentials. If you are running a two node cluster, you could conservatively save well over £30 per year from your electricity bill using NUCs instead of MicroServers. Add to that a 3-year Intel warranty and I was pretty much sold from the start!
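As a quick sanity check on that saving, here is a back-of-the-envelope calculation using the idle figures above. Note that the ~£0.13/kWh electricity tariff is my own assumption for illustration; plug in your supplier's actual rate.

```python
HOURS_PER_YEAR = 24 * 365      # 8760 hours
TARIFF_GBP_PER_KWH = 0.13      # assumed UK tariff; adjust for your supplier

def annual_cost(watts, tariff=TARIFF_GBP_PER_KWH):
    """Annual electricity cost in GBP for a constant draw of `watts`."""
    kwh_per_year = watts * HOURS_PER_YEAR / 1000
    return kwh_per_year * tariff

# Idle figures from the article: NUC ~6-11W (worst case 11W), MicroServer ~35W
nodes = 2
saving = nodes * (annual_cost(35) - annual_cost(11))
print(f"Approximate idle saving for a {nodes}-node cluster: £{saving:.2f}/year")
# → Approximate idle saving for a 2-node cluster: £54.66/year
```

Even comparing idle-to-idle and ignoring load spikes, a two-node cluster comfortably clears the "well over £30 per year" figure at that assumed tariff.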
This all sounded too good to be true but, in all bar one respect, it is actually perfect. The only real drawback is that the onboard Intel gigabit NIC (82579V) is not on the standard driver list currently supported by ESXi. This was a slight cause for concern, as some people had tried and failed to get it working with ESXi, and it held me off purchasing until this week, when I spotted a blog post by "Stu" confirming it worked fine after injecting the appropriate driver into the ESXi install ISO.
I immediately went to my favourite IT vendor (scan.co.uk) and purchased the following:
- Intel ICE Canyon NUC Barebone Unit – DC3217IYE
- 16GB Corsair Kit (2x8GB) DDR3 1333MHz CAS 9
- 8GB PNY Micro Sleek Attache Pendrive
Total cost: ~£299 inc VAT… bargain!
IMPORTANT: You will also need a laptop-style cloverleaf kettle cable (C5), or your country's equivalent. In the box you get the power block, but not the 3-pin mains cable. These can be picked up on eBay for next to nothing.
With very little time or effort I was able to create a new ESXi installer with the correct e1000 driver injected, boot the machine, and I am now happily running ESXi on my first node.
I should add that during the install I discovered a bug which Intel are looking to resolve with a firmware fix soon: I was unable to press F2 to get into the BIOS (the machine simply rebooted each time I pressed it). Another symptom of the same bug was ESXi getting most of the way through boot, then failing with the error "multiboot could not setup the video subsystem". This is not a VMware fault. I resolved it by simply plugging the HDMI cable into a different port on my TV (ridiculous!); trying a different HDMI cable may also help. Either way, it was not serious enough to stop me ordering a second unit the same night I got the first one running!
Disclaimer: Mileage may vary! I will not be held responsible if you buy a b0rk3d unit. 🙂
In Part 2 of this article, I will expand on the process for installing ESXi on the NUC, and my experiences with clustering two of them (the second unit arrived in the post today, so it will be built and tested this weekend).
Other parts of this article may be found here:
NanoLab – Running VMware vSphere on Intel NUC – Part 2
NanoLab – Running VMware vSphere on Intel NUC – Part 3
VMware vSphere NanoLab – Part 4 – Network and Storage Choices