Over the past few posts I have gone into detail on configuring a high-WAF vSphere NanoLab, mainly from the compute perspective. In my case this consists of two Intel NUC nodes, each running a dual-core 1.8GHz Core i3 processor and 16GB of RAM. The main question people have asked me since I published the series is: what do I use for networking and storage?
Prior to the NanoLab, I had always gone for a vInception-type setup, i.e. everything running inside a single powerful workstation with plenty of RAM. This limits your options a bit; in my case it meant simply using local SSD and SATA storage, presented either as iSCSI from my Windows 2008 R2 server or from a nested FreeNAS 7 VM. For a bit of extra capacity I also had a couple of spare disks in an HP Microserver N36L, presented via another FreeNAS 7 VM under ESXi.
The most frustrating thing about running your VMFS storage from a Windows host is the monthly patching and reboots, which mean you have to take down your entire environment every time. In my case that includes this blog, which is hosted as a VM in the same environment, so going forward I wanted something a little more secure, flexible and robust, which also adhered to the cost, noise and size requirements you might expect of a NanoLab.
Speed of storage can make or break your experience and productivity when running a home lab. My requirements for a storage device / NAS were:
- Minimal size
- Silent or as near silent as possible
- Low power consumption
- Minimum 4 disk slots and ability to do RAID 5 (to minimise disk cost and provide flexibility for later growth)
- Reasonable price
- VAAI support
- Decent warranty (if not a home build)
- Reasonable component redundancy
- USB3 support in case I want to add any external drives later for some speedy additional storage / backup
After going back and forth between a home-made solution based on another HP Microserver, or a pre-configured NAS, I decided that the additional features available in the Synology “Plus” line were too good to pass up. These include:
- VAAI support for Hardware Assisted Locking (ATS), Block Zero, Full Copy, Thin Provisioning
- iSCSI snapshot and backup
- Link aggregation support for the dual gigabit NICs
- 2-3 year warranty depending on the model
- iSCSI or NFS (VAAI on iSCSI volumes only)
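Once an iSCSI LUN from a VAAI-capable NAS is presented to a host, ESXi can report which of these primitives it has actually detected for each device. A quick check from the ESXi command line (the command is standard esxcli; the per-device output will of course vary with your hardware):

```shell
# On the ESXi host: show VAAI primitive support per storage device
esxcli storage core device vaai status get
```

Each device is listed with its ATS (hardware assisted locking), Clone (full copy), Zero (block zero) and Delete (thin provisioning / UNMAP) status, so you can confirm the offloads are working before you start benchmarking.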
They were also recommended by a number of vExperts such as Jason Nash, Chris Wahl and Julian Wood, which is always a good justification to go for one! 🙂
The DS1512+ was very tempting, but in the end I chose the DS412+ due to its near-silent sub-20dB operation (thanks to an external power brick and two hot-swap silent cooling fans), low power consumption (a maximum of 44W under heavy load), minimal footprint and reduced cost. I was tempted to wait and see if a DS413+ would come out any time soon, but Synology are being cagey and I needed the lab upgrades done and dusted in a short period. I already have a DS413j which I use for backups, so I can confirm they are very well built little machines, and the noise level claims are indeed accurate!
Into the DS412+ I have loaded a pair of 240GB SanDisk Extreme SSDs using SHR (Synology Hybrid RAID). With only two drives installed this is effectively just RAID1 mirroring, but it gives me the ability to expand out to a RAID5 equivalent as I need more space and the price of SSDs (inevitably) comes down. Eventually the box will have around 720GB or more of usable SSD storage, more than enough for a decent bunch of lab VMs! Another alternative would be a pair of SSDs for VM boot partitions / config files, and a pair of SATA drives for VM data partitions.
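The capacity figures above follow from simple RAID arithmetic: with identically sized drives, SHR behaves like RAID1 at two disks and like RAID5 (one drive's worth of parity) at three or more. A quick sketch of the maths (mixed drive sizes make SHR cleverer than this, so treat it as the simple identical-drive case only):

```python
def shr_usable_gb(drive_sizes_gb):
    """Approximate usable capacity of SHR with identically sized drives:
    RAID1 mirror at 2 disks, RAID5-style (n-1 usable) at 3 or more."""
    n = len(drive_sizes_gb)
    if n < 2:
        raise ValueError("SHR redundancy needs at least 2 drives")
    size = min(drive_sizes_gb)  # assumes all drives the same size
    if n == 2:
        return size          # mirror: one drive's worth usable
    return (n - 1) * size    # RAID5 equivalent: n-1 drives usable

print(shr_usable_gb([240, 240]))            # 240 - today's mirrored pair
print(shr_usable_gb([240, 240, 240, 240]))  # 720 - fully populated later
```

So the pair of 240GB SSDs gives 240GB today, and filling all four bays yields the ~720GB mentioned above.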
Although you can easily build a great home lab on a flat network with any old cheap switch, the ability to experiment with more advanced features is highly desirable. My requirements for a managed switch were:
- Minimal size
- Passive cooling (for silent operation)
- Low power consumption
- Minimum of 8 gigabit ports (or preferably more)
- Link aggregation
- Security – VLANs, PVLANs, ACLs, & Layer 3 switching
- SSH access for command line management
- I am studying for a few Cisco exams over the next year, so a Cisco-branded switch would be preferable
- Decent warranty
After a great suggestion from Jasper and reading an article by Vladan, I ended up going for the ten-port Cisco SG300-10.
This 10-port switch will allow for:
- 1-2 ports per NUC (for 2-4 NUC boxes)
- 2 LACP-enabled ports for the Synology lab storage
- 2 ports for my personal data storage server (might replace this with a second mid-range Synology NAS later)
- 2 uplink ports (In my case for a router and a second wireless access point)
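To give a flavour of the configuration to come, bonding the two Synology-facing ports into an LACP group on the SG300 looks roughly like this from its IOS-style CLI (the port numbers gi5-6 are purely illustrative for the layout above; check against your own patching):

```
configure terminal
! gi5-6 are the two ports patched to the DS412+ in this example
interface range gi5-6
channel-group 1 mode auto
exit
```

On the SG300, "mode auto" negotiates the LAG via LACP, while "mode on" creates a static LAG; the Synology end needs a matching bond created in DSM before traffic will balance across both links.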
This switch is passively cooled, uses only around 10W of power, and as an added bonus Cisco include a limited lifetime warranty! Great if you are going to invest that much in a switch for home!
“As long as the original End User continues to own or use the Product, provided that: fan and power supply warranty is limited to five (5) years. In the event of discontinuance of product manufacture, Cisco warranty support is limited to five (5) years from the announcement of discontinuance.” http://www.cisco.com/en/US/docs/general/warranty/English/LH2DEN__.html
If I had been choosing a switch purely on cost I would probably have gone for one of the HP models, as these offer great bang for your buck, but I did want to stick to a Cisco-branded one. I would also have loved to go for the PoE model so I could plug in a VoIP phone later, but the SG300-10P / MP cost at least 50% more, and power consumption would be higher, even when idle.
The entire NanoLab setup above (two NUC boxes, the DS412+ and the SG300-10) takes up about the same volume as a large shoe box, is virtually silent, and idles at a combined 50-60 watts, staying under 100 watts even under load. That's less than a couple of halogen light bulbs!
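To put those wattage figures into running costs, here is a quick back-of-the-envelope calculation (the £0.15/kWh tariff is an assumption for illustration; substitute your own electricity price):

```python
def annual_energy_kwh(avg_watts, hours_per_day=24.0, days=365):
    """kWh consumed per year at a given average power draw."""
    return avg_watts * hours_per_day * days / 1000.0

# 60W average draw, running 24/7, at an assumed £0.15/kWh tariff
kwh = annual_energy_kwh(60)
print(f"{kwh:.0f} kWh/year, ~£{kwh * 0.15:.0f}/year")  # 526 kWh/year, ~£79/year
```

Not bad at all for a three-device lab that runs around the clock.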
In my next post I will go through the process of configuring the network and storage, including link aggregation and suggested VLAN configuration.
Earlier parts of this article may be found here:
NanoLab – Running VMware vSphere on Intel NUC – Part 1
NanoLab – Running VMware vSphere on Intel NUC – Part 2
NanoLab – Running VMware vSphere on Intel NUC – Part 3