VMware vSphere NanoLab – Part 4 – Network and Storage Choices

Over the past few posts I have gone into detail on configuring a high-WAF vSphere NanoLab, mainly from the perspective of compute. In my case this consists of two Intel NUC nodes, each running a dual-core 1.8GHz Core i3 processor and 16GB of RAM. The main question people have been asking me since I published the series is: what do I use for networking and storage?

Prior to the NanoLab, I had always gone for a vInception type of setup, i.e. everything running inside a single powerful workstation with plenty of RAM. This limits your options a bit; in my case it meant simply using local SSD and SATA storage, presented either as iSCSI from my Windows 2008 R2 server or from a nested FreeNAS 7 VM. For a bit of extra capacity I also had a couple of spare disks in an HP Microserver N36L, presented via another FreeNAS 7 VM under ESXi.

The most frustrating thing about running your VMFS storage from a Windows host is the monthly patching and reboots, which mean you have to take down your entire environment every time. In my case this also includes this blog, which is hosted as a VM in this environment, so moving forward I wanted something a little more secure, flexible and robust, which also adhered to the cost, noise and size requirements you might expect for a NanoLab.


Speed of storage can make or break your experience and productivity when running a home lab. My requirements for a storage device / NAS were:

  • Minimal size
  • Silent or as near silent as possible
  • Low power consumption
  • Minimum 4 disk slots and ability to do RAID 5 (to minimise disk cost and provide flexibility for later growth)
  • Reasonable price


  • VAAI support
  • Decent warranty (if not a home build)
  • Reasonable component redundancy
  • USB3 support in case I want to add any external drives later for some speedy additional storage / backup

After going back and forth between a home-made solution based on another HP Microserver and a pre-configured NAS, I decided that the additional features available in the Synology “Plus” line were too good to pass up. These include:

  • VAAI support for Hardware Assisted Locking (ATS), Block Zero, Full Copy, Thin Provisioning
  • iSCSI snapshot and backup
  • Link aggregation support for the dual gigabit NICs
  • 2-3 year warranty depending on the model
  • iSCSI or NFS (VAAI on iSCSI volumes only)

They were also recommended by a number of vExperts such as Jason Nash, Chris Wahl and Julian Wood, which is always a good justification to go for one! 🙂

The 1512+ was very tempting, but in the end I chose the DS412+ due to its near-silent sub-20dB operation (thanks to an external power brick and two silent hot-swap cooling fans), low power consumption (max 44W under heavy load), minimal footprint and reduced cost. I was tempted to wait and see if a DS413+ comes out any time soon, but Synology are being cagey and I needed the lab upgrades to be done and dusted in a short period. I already have a DS413j which I use for backups, so I can confirm they are very well built little machines, and the noise level claims are indeed accurate!


Into the 412+ I have loaded a pair of 240GB SanDisk Extreme SSDs using SHR (Synology Hybrid RAID). This is effectively just RAID 1 mirroring when only two drives are installed, but it gives me the ability to expand out to a RAID 5 equivalent as I need more space and the price of SSDs (inevitably) comes down. Eventually the box will have around 720GB or more of usable SSD storage (four 240GB drives in a RAID 5 equivalent layout leaves roughly three drives' worth usable), more than enough for a decent bunch of lab VMs! Another alternative would be a pair of SSDs for VM boot partitions / config files, and a pair of SATA drives for VM data partitions.
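Once an iSCSI LUN from the DS is presented to the hosts, it is easy to confirm that ESXi has actually picked up the VAAI primitives. A minimal check from the ESXi shell, assuming a vSphere 5.x host – the naa device identifier below is just a placeholder for your own LUN's ID:

    # List block devices and note the naa identifier of the Synology LUN
    esxcli storage core device list | grep -i "Display Name"

    # Show which VAAI primitives (ATS, Clone, Zero, Delete) are reported for that device
    esxcli storage core device vaai status get -d naa.xxxxxxxxxxxxxxxx

If the plugin is doing its job, the ATS, Clone and Zero statuses should come back as supported on the Synology LUN.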


Although you can easily build a great home lab on a flat network with any old cheap switch, the ability to experiment with more advanced features is highly desirable. My requirements for a managed switch were:

  • Minimal size
  • Passive cooling (for silent operation)
  • Low power consumption
  • Minimum of 8x 1 gigabit ports (or preferably more)
  • Link aggregation
  • QoS
  • Security – VLANs, PVLANs, ACLs, & Layer 3 switching
  • SSH access for command line management


  • I am studying for a few Cisco exams over the next year, so a Cisco-branded switch would be preferable
  • Decent warranty

After a great suggestion from Jasper and reading an article by Vladan, I ended up going for the ten-port Cisco SG300-10.


This 10-port switch will allow for:

  • 1-2 ports per NUC (for 2-4 NUC boxes)
  • 2 LACP-enabled ports for the Synology lab storage
  • 2 ports for my personal data storage server (might replace this with a second mid-range Synology NAS later)
  • 2 uplink ports (in my case for a router and a second wireless access point)

This switch is passively cooled, uses only around 10W of power, and as an added bonus Cisco include a limited lifetime warranty! Great if you are going to invest that much in a switch for home!

“As long as the original End User continues to own or use the Product, provided that: fan and power supply warranty is limited to five (5) years. In the event of discontinuance of product manufacture, Cisco warranty support is limited to five (5) years from the announcement of discontinuance.” http://www.cisco.com/en/US/docs/general/warranty/English/LH2DEN__.html

If I had been going for a switch purely on cost I would probably have chosen one of the HP models, as these offer great bang for your buck, but I did want to stick to a Cisco-branded one. I would also have loved to go for the PoE model so I could plug in a VoIP phone later, but the cost of the SG300-10P / MP was at least 50% more, and power consumption would be higher, even when idle.


The entire NanoLab setup above of two NUC boxes, the DS412+ and the SG300-10 takes up about the same volume as a large shoe box, is virtually silent, and idles at a combined 50-60 watts, staying under 100 watts even under load. That’s less than a couple of halogen light bulbs!

In my next post I will go through the process of configuring the network and storage, including link aggregation and suggested VLAN configuration.

Earlier parts of this article may be found here:
NanoLab – Running VMware vSphere on Intel NUC – Part 1
NanoLab – Running VMware vSphere on Intel NUC – Part 2
NanoLab – Running VMware vSphere on Intel NUC – Part 3



  1. […] Other parts of this article may be found here: NanoLab – Running VMware vSphere on Intel NUC – Part 1 NanoLab – Running VMware vSphere on Intel NUC – Part 3 VMware vSphere NanoLab – Part 4 – Network and Storage Choices […]

  2. […] Other parts of this article may be found here: NanoLab – Running VMware vSphere on Intel NUC – Part 1 NanoLab – Running VMware vSphere on Intel NUC – Part 2 VMware vSphere NanoLab – Part 4 – Network and Storage Choices […]

  3. jym says:

    I must admit that I am certainly following your steps to build my home lab. I bought one Intel NUC with the same config and installed ESXi on it. Now I am planning to build or buy a shared storage system like yours. But I am confused how you connect the storage to the NUC as it has only one NIC port – are you doing a separate port group and VLAN tagging with your Cisco switch?

    • Cool 🙂

      It is possible to buy mini-PCI Ethernet cards which would work in the NUC, but I haven’t tested this yet as they’re about £50 each. My current setup is based on a simple flat network, but I am planning to move to VLAN trunking in the next couple of weeks, at which point I’ll write an article about it.
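
      For anyone who wants to try the tagged setup before that article lands, the basic idea is simply to give each traffic type its own port group with a VLAN ID on the existing vSwitch, and trunk those VLANs on the NUC's switch port. A rough sketch from the ESXi 5.x shell – the port group names and VLAN IDs here are purely examples, not my final design:

          # Add a tagged port group for VM traffic on the existing standard vSwitch
          esxcli network vswitch standard portgroup add --portgroup-name=VM-VLAN20 --vswitch-name=vSwitch0
          esxcli network vswitch standard portgroup set --portgroup-name=VM-VLAN20 --vlan-id=20

          # And another for storage traffic
          esxcli network vswitch standard portgroup add --portgroup-name=Storage-VLAN30 --vswitch-name=vSwitch0
          esxcli network vswitch standard portgroup set --portgroup-name=Storage-VLAN30 --vlan-id=30

      The same thing can of course be done in a few clicks from the vSphere Client networking tab.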

      • Lars Stokholm says:

        Hi. Just bought a NUC for testing – and it looks really good. Looking forward to your next post since I really want the extra Ethernet 🙂

      • Nigel Hardy says:

        Hi Alex,
        I’m also building a NUC-based lab. Did you get anywhere with the idea of a 2nd NIC using a mini-PCIE card? Without a 2nd NIC I’m finding it impossible to use a DVS.

        It doesn’t seem possible to migrate a vCenter server and the host it is running on to a DVS at the same time (it fails, rolls back and leaves things broken). I can’t seem to convert a host to the DVS and then vMotion vCenter to it from a standard switch, as the destination doesn’t have the right port group. I can’t cold migrate it either, as the DVS needs vCenter to allocate a DVS port.

        There’s a YouTube video showing DVS on a single-NIC ESXi host, but that seems to be done using an off-cluster vCenter. It may be possible by creating an internal-only vSwitch and connecting a new vmknic and vCenter to it. Something to try when I’ve got a few hours.
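
        Something like this from the ESXi shell might do it (untested on my side, and the vSwitch / port group / vmk names are just examples):

            # Internal-only vSwitch with no uplinks, plus a port group for vCenter and a vmkernel interface
            esxcli network vswitch standard add --vswitch-name=vSwitch1
            esxcli network vswitch standard portgroup add --portgroup-name=Internal --vswitch-name=vSwitch1
            esxcli network ip interface add --interface-name=vmk1 --portgroup-name=Internal
            esxcli network ip interface ipv4 set --interface-name=vmk1 --ipv4=192.168.250.1 --netmask=255.255.255.0 --type=static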

        There are some gigabit NIC mini-PCIe cards around. They’re full length, which means I won’t be able to use the full-length 64GB mSATA local cache drives I’ve got, but it might be possible to fit them if using a half-size mSATA drive or no internal storage. Some case modification would be needed to pass the connector out, if the internal 8-pin connector will fit inside the NUC at all.

        I managed to pick up an i5 NUC for £240 before Dabs realised their pricing mistake. Nice thing with the i5 NUC is it has vPro/AMT giving you iLO-type console access via VNC.

        • Glad to hear you got your i5 at a great price! I think I may be looking to purchase one (or maybe two) in the next couple of months too, to take my lab up to 3 hosts, and perhaps use one as a replication host / cluster for Veeam etc. testing. The vPro feature is the biggest selling point! When I do that I will give setting up a local dvSwitch on the single NIC a go (though they’re not that cheap – about £50 last I looked).

          I haven’t had the chance to buy or test the gigabit mini-PCIe NICs yet. The only issue is routing the port outside of the case (not sure I want to hacksaw it?), so I’m currently running on standard switches. If you want the dvSwitch for testing only and can’t get it to work, what about running ESXi nested?

  4. Mike says:

    I am curious as to the performance impact of software iSCSI on the Intel NUC. I currently have an N40L (FreeNAS) with mirrored drives that I plan to soon test as shared storage for vSphere. My vSphere server needs to be updated and the NUC looks like a great option for 2x 16GB servers.

    • Hi Mike, I think it comes down to which iSCSI server software you use. It may be worth looking at NFS as well, as you can get significantly different performance depending on your appliance, even when running on the same hardware. I used to use FreeNAS 0.7 quite a bit.

      • Mike says:

        Thanks for the reply – I was referring to the software iSCSI initiator in vSphere … I’ll look into NFS again too – been a long time since I looked at the pros and cons of each with regard to vSphere! Thanks again 🙂

        • No probs. I think the key thing is what features you get from your NFS/iSCSI target. In my case I used FreeNAS with iSCSI, as I tested it and found the performance to be better than NFS for that appliance, but NFS on other appliances / OSes has been good too. Now I use the DS, which has VAAI support on iSCSI only, and it runs like a beast!

          In terms of vSphere pros and cons, I think for a home lab setup it doesn’t make a huge difference and you should go with whatever is most simple, inexpensive and performant. Once you start scaling up to production workloads it becomes more important to get deep into the guts of the differences.

          The other thing is to ask yourself what features you want to learn more about. If it’s VMFS etc. then iSCSI is definitely the way to go.

          Whichever you choose, I recommend doing some simple testing for performance as there can be some significant differences. I did a bit of testing with FreeNAS iSCSI vs NFS in an article last year which showed iSCSI on FreeNAS 0.7 to be WAY faster:


          Note this was done using a vInception model, hence I got >1Gbps speeds for the iSCSI reads.
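
          If you want to run a similar comparison yourself, even a crude sequential test from inside a Linux guest on each datastore type will show up big differences. A rough sketch – run it from a directory on a virtual disk that lives on the datastore under test, and treat the sizes as examples only:

              # Sequential write, bypassing the guest page cache so the datastore does the work
              dd if=/dev/zero of=ddtest.bin bs=1M count=1024 oflag=direct

              # Sequential read of the same file, again with direct I/O
              dd if=ddtest.bin of=/dev/null bs=1M iflag=direct
              rm ddtest.bin

          For anything more serious, something like Iometer inside a Windows VM will give you proper mixed read/write and random I/O numbers.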

          • Mike says:

            Thanks for the link – NFS looks pretty awful in your test. I’ll have to test with FreeNAS 8.x. I’ll probably have to get another HP Microserver too (I just wish they were a bit more energy efficient). It’s a shame that it’s not possible to do a USB datastore, with USB 3 coming on the i5 NUC.

  5. Happy One says:

    All right. As you know, I am so happy to have found this article series. I have mine, and I bit the bullet and bought a DS1812+ (kind of nervous about when the wife notices the silence and wonders what I did (bought) to make that happen) 🙂
    Anyway, for a couple of years now I’ve been wanting to use ZFS, and I had bought drives to replace my existing stuff and planned to rebuild, moving from OpenFiler to OI + napp-it or something like that.
    Well, the OpenFiler disks died: three of them (one by one during resilvering) on my RAID6. After cursing for hours, I bought the Synology above. Here is the thing: Synology and those NAS devices are still using just RAID-level protection and file systems like ext3/4, etc.
    I am racking my head trying to figure out how to use the awesome features of the Synology and still get ZFS for my files and the NUC.
    Initially I was going to do a RAID6 of the drives, and then create iSCSI targets as files (one of the Synology options). I would then mount those into the NUC’s ESXi and vRDM them into a virtual ZFS VM. Here are two issues:
    1. iSCSI as a file in Synology is known to be pretty slow; they recommend doing iSCSI from a whole volume.
    2. I only have 1 NIC in the NUC (read that fast and often… it sounds funny). iSCSI can use a lot of bandwidth and definitely a lot of memory.

    I am not sure if with OI + napp-it I can mount the iSCSI targets directly (OS-based initiator) and bypass the RDM, but even then, I am still constrained by that single GigE.

    Hellllppp. What do you suggest?

    • Hi Happy 🙂

      Personally I haven’t had any issues with bandwidth to the NUC, or indeed with the storage performance of file-based iSCSI, even when running up to 10 VMs per host. This is even better as iSCSI on the DS+ range supports VAAI. To give you an example, I have happily pushed 1200 IOPS in Storage vMotion jobs from my Windows-based iSCSI target to my DS412+, and I can clone a 10GB VM template in under 30 seconds (I use SSDs in my DS)!

      If you are really worried about iSCSI (file-based) shares on your DS (which I wouldn’t be if I were you), you could just do iSCSI volume-based shares and ZFS the VMDKs in your VMs, but this seems like a really nasty way to do things, with lots of inherent risks just for a slight perceived performance improvement?…

      Remember that using ZFS or SHR or any RAID-based system is NOT a backup. Personally I invested in a second “j” range Synology (a DS413j, though they also have a DS213j now), which are dirt cheap, and I back everything up to it. Between SHR and a backup to another DS using the built-in backup tools, you will be in a much better place than using one machine with ZFS anyway.

      Hope that helps?

      • Happy One says:

        It absolutely does. Thank you so much. The issue with having another NAS is going to be the cost of disks, but… I do have a couple lying around, and still have my old enclosure; I just need something quiet to drive it. hhahahha. Anyway, I do back up online (CrashPlan+) and will be figuring something else out for local backup as well (I wish I could do ZFS in an old Buffalo TeraStation).
        Now, on to the main part. If I understood you correctly, you agree that in your case (SSDs and all) things work well in the following setup:
        - In the DS1812+
        * Create a large volume as if you were going to share files straight from it.
        * Create an iSCSI file-based target that is as large as you can make it (up to the volume size, potentially)
        - In ESXi (the awesome NUC)
        * Attach the iSCSI target and format it as VMFS so that it is set up as a datastore
        * Create a VM for whatever system you will use (Nexenta/OI + napp-it) and create its disks in that datastore (thin provisioned, I guess)
        * Boot up that VM, configure it as if it were just using straight disks, create whatever shares I need for files, music, etc.
        * And love a quiet life.

        Did I get that right?

        My wanting to store the files that we use at home (pics, video, etc.) on ZFS is related to bit rot, but I know I still need to back up.

        Last, and yes I have thick skin (sniff sniff), am I overcomplicating it? Should I just let go of wanting to use ZFS for bit rot, and instead just rely on my backups (and spend time on figuring out the local backup strategy, which could be as simple as some SATA drives attached to the 1812+ and CrashPlan dumping a copy of the backups there)?

        Still confused, but a lil’ more hopeful thanks to you. Let me know your thoughts, pls.
        thx. 🙂

        • It may be overcomplicating things; it depends on how much storage you need. If it’s a lot, the easiest thing to do is simply share the files straight off the DS1812+ using the shared folders feature. This can be mounted as NFS or CIFS so it will work with any OS, and can use rsync for block-level incremental backups to minimise bandwidth if you happen to back up externally. Then, as you said, create an iSCSI target (remembering to tick the box to enable VAAI support, though I think this may be checked by default) and create as many virtual machines as you like from there to mess about with.
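
          For the ESXi end of that, attaching the file-based iSCSI target is only a few steps. A rough sketch from the ESXi 5.x shell – the adapter name (often vmhba33 or similar) and the IP address below are placeholders for your own values:

              # Enable the software iSCSI initiator and find its adapter name
              esxcli iscsi software set --enabled=true
              esxcli iscsi adapter list

              # Point dynamic discovery at the DS and rescan for the new LUN
              esxcli iscsi adapter discovery sendtarget add --adapter=vmhba33 --address=192.168.1.50:3260
              esxcli storage core adapter rescan --adapter=vmhba33

          Once the LUN shows up you can format it as VMFS from the vSphere Client and carry on exactly as you described.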

          If ZFS is an absolute must for you then go ahead and store your files inside a VM, just make sure you have a good OS-based backup method, or replicate / back up the files to another physical machine / external disk for safety. I use the iSCSI backup system built into the DS NAS to snap and back up the iSCSI LUNs to my DS413j. This is useful as a simple crash-consistent restore of my production VMs, but I still like a file-level backup of key data if possible too – it depends how risk averse you are! 🙂

  6. Steve says:

    “In my next post I will go through the process of configuring the network and storage, including link aggregation and suggested VLAN configuration”

    Have you released this article yet? I’m very interested in reading it and am unable to find it using the site search! Thanks!

    • Hi Steve, it’s in the pipeline. I want to get some screenshots when I set up my second DS412+. Work has been uber busy of late so I haven’t had the time yet. Sorry about that!

      • Pete says:

        Come on, my man. Let’s get that article out of you! 🙂

        Thanks a ton for all this info. Incredibly helpful. I’m starting to do the research for a home lab for myself, and these Intel boxes look pretty sweet. Shame about the limited networking, but it is a lab, after all. My next step is to justify the cost to my wife (somehow). Thanks again.

  7. Jonathan Fearnley says:

    I too would very much like to see your network configuration with the single network interface card found on the Intel NUC. Please do post.

  8. Dharms says:

    Hi Alex, looking forward to your networking article – preferably, if possible, using 2 physical NIC ports, as I have 2 desktop Intel PCs I would like to use. I see you are studying for your CCNA, so hoping the article is not too long away 🙂

  9. Dharms says:

    Just wanted to let you know about a catch-22 when wanting to have a vCenter VM on the same cluster that the VM is built on – hope that makes sense. Here are two links that explain how to get round the issue.
