Windows Server 2012 Storage Spaces Missing Disks

This is just a quick post about an annoying bug I came across today while messing with Windows Server 2012 Storage Spaces. The bug apparently affects a significant number of RAID controllers, including the embedded AMD SATA controller in the HP Microserver N36L, which I am currently in the process of configuring as a remote personal backup server.

As you can see from the screenshot below, the main symptom is that the Storage Spaces UI does not show all of the available disks in the primordial storage pool. There are actually three 1TB physical drives in the server below, however only a single drive appears (which can be any one of the three drives in slots 2/3/4 when I refresh the view):

Primordial storage space only showing a single physical drive

This is caused by the RAID controller presenting all disks with the same UniqueID. You can list your UniqueIDs by typing the following command into a PowerShell window:

Get-PhysicalDisk | ft FriendlyName, UniqueId, ObjectId, BusType -auto

The result looks something like this:

3 identical UniqueIDs
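If you want to confirm the problem from PowerShell rather than eyeballing the output, one quick (illustrative) check is to group the disks by UniqueId; any group with more than one member means your controller is presenting duplicates:

```powershell
# Group physical disks by UniqueId; any group with Count > 1 means the
# controller is presenting duplicate IDs, which confuses Storage Spaces.
Get-PhysicalDisk |
    Group-Object -Property UniqueId |
    Where-Object { $_.Count -gt 1 } |
    Select-Object Count, Name
```

If this returns nothing, your disks all have unique IDs and something else is going on.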

This is an annoying bug, but a simple workaround is available for Microserver users, and I’m sure a similar approach could be taken on other platforms. Simply load up the AMD RAIDXpert UI (or boot into the BIOS) and configure each individual drive as a single RAID Ready device as follows:

Use AMD RAIDXpert to create individual RAID Ready drives

Complete RAID Ready Drive List

This causes the RAID controller to present an individual UniqueID for each drive through to the OS:

Actually unique UniqueIDs!

You can then go ahead and create your storage space as normal from the primordial pool:

Primordial Storage Space now shows all 3 physical drives
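As an aside, if you prefer PowerShell to the UI, the pool and space can also be created from the command line. This is just a sketch, with a two-way mirror assumed; the pool and space names here are illustrative:

```powershell
# List the disks that are eligible for pooling (i.e. in the primordial pool)
$disks = Get-PhysicalDisk -CanPool $true

# Find the local Storage Spaces subsystem and create a new pool on it
$subsys = Get-StorageSubSystem -FriendlyName "*Storage Spaces*"
New-StoragePool -FriendlyName "BackupPool" `
    -StorageSubSystemFriendlyName $subsys.FriendlyName `
    -PhysicalDisks $disks

# Carve a mirrored virtual disk (storage space) out of the new pool
New-VirtualDisk -StoragePoolFriendlyName "BackupPool" `
    -FriendlyName "BackupSpace" `
    -ResiliencySettingName Mirror -UseMaximumSize
```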

Hope this helps a few people as it drove me potty before I worked out what was going on!

Bonus Tip: Another wee tip I read recently is that storage spaces are NOT supported inside a virtual machine. I know you would need a quite specific (read: odd) use case to even consider doing this, so just don't! 🙂


NanoLab – Harder, Better, Faster, Stronger Intel NUC Models Out Soon!

This story actually broke about a week ago, but it's been quite a busy week for me so I didn't get around to posting (other than on Twitter for those who follow me). I thought it would be worth a short post for anyone who may have missed it.

In essence, for people who have held out from buying either an Intel NUC or even an HP Microserver for their home lab due to the lack of CPU grunt, good news is on the way! The specs for the new range of Intel NUC boxes due out in Q2, featuring Intel Core i5 and i7 processors, were leaked last week. They were published by Computer Base and are as follows:

D53427RK – Rend Lake

D53427HYE – Horse Canyon

D73537KK – Skull Canyon

Looking at the new models, the best (and most feature-rich) for me is the i5-3427U D53427HYE (Horse Canyon – includes enclosure). This model includes vPro / AMT support, a superbly useful feature if you plan to run these machines headless, as I currently do. It seems strange not to include this feature with the i7 version (Skull Canyon – DC73537SY). The i5 is likely to be a little easier on the pocket than the i7 whilst still allowing turbo up to 2.8GHz, and with a base clock speed of 1.8GHz it will hopefully run a little cooler than the i7 (even the i3 chassis can get very warm indeed!). Both models come with USB 3.0, which is unfortunately of limited use unless you plan to mount a USB drive into your VMs via VT-d, which is now also included with both new CPU models.

Comparing the CPUs via their CPU Benchmark scores, we can see that the i5 gives a great performance leap over the older i3 line (DC3217IYE), but the further jump to the i7 is not so great, and the i7 also doesn't include vPro. I have included the scores for the HP Microserver line for comparison:

| Model | Cores / Threads / Logical CPUs | Clock Speed / Turbo | Cache | Max TDP | CPU Benchmark |
|---|---|---|---|---|---|
| Intel Core i3-3217U | 2 / 2 / 4 | 1.80 GHz / None | 3 MB | 17 Watts | 2272 |
| Intel Core i5-3427U | 2 / 2 / 4 | 1.80 GHz / 2.80 GHz | 3 MB | 17 Watts | 3611 |
| Intel Core i7-3537U | 2 / 2 / 4 | 2.00 GHz / 3.10 GHz | 4 MB | 17 Watts | 3766 |
| AMD Athlon II Neo N36L | 2 / 1 / 2 | 1.30 GHz / None | 2 MB | 12 Watts | 751 |
| AMD Turion II Neo N40L | 2 / 2 / 4 | 1.50 GHz / None | 2 MB | 15 Watts | 946 |
| AMD Turion II Neo N54L | 2 / 2 / 4 | 2.20 GHz / None | 2 MB | 25 Watts | 1314 |

My guess is that two things will probably happen when it comes to pricing: the current line of NUCs will drop in price a bit, and the new line will come in at a higher price bracket. This means a premium for people wanting the extra grunt, but better prices for everyone else! Personally I have not found any issues with the grunt I get from the 1.8GHz i3, especially when running off SSDs (where your bottleneck usually lies in a lab or production!), so I will probably stick with my i3 pair for now… at least until the i5 range becomes so cheap I feel compelled to buy a couple!

If I hadn’t already invested, I would be sorely tempted to start my Intel NUC lab with the i5 range, but if a key decision driver is cost, the i3 won’t let you down! 🙂

Other NanoLab articles may be found here:
NanoLab Articles


VMware vSphere NanoLab – Part 4 – Network and Storage Choices

Over the past few posts I have gone into the detail of configuring a high-WAF vSphere NanoLab, mainly from the perspective of compute. In my case this consists of two Intel NUC nodes, each running a dual-core 1.8GHz Core i3 processor and 16GB of RAM. The main question people have been asking me since I published the series is: what do I use for networking and storage?

Prior to the NanoLab, I had always gone for a vInception type of setup, i.e. everything running inside a single powerful workstation with plenty of RAM. This limits your options a bit; in my case it meant simply using local SSD & SATA storage, presented either as iSCSI from my Windows 2008 R2 server or from a nested FreeNAS 7 VM. For a bit of extra capacity I also had a couple of spare disks in an HP Microserver N36L presented via another FreeNAS 7 VM under ESXi.

The most frustrating thing about running your VMFS storage from a Windows host is the monthly patching and reboots, meaning you have to take down your entire environment every time. In my case this also includes this blog, which is hosted as a VM in this environment, so moving forward I wanted something a little more secure, flexible and robust, which also adhered to the cost, noise and size requirements you might expect for a NanoLab.

Storage

Speed of storage can make or break your experience and productivity when running a home lab. My requirements for a storage device / NAS were:

  • Minimal size
  • Silent or as near silent as possible
  • Low power consumption
  • Minimum 4 disk slots and ability to do RAID 5 (to minimise disk cost and provide flexibility for later growth)
  • Reasonable price

Optionally:

  • VAAI support
  • Decent warranty (if not a home build)
  • Reasonable component redundancy
  • USB3 support in case I want to add any external drives later for some speedy additional storage / backup

After going back and forth between a home-made solution based on another HP Microserver, or a pre-configured NAS, I decided that the additional features available in the Synology “Plus” line were too good to pass up. These include:

  • VAAI support for Hardware Assisted Locking (ATS), Block Zero, Full Copy, Thin Provisioning
  • iSCSI snapshot and backup
  • Link aggregation support for the dual gigabit NICs
  • 2-3 year warranty depending on the model
  • iSCSI or NFS (VAAI on iSCSI volumes only)

They were also recommended by a number of vExperts such as Jason Nash, Chris Wahl and Julian Wood, which is always a good justification to go for one! 🙂

The 1512+ was very tempting, but in the end I chose the DS412+ due to its near-silent sub-20dB operation (thanks to an external power brick and two hot-swap silent cooling fans), low power consumption (max 44W under heavy load), minimal footprint and reduced cost. I was tempted to wait and see if a DS413+ comes out any time soon, but Synology are being cagey and I needed the lab upgrades done and dusted in a short period. I already have a DS413j which I use for backups, so I can confirm they are very well built little machines, and the noise level claims are indeed accurate!

Into the DS412+ I have loaded a pair of 240GB SanDisk Extreme SSDs using SHR (Synology Hybrid RAID). This is effectively just RAID 1 mirroring when only two drives are installed, but gives me the ability to expand out to a RAID 5 equivalent as I need more space and the price of SSDs (inevitably) comes down. Eventually the box will have around 720GB or more of usable SSD storage, more than enough for a decent bunch of lab VMs! Another alternative would be a pair of SSDs for VM boot partitions / config files, and a pair of SATA drives for VM data partitions.

Networking

Although you can easily build a great home lab on a flat network with any old cheap switch, the ability to experiment with more advanced features is highly desirable. My requirements for a managed switch were:

  • Minimal size
  • Passive cooling (for silent operation)
  • Low power consumption
  • Minimum of 8x 1 gigabit ports (or preferably more)
  • Link aggregation
  • QoS
  • Security – VLANs, PVLANs, ACLs, & Layer 3 switching
  • SSH access for command line management

Optionally:

  • I am studying for a few Cisco exams over the next year, so a Cisco branded switch would be preferable
  • Decent warranty

After a great suggestion from Jasper and reading an article by Vladan I ended up going for the ten port Cisco SG300-10.

SG300-10

This 10-port switch will allow for:

  • 1-2 ports per NUC (for 2-4 NUC boxes)
  • 2 LACP enabled ports for the Synology lab storage
  • 2 ports for my personal data storage server (might replace this with a second mid-range Synology NAS later)
  • 2 uplink ports (In my case for a router and a second wireless access point)

This switch is passively cooled, uses only around 10W of power, and as an added bonus Cisco include a limited lifetime warranty! Great if you are going to invest that much in a switch for home!

“As long as the original End User continues to own or use the Product, provided that: fan and power supply warranty is limited to five (5) years. In the event of discontinuance of product manufacture, Cisco warranty support is limited to five (5) years from the announcement of discontinuance.” http://www.cisco.com/en/US/docs/general/warranty/English/LH2DEN__.html

If I had been choosing a switch purely on cost I would probably have gone for one of the HP models, as these offer great bang for your buck, but I did want to stick with a Cisco branded one. I would also have loved to go for the PoE model so I could plug in a VoIP phone later, but the SG300-10P / MP cost at least 50% more, and power consumption would be higher, even when idle.

WAF

The entire NanoLab setup above (two NUC boxes, the DS412+ and the SG300-10) takes up about the same volume as a large shoe box, is virtually silent, and idles at a combined 50-60 watts, staying under 100 watts even under load. That's less than a couple of halogen light bulbs!

In my next post I will go through the process of configuring the network and storage, including link aggregation and suggested VLAN configuration.

Earlier parts of this article may be found here:
NanoLab – Running VMware vSphere on Intel NUC – Part 1
NanoLab – Running VMware vSphere on Intel NUC – Part 2
NanoLab – Running VMware vSphere on Intel NUC – Part 3


NanoLab – Running VMware vSphere on Intel NUC – Part 3

I have really been enjoying messing about with my NanoLab for the past few days and it has already proved invaluable in a couple of projects I'm dealing with at work (mainly in testing some ideas I had for solutions).

These are just a couple of very quick tips for your NUC lab which I came across throughout the week. They will also apply to any other single NIC configuration for a vSphere cluster (e.g. HP Microserver with no extra PCI card), and for booting your cluster from a USB pen drive.

The tips are both simple fixes to remove the (slightly annoying) warning messages you get on each ESXi host in your cluster after you do your initial config.

The host currently has no management network redundancy.
System logs on host <hostname> are stored on non-persistent storage.

Single Management NIC Causes Warning in vCenter

The host currently has no management network redundancy.

To get rid of this (assuming you don't plan to add further NICs), simply follow KB1004700, which is summarised as follows:

To suppress this message on ESXi/ESX hosts in the VMware High Availability (HA) cluster, or if the warning appears for a host already configured in a cluster, set the VMware HA advanced option das.ignoreRedundantNetWarning to true and reconfigure VMware HA on that host.

To set das.ignoreRedundantNetWarning to true:

  1. From the VMware Infrastructure Client, right-click on the cluster and click Edit Settings.
  2. Select vSphere HA and click Advanced Options.
  3. In the Options column, enter das.ignoreRedundantNetWarning.
  4. In the Value column, enter true.
    Note: Steps 3 and 4 create a new option.
  5. Click OK.
  6. Right-click the host and click Reconfigure for vSphere HA. This reconfigures HA.

singlenetwork
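If you manage your hosts with PowerCLI, the same advanced option can be set from the command line instead. This is only a sketch; the vCenter address and cluster name are illustrative, and it assumes VMware PowerCLI is installed:

```powershell
# Connect to vCenter first (address is illustrative)
Connect-VIServer -Server vcenter.lab.local

# Set the HA advanced option on the cluster
Get-Cluster -Name "NanoLab" |
    New-AdvancedSetting -Type ClusterHA `
        -Name "das.ignoreRedundantNetWarning" -Value "true" -Confirm:$false
```

You may still need to reconfigure vSphere HA on each host afterwards for the warning to clear, as per the KB steps above.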

Booting from USB Pen Drive Causes Warning

System logs on host <hostname> are stored on non-persistent storage

This is caused by booting from the USB device. It is very simple to remove by redirecting logs to a syslog server. A prime example for your home lab would be the syslog server which comes as standard with the vCenter Server Appliance, but your home NAS may well have this functionality too, you could run a Linux VM to collect the logs, or alternatively you could use a great log centralisation product called Splunk (free for up to 500MB of logs per day!).

To point your ESXi hosts to any syslog server, simply:

  1. From the VMware Infrastructure Client, select the host.
  2. Select the Configuration tab, then click Advanced Settings.
  3. In the left column expand Syslog, then click global.
  4. In the right panel, in the Syslog.global.logHost box, enter the IP or hostname of your syslog server.
  5. Click OK.
  6. Your host is now configured to forward all logs to your syslog server and the non-persistent storage warning will be suppressed.

syslog
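For PowerCLI users, the same setting can be applied to every host in one go. A sketch, assuming PowerCLI is installed and already connected to vCenter (the syslog server IP is illustrative):

```powershell
# Point every host at the syslog server on the standard UDP port
Get-VMHost |
    Set-VMHostSysLogServer -SysLogServer "192.168.1.10" -SysLogServerPort 514
```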

Once you have enabled the redirection you also need to open the outbound port on your ESXi hosts (thanks to Sam for the reminder).

  1. From the VMware Infrastructure Client, select the host.
  2. Select the Configuration tab, then select Security Profile.
  3. Next to Firewall, click Properties…
  4. Scroll down to syslog and tick the check box to open ports 514/1514.
  5. Click OK.

open syslog ports
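Again, this step can be scripted with PowerCLI if you have several hosts. A sketch, assuming an existing vCenter connection:

```powershell
# Enable the built-in syslog firewall ruleset (ports 514/1514) on every host
Get-VMHost | ForEach-Object {
    Get-VMHostFirewallException -VMHost $_ -Name "syslog" |
        Set-VMHostFirewallException -Enabled $true
}
```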

If anyone else comes across any useful NUC related homelab tips, please feel free to comment or mail them to me and I’ll add them to the list.

UPDATE: Duncan Epping describes the das.ignoreRedundantNetWarning fix on his blog, using the vSphere Web Client here:
http://www.yellow-bricks.com/2015/05/21/this-host-currently-has-no-network-management-redundancy/

Other parts of this article may be found here:
NanoLab – Running VMware vSphere on Intel NUC – Part 1
NanoLab – Running VMware vSphere on Intel NUC – Part 2
VMware vSphere NanoLab – Part 4 – Network and Storage Choices
