Software Defined Storage Virtualisation – How useful is that then?

Ignoring the buzzword bingo post title, storage virtualisation is not a new thing (and for my American cousins, yes, it should be spelt with an s! 🙂 ).

NetApp, for example, have been selling a V-Series controller for many years which could virtualise pretty much any storage you stick in the back of it, then present it as NFS and layer on all of the standard ONTAP features.

The big advantage was that you could use features which might otherwise be missing from your primary or secondary storage tiers, as well as mix and match different tiers of storage behind the same platform.

In a previous role, we had an annual process to take a full backup of a 65TB Oracle database and restore it at a second site over a rather slow link, using an ageing VTL that could just about cope with incrementals and not much more on a day-to-day basis. End to end, this process took a month!

Then one year we came up with a plan to use virtualised NFS storage to do compressed RMAN backups, replicate the data using SnapMirror, and restore on the other side. It took us 3 days; an order of magnitude improvement!

That was 4 years ago, when there was roughly a quarter as much data globally as there is now; the problem of data inertia is only going to get worse as the world's storage consumption doubles roughly every two years!

What businesses need is the flexibility to use a heterogeneous pool of storage of different tiers and vendors in different locations, moving their data around as required to meet the current IT strategy, without having to change paths to data or take downtime (especially for non-virtualised workloads which don’t have the benefit of Storage vMotion etc.). These tiers need to provide consistent performance, defined by individual application requirements.

It’s for this reason that I was really interested in the presentation from Primary Data at Storage Field Day 8. They were founded just two years ago, came out of stealth at VMworld 2015, and plan to go GA with their first product in less than a month’s time. They also have some big technical guns in the form of their Chief Scientist, the inimitable Steve Wozniak!

One of the limitations of the system I used in the past was that it was ultimately a physical appliance, with all the usual drawbacks thereof. Primary Data are providing the power to abstract data services based on software only, presented in the most appropriate format for the workload at hand (e.g. for vSphere, Windows, Linux etc), so issues with data gravity and inertia are effectively mitigated. I immediately see three big benefits:

  • Not only can we decouple the physical location of the data from its logical representation and therefore move that data at will, we can also very quickly take advantage of emerging storage technologies such as VVOLs.
    Some companies who shall remain nameless (and happen to have just been bought by a four letter competitor) won’t have support for VVOLs for up to another 12 months on some of their products, but with the “shim” layer of storage virtualisation from Primary Data, we could do it today on virtually any storage platform, whether it is VVOL compliant or not. Now that is cool!
  • By virtualising the data plane and effectively using the underlying storage as object storage / chains of blocks, they enable additional data services which may either not be included with the current storage, or may be an expensive add-on license. A perfect example of this is sync and async replication between heterogeneous devices.
    Perhaps then you could spend the bulk of your budget on fast and expensive storage in your primary DC from vendor A, then replicate to your DR site asynchronously onto cheaper storage from vendor B, or even a hyper-converged storage environment using all local server media. The possibilities are broad to say the least!
  • The inclusion of policy-based Quality of Service from day one. In Primary Data parlance these are SLOs – Service Level Objectives – defining targets such as IOPS and latency for individual applications.
    QoS does not even exist as a concept on many recent storage devices, much to the chagrin of many service providers, so being able to retrofit it would protect the ROI on existing spend whilst keeping the platform services up to date (a purely illustrative sketch of what such a policy might look like follows below).
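
Primary Data hadn’t published their API detail at the time of writing, so the property names below are entirely my own assumptions rather than their actual interface; the point is simply that an SLO is a declarative, per-application set of targets which the data plane tries to satisfy by placing (and moving) data on whatever backing storage can meet them. A rough PowerShell illustration:

# Purely illustrative sketch - these property names are my own invention,
# not Primary Data's actual API.
$prodOracleSlo = [pscustomobject]@{
    Name         = "prod-oracle"
    TargetIops   = 50000           # sustained IOPS the application needs
    MaxLatencyMs = 2               # worst acceptable response time
    Protection   = "async-replica" # e.g. replicate to a DR site
}

$coldArchiveSlo = [pscustomobject]@{
    Name         = "cold-archive"
    TargetIops   = 500
    MaxLatencyMs = 50
    Protection   = "none"
}

# A policy engine would map each data set to the cheapest tier that still
# satisfies its SLO, and move it non-disruptively if performance drifts.
$prodOracleSlo, $coldArchiveSlo | Format-Table Name, TargetIops, MaxLatencyMs, Protection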

There are however still a few elements which to me are not yet perfect. Access to SMB requires a filter driver in Windows, sitting in front of the SMB client, so the client thinks it’s talking to an SMB server but the traffic actually goes via the control plane, which routes the data to the physical block chains. That could be a bit of a pain to retrofit to any large legacy environment.

vSphere appears to be a first-class tenant in the Primary Data solution, with VASA and NFS-VAAI supported out of the “virtual” box. However, it would be nice to have Primary Data act as a VASA client too, so it could read all of the capabilities of the underlying storage and surface them straight through to the vSphere hosts.

You will still have to do some basic administration on your storage back end to present it through to Primary Data before you can start carving it up in their “Single Pane of Glass”. If they were to create array plugins allowing you to remotely manage many common arrays, it would really make that SPoG shine! (Yes, I have a feverish, unwavering objection to saying that acronym!)

I will certainly be keeping an eye on Primary Data as they come to market. Their initial offering would have solved a number of issues for me in previous roles had it been available a few years earlier, and I can definitely see opportunities where it would work well in my current infrastructure. I guess it’s now up to the market to decide whether it sees the benefits too!

Further Reading
Some of the other SFD8 delegates have their own takes on the presentation we saw. Check them out here:

Ray Lucchesi – Primary data’s path to better data storage presented at SFD8

Dan Frith – Primary Data – Because we all want our storage to do well

Disclaimer/Disclosure: My flights, accommodation, meals, etc. at Storage Field Day 8 were provided by Tech Field Day, but there was no expectation or request for me to write about any of the vendors’ products or services, and I was not compensated in any way for my time at the event.

Storage, Tech Field Day

NanoLab – Part 9 – Installing VMware vSphere ESXi 5.5 on Intel NUC

I successfully ran my VMware vSphere ESXi 5.1 NanoLab for 18 months on my pair of Intel NUC DC3217IYE hosts. Early this year I got around to upgrading to 5.5. I had experienced some issues with my vCenter Server Appliance, so ended up just rebuilding the lab from scratch and reattaching my old data stores. Having written all of this up, I then promptly forgot to post it! So for the sake of continuity (before I do the same for 6.0 shortly), this article covers the process.

In addition, I purchased a 3rd node for my lab, the 4th Gen D34010WYKH model (also with a Core i3), with which I was able to test and prove the process, as it uses the same family of onboard Intel NIC.

The following are updated instructions for installing vSphere 5.5 on Intel NUC (any model with the Intel® 82579V or Intel® I218V onboard NIC should work).

I recommend that before you start, you upgrade the NUC to the latest firmware to avoid any potential bugs (of which there were a few when they were first released). Copy the latest firmware image onto a USB stick, boot the NUC, hit F7 at the BIOS screen, find your firmware on the USB stick and let it do its thing:

Intel NUC Firmware Upgrade

vSphere 5.5 Install Requirements

  • A USB stick. This should work on anything over 1-2GB, but personally I’m using 8GB PNY Micro Sleek Attache and 16GB Kingston DataTraveler Micro drives as they’re tiny, so less likely to catch on anything as they stick out the back of the NUC box, and they cost less than £5 each.
  • A copy of VMware Workstation 8 / Fusion 6 or newer.
  • ESXi-Customizer 2.7.2 (created by Andreas Peetz)
    http://v-front.blogspot.com/p/esxi-customizer.html for adding VIBs to your image. NOTE: This can also be done with PowerShell (http://blogs.vmware.com/vsphere/2012/04/using-the-vsphere-esxi-image-builder-cli.html), but I like the GUI as it’s easy! A rough PowerCLI sketch follows this list.
  • The ESXi driver for the Intel® 82579V Gigabit Ethernet Controller (e.g. for the original models using ESXi 5.5):
  • OR the ESXi driver for the Intel® I218V Gigabit Ethernet Controller (e.g. for the Haswell-based D34010U models):
  • (AND) The ESXi AHCI driver for the SATA controller (if you want to use local drives in the Haswell-based D34010U models):
    • sata-xahci-1.10-1.x86_64
    • If you do choose to add this to your image as well, simply run the customiser twice: once for the network VIB, then a second time for the SATA VIB, using the interim image as the source for the final image.
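
For those who prefer the PowerShell route mentioned above, the Image Builder cmdlets in PowerCLI will build an equivalent custom ISO. The sketch below is untested against these exact builds – the depot paths, cloned profile name and driver package name are placeholders you would swap for your own downloads – but the cmdlets themselves are the standard Image Builder ones:

# Image Builder sketch - run from a PowerCLI session.
# Paths, the cloned profile name and the driver package name are placeholders.

# Load the stock ESXi 5.5 offline bundle plus the community driver bundle
Add-EsxSoftwareDepot "C:\ISO\VMware-ESXi-5.5.0-depot.zip"
Add-EsxSoftwareDepot "C:\ISO\net-e1000e-offline_bundle.zip"

# Clone the standard profile and allow community-supported VIBs
$imageProfile = New-EsxImageProfile -CloneProfile "ESXi-5.5.0-1331820-standard" -Name "ESXi-5.5-NUC" -Vendor "homelab" -AcceptanceLevel CommunitySupported

# Add the NIC driver (repeat with sata-xahci if you need the AHCI driver too)
Add-EsxSoftwarePackage -ImageProfile $imageProfile -SoftwarePackage "net-e1000e"

# Export the result as a bootable ISO
Export-EsxImageProfile -ImageProfile $imageProfile -ExportToIso -FilePath "C:\ISO\ESXi-5.5-NUC-Custom.iso"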

Process Overview

  • Create a customised ISO with the additional Intel driver.
  • Install ESXi to your USB stick using VMware Workstation / VMware Fusion and the customised ISO you will create below.
  • Plug in your NUC, insert the USB stick, boot and go!

Part One – Create the Custom ISO

  1. Run the ESXi-Customizer-v2.7.2.exe (latest version at time of writing).
  2. This will extract the customiser to the directory of your choosing.
  3. Navigate to the new directory.
  4. Run the ESXi-Customizer.cmd batch file. This will open up the GUI, where you can configure the following options:
  • Path to your ESXi Installer
  • Path to the Intel driver downloaded previously
  • Path where you want the new ISO to be saved
  5. Ensure you tick the Create (U)EFI-bootable ISO checkbox.
ESXi-Customizer with 2.3.2 vib

This will output a new custom ESXi installer ISO called ESXi-5.x-Custom.iso or similar, in the path defined above.

Part Two – Install bootable ESXi to the USB stick.
I stress that this is just my preferred way of doing this; an alternative is simply to burn your customised ISO to a CD/DVD and boot using a USB DVD-ROM. That would however be a whole lot slower, and waste a blank CD!

  1. Plug your chosen USB stick into your PC.
  2. Open VMware Workstation (8 or above), VMware Fusion, or whatever you use, ideally supporting the Virtualize Intel VT-x/EPT or AMD-V/RVI option (allowing you to nest 64-bit VMs).
  3. Create a new VM. You can use almost any spec you like, as ESXi checks the hardware on boot anyway, but I created one with similar specs to my intended host: single socket, 2 vCPU cores. RAM doesn’t really matter either, but I normally use at least 4GB. This does not require a virtual hard disk.
  4. Once the VM is created, and before you boot it, edit the CPU settings and tick the Virtualize Intel VT-x/EPT or AMD-V/RVI checkbox. This will reduce errors when installing ESXi (which checks to ensure it can virtualise 64-bit operating systems).

VMware Workstation Nesting

VMware Fusion Nesting

  5. Set the CD/DVD (IDE) configuration to Use ISO image file, and point this to the customised ISO created earlier.
  6. Once the above settings have been configured, power on the VM.
  7. As soon as the VM is powered on, in the bottom right of the screen, right click on the flash disk icon, and click Connect (Disconnect from Host).

Attach USB in VMware Workstation

Attach USB in VMware Fusion

  8. This will mount the USB stick inside the VM, and allow you to do a standard ESXi installation onto the stick.
ESXi Install

  9. At the end of the installation, disconnect the stick, un-mount and unplug it.
Install Complete

Part Three – Boot and go!
This is the easy bit, assuming you don’t have any of the HDMI issues I mentioned in the first post!

  1. Plug your newly installed USB stick into the back of the NUC.
  2. Don’t forget to plug in a network cable (duh!) and keyboard for the initial configuration. If you wish to modify any BIOS settings (optional), you will also ideally need a mouse, as the NUC runs Visual BIOS.
  3. Power on the NUC…
  4. Have fun!

That’s it!

Any questions/comments, please feel free to hit me up on twitter as I have recently disabled comments on my blog due to the insane volumes of spam bots they were attracting!

Intel NUC, NanoLab, VMware

Fix for VMware Remote Console unrecoverable error: (vmrc)

Uber quick post here with a very simple fix.

I got the following error when upgrading my C# viclient from 5.5 to 6.0 and connecting to my new vSphere 6 vCenter instance, and even when connecting to older ESXi hosts directly. (Yes I’m a bit old school – not a web client fan even after several years!).

The following error was observed after connecting and authenticating to vCenter, once the screen had populated with objects. I also could not then connect to any VM console, as it claimed the VM was disconnected.

---------------------------
VMware Remote Console Error
---------------------------
VMware Remote Console unrecoverable error: (vmrc)
GetProcAddress: Failed to resolve ENGINE_load_aesni: 127
You can request support.

To collect data to submit to VMware technical support, run "vm-support".
We will respond on the basis of your support entitlement.

 
It turns out that this is an issue with the remote console plugin. All I needed to do to fix it was:

  1. Rename the following folder C:\Program Files (x86)\Common Files\VMware\VMware Remote Console Plug-in 5.5 to anything you like (e.g. VMware Remote Console Plug-in 5.5-backup) – if you’d rather script this step, see the one-liner sketched after this list
  2. Uninstall the vSphere 6.0 viclient
  3. Reinstall the vSphere 6.0 viclient
  4. Finito!
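
If you’d rather script step 1 than dig through Explorer, the rename is a one-liner from an elevated PowerShell prompt (same path as above; adjust it if your install location differs):

# Back up (rename) the stale Remote Console plug-in folder before
# uninstalling and reinstalling the vSphere 6.0 viclient.
Rename-Item -Path "C:\Program Files (x86)\Common Files\VMware\VMware Remote Console Plug-in 5.5" -NewName "VMware Remote Console Plug-in 5.5-backup"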

Simple fix for a silly little bug, which is most likely related to me running an older version of VMware Workstation on my machine (in my case v8, but it could potentially apply to any older version).

VMware

How often do you upgrade your storage array software?

Upgrades are scary!

Having managed and implemented upgrades on highly available systems such as the old Sun StorageTek line of rebranded HDS USP/VSP arrays back in the day, I can tell you that we did not take upgrades lightly!

Unless there was a very compelling reason for an upgrade, the line taken was always "if it ain't broke, don't fix it", but then we were looking after storage in a massively high-security environment where even minor changes were taken very seriously indeed. When it came to storage we didn't have or need anything very fancy at all: just some high-performance LUNs cut from boatloads of small-capacity 15K drives, a bit of copy-on-write snappage to a set of 3rd party arrays, and some dual-site synchronous replication. Compared to some of the features and configurations of today, that's actually pretty minimal!

Updates

Now this approach meant that the platform was very stable. Great! It also meant that because we only did upgrades once in a blue moon, the processes were not what you might call streamlined, and the changes made by each upgrade were typically numerous, running a pretty decent risk of something breaking. It was also key to check the compatibility matrix for every release, to make sure the 3rd party arrays would continue to function.

They say that software is eating the world; the same could reasonably be said of the hardware storage vendors we saw at Storage Field Day 8, as most seem to be moving towards more agile development models. Little and often means lower risk for every upgrade, as there are fewer changes each time. New features and improvements can be released on a more regular basis (especially those taking advantage of flash technologies, which are changing by the minute!). A significant number of the vendors we saw had internal release cycles of between 2 and 4 weeks, and public release cycles of 2-8 weeks!

One vendor, Pure Storage, is not only releasing code every couple of weeks; their customers have obviously taken this new approach on board with vigour! Around 91% of Pure's customer base is currently running an array software version that is 8 months old or less. An impressive stat indeed!

This is Hardware. Software runs on it…

This sounds like a relatively risky approach, but they mitigate it to a great extent by using the telemetry data uploaded every 30 seconds from customer arrays to their Pure1 SaaS management platform, building up a picture of both individual customers and their customer base as a whole. They then use their fingerprint engine to proactively pre-check every customer array and find out which may be susceptible to any potential defect in a new software release. Arrays which pass this pre-check have the upgrades rolled out remotely by Pure Storage engineers on a group-by-group basis to minimise risk. Obviously this is also done in conjunction and agreement with customers' change windows etc. You wouldn't expect your controllers to start failing over without any notice! 🙂
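
Just to illustrate the pattern (this is emphatically not Pure's actual tooling, and every name below is my own), the logic boils down to filtering the fleet with a fingerprint pre-check and then upgrading the survivors in small groups:

# Conceptual sketch only - illustrates "pre-check the fleet, then roll out
# in groups", not any real Pure Storage code.
$fleet = @(
    [pscustomobject]@{ Array = "cust-a-01"; MatchesKnownDefect = $false }
    [pscustomobject]@{ Array = "cust-b-01"; MatchesKnownDefect = $true  }
    [pscustomobject]@{ Array = "cust-c-01"; MatchesKnownDefect = $false }
)

# Only arrays whose telemetry fingerprint shows no match against known
# defects in the new release are eligible for this upgrade wave.
$eligible = @($fleet | Where-Object { -not $_.MatchesKnownDefect })

# Upgrade in small groups rather than the whole estate at once,
# within each customer's agreed change window.
$groupSize = 2
for ($i = 0; $i -lt $eligible.Count; $i += $groupSize) {
    $group = $eligible[$i..([Math]::Min($i + $groupSize - 1, $eligible.Count - 1))]
    Write-Host ("Upgrading group: " + ($group.Array -join ", "))
}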

If I’m honest, I am torn in two about this approach. The ancient storage curmudgeon in me says an array should just sit in the corner of the room quietly ticking away, with minimal risk to availability and data durability (at least from known bugs, anyway!). The new style of approach means it doesn’t matter how many redundant bits of that rusty tin you have; as Scott D. Lowe said last week:

That said, we need to be realistic: we don’t live in ye olde world any more. Every part of the industry is moving towards more agile development techniques, driven largely by customer and consumer demand. If the “traditional” storage industry doesn’t follow suit, it risks being left behind by newer technologies such as SDS and hyper-convergence.

There is one other key benefit to this deployment method which I haven’t mentioned of course; those big scary upgrades of the past now become minor updates, and the processes we wrap around them as fleshy sacks of water become mundane. That does sound quite tempting!

Perhaps upgrades aren’t that scary any more?

I’d love to hear your opinions either way, feel free to fire me a comment on twitter!

Further Reading
Some of the other SFD8 delegates have their own takes on the presentation we saw. Check them out here:

Dan Frith – http://www.penguinpunk.net/blog/pure-storage-orange-is-the-new-black-now-what/

Scott D. Lowe – http://www.enterprisestorageguide.com/overcoming-new-vendor-risk-pure-storages-techniques

Pure1 Overview at SFD8

 
Disclaimer/Disclosure: My flights, accommodation, meals, etc. at Storage Field Day 8 were provided by Tech Field Day, but there was no expectation or request for me to write about any of the vendors’ products or services, and I was not compensated in any way for my time at the event.

Storage, Tech Field Day