Tag Archive for vSAN

Preview – Open Homelab Project at #LonVMUG – 14th April 2016

So this is just a very brief post to firstly say don’t forget it’s the London VMUG on 14th April 2016, at Tech UK (10 Saint Bride Street, EC4A). There are a load of really interesting sessions coming up, both vendor and community.

For example:

  • We have a keynote from Luca Dell’Oca, who provided one of the best non-vendorised vendor sessions I have ever seen at a VMUG (his session title this time sounds like he may be looking to up the ante)!
  • We have loads of sessions on VSAN including the 6.2 updates (also see the Storage Field Day 9 sessions here for a deep dive on that).
  • We even have a session from the London VMUG leadership team’s “Darth Vader” himself, Simon Gallagher, talking about App Volumes!

It should be an awesome day!

agenda-lonvmug-April-2016.png

The keen eyed among you may also notice that I have a session in the list as well…

If you want to come along and be part of a somewhat unique session, never seen before, never done before, and probably never to be done again (especially if it all goes pear-shaped!), then feel free to join the Open Homelab Project session I will be attempting to herd / steer / keep on the rails!

I suggested a few months ago to Simon G that we do some roundtable sessions at the London VMUG and volunteered to run some as an experiment. These are my favourite sessions at the UK VMUG as you get a dozen or so people round a table and chew the fat on a specific subject area.

It turns out that we don’t actually have anywhere in our new venue to run this session for a small group, so instead it’s been converted into a “square table”, i.e. “no table”, session in one of the standard rooms!

Running a roundtable with a room full of people is certainly going to be a challenge, a bit of an experiment, and worst case scenario it all falls apart and we never do it again! Yay! But hopefully it will actually be a really worthwhile session, and I plan to share the results here afterwards as a kind of crowdsourced homelab advice tree or something! To be honest, with less than two weeks to go I haven’t really figured out the details yet, but rest assured, by a week on Thursday I will at least have the title decided!

planning

Whatever happens it should be interesting! So if you want to share your homelab requirements with the group and get some advice and tips on how to design and build it, or if you want to tell us how awesome your lab is already and why you chose to build it like that, please do come along to the session and join in! 🙂

Register here:
London VMUG Meeting Registration – Thursday, 14th April 2016

Words Mean Things, Apparently – Deduplication Myths Explored

A rose by any other name would smell as sweet?

We might all agree that this is most definitely the case, but in the technology industry we have a problem, and it was highlighted across a number of the sessions we attended at Storage Field Day 9 this week.

Specifically, the use of the same terms to describe technology features whose implementations, and therefore outcomes, can be very different. This is becoming more and more of a problem across the industry as similar features are “RFP checkboxed” as equivalent, when in reality they are not.

For example, most of the vendors we saw support deduplication in one form or another, and in many cases there was liberal use of the word “inline”.

What do we mean by “inline deduplication”, and what impact to performance can this have?

One of the other delegates at SFD9, W Curtis Preston, had very strong opinions on this, which I am generally inclined to agree with!

UPDATE 08/04/2016: Curtis has recently published an article detailing his thoughts here.

If a write hits the system and is deduplicated prior to being written to its final non-volatile media, be it flash or disk, then it can generally be considered inline.

Dedupe-Inline

Inline Deduplication

If deduplication runs in hardware (for example in 3PAR’s Gen4+ ASIC), the process adds minimal overhead to the system, and by not needing to send every write to the back-end storage it can actually improve overall performance, even under sustained high throughput, simply by reducing back-end writes.
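To make the distinction concrete, here is a minimal Python sketch of the general idea, not any particular vendor’s implementation: the write path fingerprints each block and only ever sends previously unseen blocks to the backing media.

```python
import hashlib

class InlineDedupeStore:
    """Toy model: dedupe happens *before* anything hits the backing media."""

    def __init__(self):
        self.backing_media = {}   # fingerprint -> block (the "disk")
        self.volume = []          # logical block map: one fingerprint per write

    def write(self, block: bytes) -> None:
        fingerprint = hashlib.sha256(block).hexdigest()
        if fingerprint not in self.backing_media:
            self.backing_media[fingerprint] = block   # only unique blocks reach the back end
        self.volume.append(fingerprint)               # duplicates just add a pointer

    def read(self, index: int) -> bytes:
        return self.backing_media[self.volume[index]]

store = InlineDedupeStore()
for _ in range(1000):
    store.write(b"\x00" * 4096)      # 1,000 identical 4 KiB writes arrive...
print(len(store.backing_media))      # ...but only 1 unique block is ever written
```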

Most non-inline deduplication is typically referred to as “post-process”, and as a general rule it runs either on a schedule or as a lower-priority, always-on maintenance task. It can even run immediately after the write has gone to disk; that is still post-process, not inline.

It’s worth noting that any of these post-process methods can potentially have an impact on back-end capacity management, as dumping large quantities of data onto a system can temporarily spike capacity utilisation until the dedupe process has time to work its magic and increase storage efficiency. Not ideal if your storage capacity is approaching critical.


In addition, the block will typically have been written to an NVRAM or cache device, which should protect it from power loss etc., but the problem we have is that cache is an expensive and finite resource. As such, by throwing a sustained stream of IOs at the system, you can end up filling that cache/NVRAM faster than the IOs can be flushed and deduplicated, which is exacerbated by the fact that post-process dedupe generates yet more IOPS on the back-end storage (by as much as 2-3x compared to the original write!). The cumulative effect causes IO to back up in the system like a dodgy toilet, increasing latency and reducing the maximum IOPS the system can deliver.

Worse still, in some vendor implementations, when system performance is maxed out, deduplication in the IO path is dropped altogether and inbound data is dumped out to disk as fast as possible. It is then post-processed later, but this could obviously leave you in a bit of a hole again if you are running at high capacity utilisation.

Dedupe-post

Post-Process Deduplication
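For contrast, a similarly simplified sketch of the post-process model (again, a toy illustration rather than any vendor’s actual code) shows where the extra back-end IO comes from: every block lands on disk in full first, and a later pass has to re-read and rewrite things to reclaim the duplicates.

```python
import hashlib

class PostProcessStore:
    """Toy model: every write lands on 'disk' in full first; a later
    maintenance pass re-reads the blocks to find and collapse duplicates."""

    def __init__(self):
        self.disk = []          # every inbound block is stored in full
        self.backend_io = 0     # rough count of back-end operations

    def write(self, block: bytes) -> None:
        self.disk.append(block)
        self.backend_io += 1    # the original full write

    def dedupe_pass(self) -> None:
        seen = {}
        for i, block in enumerate(self.disk):
            if isinstance(block, str):
                continue                            # already just a reference
            self.backend_io += 1                    # re-read the block to hash it
            fingerprint = hashlib.sha256(block).hexdigest()
            if fingerprint in seen:
                self.disk[i] = fingerprint          # replace the duplicate with a reference
                self.backend_io += 1                # pointer/metadata rewrite
            else:
                seen[fingerprint] = i

store = PostProcessStore()
for _ in range(1000):
    store.write(b"\x00" * 4096)
store.dedupe_pass()
print(store.backend_io)   # roughly 3x the 1,000 original writes once re-reads and rewrites are counted
```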

None of this is likely to kick in for the vast majority of customers as they will probably have workloads generating tens of thousands of IOPS, or maybe low hundreds of thousands on aggregate. As such, for most modern systems and mixed workloads, this is unlikely to be a huge problem. However, when you have a use case which is pushing your array or HCI solution to its maximum capability, this can potentially have a significant impact on performance as described above.

[HCI – yet another misappropriated computing acronym, but I’ll let that one slide for now and move on!]

VMware VSAN Deduplication

In the case of one of the vendors we saw, VMware, they joked that because they initially write to the caching flash tier prior to deduplication, they spent more time arguing over whether it was valid to call this inline than it took them to actually develop the feature! In their case, they have been open enough not to call it “inline” but instead “nearline”.

In part this is because writes always land on a flash device prior to dedupe, but also because not all of the writes to the caching tier actually get sent to the capacity tier. In fact, some may live out their entire existence in a non-deduplicated state in flash cache.

dedupe.png

I applaud VMware for their attempt to avoid jumping on the inline bandwagon, though it would have perhaps been better to use a term which doesn’t already mean something completely different in the context of storage! 🙂

You can catch the full VMware session at the link below – it’s well worth a watch!
VMware Storage Presents at Storage Field Day 9

Further Reading

Some of the other SFD9 delegates and VMware staffers had their own takes on the presentation we saw. Check them out here:

Disclaimer/Disclosure: My flights, accommodation, meals, etc. at Storage Field Day 9 were provided by Tech Field Day, but there was no expectation or request for me to write about any of the vendors’ products or services, and I was not compensated in any way for my time at the event.

Storage Field Day 9 – Behind the Curtain

Tech Field Day cheese

Tech Field Day is an awesome experience for all of the delegates! We get to spend an entire week unabashedly geeking out, as well as hanging out with the founders, senior folk and engineers at some of the most innovative companies in the world!

For those people who always wondered what goes on behind the scenes in the Tech Field Day experience, I took a few pano shots at the Storage Field Day 9 event this week.

Here they are, along with most of my favourite tweets and photos of the week… it was a blast!

Panos

Pre-Event Meeting & Plexistor

NetApp & SolidFire

Violin Memory

Intel

Cohesity

VMware

The rest of the event…

Until next time… 🙂

HP Discover Europe 2014 – Day 2 Roundup

Day 2 started early with the first sessions beginning around 8.30am. I won’t bore you with the details of my day, but I will go through three really great new products / features I spent time learning about. Much of the info below came from slides, or discussions with product managers / engineers, so should not be taken as gospel!

HP OneView
I have to admit I have been a little lax in having a look at OneView as yet. I took the opportunity at the event to have a chat with some of the OneView engineers, and take the hands on lab. If you haven’t already done so, and you have any HP kit on premises, I strongly suggest you take a look at this product! I’m not going to go into any depth here, except to describe one of my favourite features.

OneView has the ability to connect into your servers, storage, and fabric, then auto-deploy, configure and manage your environment end-to-end. Take provisioning a new server as an example: OneView can create new volumes based on specific policies, auto-configure all of your SAN zoning between your server initiator and targets (with single initiator / multiple target or single initiator / single array options only for now), then build the OS and configure and mount the storage on the server. How cool is that?

HP OneView
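As a rough mental model of that end-to-end flow (and nothing more: the helper functions below are hypothetical stand-ins, not the actual OneView REST API), the orchestration looks something like this:

```python
# Purely illustrative outline of the provisioning flow described above.
# None of these helpers are real OneView API calls -- they are hypothetical
# stand-ins for the steps OneView orchestrates for you.

def create_volume(array: str, size_gib: int) -> dict:
    print(f"Creating {size_gib} GiB volume on {array} per the storage policy")
    return {"name": "vol01", "target_wwpns": ["50:01:xx", "50:02:xx"]}

def zone_fabric(initiator: str, targets: list) -> None:
    print(f"Zoning initiator {initiator} to targets {targets}")   # single initiator, multiple targets

def deploy_os_and_mount(server: str, image: str, volume: dict) -> None:
    print(f"Deploying {image} to {server} and mounting {volume['name']}")

def provision_server(policy: dict) -> None:
    volume = create_volume(policy["array"], policy["volume_size_gib"])
    zone_fabric(policy["server_wwpn"], volume["target_wwpns"])
    deploy_os_and_mount(policy["server"], policy["os_image"], volume)

provision_server({
    "array": "3PAR-01",            # example values only
    "volume_size_gib": 512,
    "server_wwpn": "10:00:xx:aa",
    "server": "blade-bay-3",
    "os_image": "ESXi",
})
```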

This is currently based on a specific subset of vendors, mainly HP and Brocade AFAIK, but other vendors are due to be added in the future.

Having played with it in the lab, I can confirm that it is pretty easy to learn and use, with most information and configuration laid out reasonably intuitively in the BUI.

For more information on OneView see HP’s site.

ProLiant Gen9 Features
As I understand it, one of the key strategies behind the new ProLiant range is to ensure that HP are not losing on price / value against some of their less pricey competitors (who shall of course remain nameless, as you know who they are already!). The premise is that instead of buying top-of-the-range servers with all the wizardry built in by default (at an appropriately top-of-the-range price!), you can start with a base unit and only add the features you actually need. A prime example being that you don’t need a storage controller if you just boot a hypervisor from USB!

This strategy has led to the removal by default (you can configure them back in) of things like 10Gb FlexibleLOM network ports and front panel fault indicators, while the onboard RAID card is now a plug-in module, etc. The theory is that the Gen9 servers, though newer, should actually come in at a better price point than their Gen8 ancestors. The marketing spiel is that the new Gen9 servers deliver “the right compute for the right workload at the right economics every time”.

HP Gen 9

Cheesy marketing slogan? Absolutely!

Do they seem to deliver on this? From some of the indicative pricing I’ve seen so far, I’d say yes…

Just as a quick overview of the new ranges:

  • 10 Series (DL60 / DL80 Gen 9)
    • The 10 series is designed to be an entry level model for SMBs. These also now come with dual PSU as a CTO option, which suddenly makes them a lot more attractive in my mind.
  • 100 Series (DL160 / DL180 Gen 9)
    • This is not the same as the old 100 series machines from the G7 era and before. It is effectively equivalent to a DL3x0e (entry) machine in the previous generation ranges.
  • 300 Series (DL360 / DL380 Gen 9)
    • This now equates to the original DL3x0p series of machines, and is designed with maximum scalability and performance in mind.
The following (poor photo, sorry) is a great slide which lists out the key differences between each model in the range:
HP Proliant DL80/180/380 Gen 9

I suggest checking the quick specs for more info!

3PAR File Personas
Moving on to one of my favourite announcements from the entire event (apart from The Machine, which I will do a post on at some point in the future), I was able to gather some more info on the awesome new File Personas feature.

The first, most notable fact was that HP are so confident in the resilience of their new arrays that they are offering a 99.9999% Availability Guarantee! Many SLAs in the IT industry are not necessarily a guarantee of a claimed level of availability, but more a level of commercial risk accepted by the vendor or provider. That said, going with “Six Nines” definitely shows belief in your product set!

HP 3PAR File Personas

A few nuggets of info I gleaned from attending the File Personas breakout session were as follows:

  • Priority Optimisation will work but is not currently certified as supported. The following technologies are inherited from block persona, and are supported from day one:
    • Wide striping
    • Replication
    • Thin Provisioning
  • From a multi-tenancy perspective, the initial release will only utilise up to one Active Directory source per array (not per Virtual File Server), as the controllers each have machine accounts in your domain. That is somewhat disappointing for a service provider who always asks “can it be multi-tenanted?”. It will, however, provide up to 4 IPs per virtual file server, and these can be on separate VLANs, and trusts may be used, so there is some scope for flexibility.
  • Licensing and configuration of virtual file servers is always based on multiples of 1TiB (note TiB not TB), but you can then use quotas to subdivide your file store allocations below this.
  • The $129 per TiB is based on the amount allocated to a virtual file server, irrespective of the back-end storage or thin provisioning utilisation. You will not be forced to license the entire array. For example (see the quick sketch after this list):
    • You have an array with, say, 100 TiB of usable space
    • 10 TiB is allocated to a virtual file server
    • 5 TiB is in use by end user files
    • 10 TiB of license is required
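To sanity-check that worked example, here is a quick back-of-the-envelope sketch in Python (the round-up-to-whole-TiB behaviour is my reading of the 1 TiB licensing multiples above, so treat it as an assumption):

```python
import math

PRICE_PER_TIB = 129  # USD per TiB allocated to a virtual file server, as quoted above

def file_persona_licence(allocated_tib: float) -> tuple:
    """Licensing follows the allocation (rounded up to whole TiB, per my reading),
    not the array capacity or the data actually written by users."""
    licensed_tib = math.ceil(allocated_tib)
    return licensed_tib, licensed_tib * PRICE_PER_TIB

# The worked example above: 100 TiB array, 10 TiB allocated, 5 TiB in use
licensed, cost = file_persona_licence(10)
print(licensed, cost)   # 10 TiB licensed -> $1,290, regardless of the 5 TiB actually in use
```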

The price point seems genuinely good value to me. Compared to the cost of purchasing, powering and managing something like a Windows File Server Cluster, it’s really a no-brainer!

That should just about do it for today! The final day tomorrow will mainly comprise a few more sessions, followed by a looooong wait for my flight home…

Disclaimer: As an HP customer, HP kindly provided my accommodation and entry to the HP discover event, but there was no expectation or request for me to write about their products or services.
