Tag Archive for storage

Looking Forward to Storage Field Day 9 (#SFD9)


So for those of you who love to nerd out on storage like I do, you have probably already heard of the awesome streaming events put on by Stephen Foskett and the crew from Tech Field Day, otherwise known as Storage Field Day. These have grown so popular that Stephen is having to put on extra events just to cater for demand, which I think speaks volumes as to their efficacy and indeed quality!

For those not yet indoctrinated, these events involve taking a group of around a dozen storage and technology delegates to visit a number of different startups (think Pure, NexGen, Coho, etc.) and more established companies (think Intel!) to talk about the latest goings-on both at those organisations and in the industry in general. Each session lasts a couple of hours, but is generally broken down into several bite-sized chunks for consumption at your leisure.

As a stream viewer you get the opportunity to learn about your favourite vendors' latest funky stuff and watch them answer questions about all the things you probably wanted to know but never got the chance to ask. It is also a great way to get your head around an unfamiliar technology or vendor. Lastly, if you watch live, you can also ask questions via Twitter for the delegates to put to the presenters.

As a delegate this goes to a whole new level as you get to spend almost an entire week mahoossively geeking out on tech, learning from some of the smartest people in the tech industry, and meeting with the senior people at some of the industry’s best-known companies. I find it generally safest just to wear multiple layers to avoid any embarrassing nerdgasms! 😉

So with that in mind I am really chuffed to have been invited back to attend Storage Field Day 9 next month (16th-18th March) in San Jose!

Not all of the companies have been announced as yet, but we already know that the likes of Cohesity, Intel, VMware & Violin Memory will be in attendance. More will be confirmed over the next couple of weeks, and having seen the provisional list I can tell you it is definitely going to be a great event!


Needless to say the lineup of delegates is awesome as usual, with many well-known bloggers from the EU, US and APAC. Make sure you check them out and follow the crew on Twitter if you are so inclined. Most delegates post their opinions on the vendors and tech both during and after the event, so make sure you check out their blog feeds. For example, here is mine:

http://www.tekhead.org/blog/feed/

If you want to tune in live, simply go to http://techfieldday.com from 16th-18th March (PST) or catch up with the recordings on YouTube later.

Finally, be warned my Twitter stream does get rather busy during the event, so feel free to temporarily mute me if need be! 😉

Why are storage snapshots so painful?

Have you ever wondered why we don't take snapshots more often than every 5-15 minutes in most solutions, and in many others a lot less often than that?

It’s pretty simple to be honest… The biggest problem with taking snapshots is quiescing the data stream to complete the activity. At a LUN level, this usually involves some form of locking mechanism to pause all IO while any metadata updates or data redirections are made, after which the IO is resumed.
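To make that pain a little more concrete, here is a deliberately naive Python sketch of the "pause IO, update metadata, resume" pattern described above. It is purely my own illustration (the class and method names are made up), not any vendor's actual implementation:

```python
import threading
import time

class NaiveSnapshotLun:
    """Toy model of a LUN that must quiesce IO to take a snapshot.

    Illustration only: every write and every snapshot share one lock,
    which is exactly the disruption described above.
    """

    def __init__(self):
        self._io_lock = threading.Lock()   # held by writes AND by snapshots
        self._blocks = {}                  # lba -> data
        self._snapshots = []               # snapshot metadata

    def write(self, lba, data):
        # Every write has to queue behind the same lock a snapshot takes.
        with self._io_lock:
            self._blocks[lba] = data

    def create_snapshot(self):
        # Quiesce: block all new writes while snapshot metadata is created.
        with self._io_lock:
            taken_at = time.time()
            # Copy-on-write style metadata update - the busier the LUN and
            # the deeper the snap tree, the longer this critical section gets.
            self._snapshots.append({"taken_at": taken_at,
                                    "block_map": dict(self._blocks)})
        return taken_at
```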

For small machines and LUNs with minimal IO load this is generally such a quick operation that it has virtually no effect on the application user, and is pretty much transparent. For busy applications, however, data can be changing at such a massive rate that disrupting that IO stream, even for a few seconds, can have a significant impact on performance and user experience. In addition, the larger the snap tree grows, the more performance tends to be degraded by the overhead of managing large numbers of snapshots, copy-on-write activity and, of course, lots of locking.

This problem is then multiplied several times over when you want to get consistency across multiple machines, for example when you want to get point-in-time consistency for an entire application stack (Web / App / DB, etc).

So what do we typically do? We reduce the regularity at which we take these snaps in order to minimise the impact, whilst still having to meet the (usually near zero because all data is critical, right?) RPO set by the business.

At SFD8, we had a very well received presentation from INFINIDAT, a storage startup based in Israel and founded by industry legend Moshe Yanai (the guy who brought you EMC Symmetrix / VMAX, and subsequently XIV). Moshe's "third generation" enterprise-class storage system comes with one particular feature in which I was really interested: snapshots! Yes, I know it sounds like a boring "checkbox in an RFP" feature, but when I found out how it worked I was really impressed.

For every single write stripe that goes to disk, a checksum and a timestamp (from a high-precision clock) are written. This forms the base on which the snapshot system is built (something they call InfiniSnap™).

If you have a microsecond-accurate clock and a timestamp on every write, then to take a snapshot you simply have to pick a date and time! Anything written before that point in time is part of the snapshot's view, and anything written on or after it belongs only to the live volume. This means no locking or pausing of IO during a snap, making the entire process a near zero-time, zero-impact operation! A volume with snapshots therefore has performance indistinguishable from one without. Wow!
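To illustrate why no locking is needed, here is a tiny Python sketch of the timestamp idea as I understand it. This is just my own toy model (all names and structures are invented), not INFINIDAT's actual InfiniSnap implementation:

```python
from collections import namedtuple

# Every write carries a checksum and a high-precision timestamp
# (the key ingredient described above).
Write = namedtuple("Write", ["timestamp", "lba", "data", "checksum"])

class TimestampedVolume:
    """Toy model of timestamp-based snapshots - illustration only."""

    def __init__(self):
        self._log = []  # writes, appended in timestamp order

    def write(self, ts, lba, data, checksum):
        # Note: no lock, no pause - we just keep stamping and appending writes.
        self._log.append(Write(ts, lba, data, checksum))

    def snapshot_view(self, snap_time):
        """A 'snapshot' is simply a chosen point in time: rebuild the volume
        from all writes stamped before snap_time and ignore everything else."""
        view = {}
        for w in self._log:
            if w.timestamp >= snap_time:
                break                   # log is time-ordered, so we can stop
            view[w.lba] = w.data        # later writes to the same LBA win
        return view
```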


It sounds so simple it shouldn't work, but according to INFINIDAT they can easily support up to 100,000 snaps per system, and even that isn't a real limit; they picked the figure simply because it was a double-digit percentage bigger than the next closest array on the market. They will also happily support more than this if you ask; they said they just need to test it first. In addition, each snap group will support up to 25 snaps per second, and they guarantee an RPO as low as 4 seconds based on snapshots alone. You can then use point-in-time replication to create an asynchronous copy on another array if needed. Now that's granular! 🙂

The one caveat I would add is that this does not yet appear to fix ye olde faithful crash-consistent vs application-consistent issue, but it's a great start. Going back to the application stack "consistency group" concept, in theory you generally only need to VSS the database VM, and as such it will be much simpler to get a consistent snap across an app stack with minimal overhead. As we move more towards applications using NoSQL databases etc., this will also become less of an issue.

The above was just one of the cool features they covered in their presentation, from which the general consensus was very positive indeed! A couple of weeks ago I was also able to spend a little time with one of INFINIDAT's customers, who just so happened to be attending the same UKVMUG event. Their impressions of the quality of the array build (with a claimed 99.99999% availability!), the management interface, general performance during initial testing, the compelling pricing and, of course, the very funky matrix-like chassis were all very positive too.

If you want to see the INFINIDAT presentation from SFD8, make sure you have your thinking hat on and a large jug of coffee! Their passionate CTO, Brian Carmody, was a compelling speaker and was more than happy to get stuck into the detail of how the technology works. I definitely felt that I came away a little smarter for having been part of the audience! He also goes into some fascinating detail about genome sequencing, the concept of cost per genome and its likely massive impact on the storage industry and our lives in general! The video is worth a watch for this section alone…

Further Reading
Some of the other SFD8 delegates have their own takes on the presentation we saw. Check them out here:

Dan Frith – INFINIDAT – What exactly is a “Moshe v3.0”?
Enrico Signoretti’s blog Juku.it – Infinidat: awesome tech, great execution
Enrico Signoretti writing on El Reg – Has the next generation of monolithic storage arrived?
Ray Lucchesi – Mobile devices as a cache for cloud data
Vipin V.K. – Infinibox – Enterprise storage solution from Infinidat
GreyBeards on Storage Podcast – Interview with Brian Carmody

Disclaimer/Disclosure: My flights, accommodation, meals, etc., at Storage Field Day 8 were provided by Tech Field Day, but there was no expectation or request for me to write about any of the vendors' products or services, and I was not compensated in any way for my time at the event.

Where and why is my data growing?…

I’ve written recently about issues of data gravity and data inertia, and about how important analytics are to managing your data “stockpile”, but one thing I haven’t gone into is the constant challenge of actually understanding your data composition, i.e. what the hell am I actually storing?!

Looking back to my days as a Windows admin, maintaining what were for the time some massive multi-terabyte (ooer – it was 10 years ago, to be fair) filers and shared document storage systems, we had little to tell us what the DNA of those file shares was: how much of it was documents and other business-related content, and how much of it was actually people storing their entire MP3 collections and “family photos” on their work shared drives (yes, 100% true!).

Back then our only method of combating these issues was to run TreeSize to see who was using the most space, then do Windows searches for specific file types and manually clear out the crud; an unenviable task which turned up a few surprising finds I won't go into just now (ooer for the second time)! The problem was that we just didn't know what we had!

Ten years later I have spoken to customers who are consuming data at very significant rates, but don’t have a grip on where it’s all going…

With that in mind, I was really interested in what the chaps at Qumulo had come up with when they presented at SFD8 recently. As they said at the time, the management of storage is getting easier, but the management of data is getting very much harder! Their primary vision is therefore quite succinctly described as “Build visible data and make storage invisible”.

Their “Data Aware” scale-out NAS solution is based around providing near-realtime analytics on the metadata, and was designed to meet the requirements of the 600 companies and individuals they interviewed before they even came up with their idea!

The product is designed to be software-only and subscription-based, though they also provide scale-out physical 1U/4U appliances as well. I guess the main concept there is "have it your way"; there are still plenty of customers out there who want to buy a software solution which is pre-qualified and supported on specific hardware (which sounds like an oxymoron, but each to their own I say)! Most of Qumulo's customers today actually buy the appliances.

The coolest thing about their solution is definitely their unique file system (QSFS – Qumulo Scalable File System). It uses a very clever, proprietary method to track changes within the filesystem based on aggregates of child attributes held in the tree (see their SFD8 presentation for more info). Because you then don't necessarily need to walk the entire tree to get an answer to a query (it should be noted the query would need to be one specifically catered for by Qumulo, though), it can present statistics based on those attributes in near-realtime.
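Conceptually it is a little like keeping pre-computed rollups on every directory node and updating them on the way up the tree as each change lands. The following Python sketch is purely my own illustration of that rollup idea (all names invented), not how QSFS is actually implemented:

```python
class DirNode:
    """Toy directory node that keeps pre-aggregated child attributes, so a
    query like 'how big is /projects?' reads one value instead of walking
    the whole subtree. Illustration only - not QSFS internals."""

    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent
        self.children = {}
        self.agg_bytes = 0   # rolled-up size of everything below this node
        self.agg_files = 0   # rolled-up file count

    def add_file(self, size):
        # Update the aggregates on every ancestor as the change happens,
        # so later queries never need to re-walk the tree.
        node = self
        while node is not None:
            node.agg_bytes += size
            node.agg_files += 1
            node = node.parent

root = DirNode("/")
projects = root.children["projects"] = DirNode("projects", parent=root)
projects.add_file(10 * 1024**2)          # a 10 MiB file lands in /projects
print(root.agg_bytes, root.agg_files)    # answered instantly from the rollup
```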

Whiteboard Dude approves!

I would have killed for this level and speed of insight back in my admin days, and frankly I have a few customers right now who would really benefit!

Taking this a step further, the analytics can also provide performance statistics based on file path and type, so for example it could show you where the hotspots are in your filesystem, and which clients are generating them.

Who’s using my storage?

Stuff I would like to see in future versions (though I know they don't chase the Service Provider market) would be things like the ability to present storage to more than one Active Directory domain, straightforward RBAC (Role Based Access Control) at the management layer, and more of the standard data services you see from most vendors (the RFP tick-box features). Being able to mix and match the physical appliance types would also be useful as you scale and your requirements change over time, but I guess if you need flexibility, go with the software-only solution.

At a non-feature level, it would be sensible if they could rename their aggregate terminology as I think it just confuses people (aggregates typically mean something else to most storage bods).

Capacity Visualisation

Overall though I think the Qumulo system is impressive, as are the founders' credentials. Their CEO/CTO team of Peter Godman and Aaron Passey, with whom we had a good chinwag outside of the SFD8 arena, both played a big part in building the Isilon storage system. As an organisation they already regularly work with customers with over 10 billion files and up to 4PB of storage.

If their system is capable of handling this kind of scalability having only come out of stealth 8 months ago, they’re definitely one to watch…

Further Reading
Some of the other SFD8 delegates have their own takes on the presentation we saw. Check them out here:

Dan Frith – Qumulo – Storage for people who care about their data

Scott D. Lowe – Data Awareness Is Increasingly Popular in the Storage Biz

Disclaimer/Disclosure: My flights, accommodation, meals, etc., at Storage Field Day 8 were provided by Tech Field Day, but there was no expectation or request for me to write about any of the vendors' products or services, and I was not compensated in any way for my time at the event.

 

How often do you upgrade your storage array software?

Upgrades are scary!

Having managed and implemented upgrades on highly available systems such as the old Sun StorageTek line of rebranded HDS USP/VSP arrays back in the day, I can tell you that we did not take upgrades lightly!

Unless there was a very compelling reason for an upgrade, the line taken was always "if it ain't broke, don't fix it", but then we were looking after storage in a massively high-security environment where even minor changes were taken very seriously indeed. When it came to storage we didn't have or need anything very fancy at all: just some high-performance LUNs cut from boatloads of small-capacity 15K drives, a bit of copy-on-write snappage to a set of 3rd party arrays and some dual-site synchronous replication. Compared to some of the features and configurations of today, that's actually pretty minimal!


Now this approach meant that the platform was very stable. Great! It also meant that because we only did upgrades once in a blue moon, the processes were not what you might call streamlined, and the changes made by each upgrade were typically numerous, thereby running a pretty decent risk of something breaking. We also had to check the compatibility matrix for every release to make sure the 3rd party arrays would continue to function.

They say that software is eating the world. The same could reasonably be said of the hardware storage vendors we saw at Storage Field Day 8, as they seem mostly to be moving towards more agile development models. Little and often means lower risk with every upgrade, as there are fewer changes each time. New features and improvements can be released on a more regular basis (especially those taking advantage of flash technologies, which are changing by the minute!). A significant number of the vendors we saw had internal release cycles of between 2 and 4 weeks and public release cycles of 2-8 weeks!

One vendor, Pure Storage, is not only releasing code every couple of weeks, but its customers have obviously taken this new approach on board with vigour! Around 91% of Pure's customer base is currently running an array software version that is 8 months old or less. An impressive stat indeed!

This is Hardware. Software runs on it…

This sounds like a relatively risky approach, but they mitigate it to a great extent by using the telemetry data uploaded every 30 seconds from customer arrays to their Pure1 SaaS management platform, building up a picture of both individual customers and their customer base as a whole. They then use their fingerprint engine to proactively pre-check every customer array and find out which may be susceptible to any potential defect in a new software release. Arrays which pass this pre-check have the upgrades rolled out remotely by Pure Storage engineers on a group-by-group basis to minimise risk. Obviously this is also done in conjunction with, and with the agreement of, customers and their change windows etc. You wouldn't expect your controllers to start failing over without any notice! 🙂
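In rough Python terms, my own hypothetical sketch of that rollout logic might look something like this (all names, fields and the example fingerprint are invented for illustration; this is not Pure Storage's actual tooling):

```python
# Hypothetical sketch: use telemetry "fingerprints" to exclude arrays that
# might hit a known defect, then roll the update out in small groups.

def eligible_arrays(arrays, defect_fingerprints):
    """Keep only arrays whose latest telemetry matches none of the known
    defect fingerprints for the new release."""
    ok = []
    for array in arrays:
        telemetry = array["telemetry"]          # latest 30-second upload
        if not any(fp(telemetry) for fp in defect_fingerprints):
            ok.append(array)
    return ok

def rollout_groups(arrays, group_size=10):
    """Yield arrays in small batches so any unexpected issue is contained."""
    for i in range(0, len(arrays), group_size):
        yield arrays[i:i + group_size]

# Example fingerprint: a predicate over telemetry that flags a risky
# combination of configuration and firmware (values are made up).
defect_fingerprints = [
    lambda t: t.get("replication_enabled") and t.get("fw") == "4.5.1",
]
```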

If I'm honest I am torn in two about this approach. The ancient storage curmudgeon in me says an array should just sit in the corner of the room quietly ticking away with minimal risk to availability and data durability (at least from known bugs, anyway!). This new style of approach means that it doesn't matter how many redundant bits of that rusty tin you have, as Scott D. Lowe said last week:

That said, we need to be realistic: we don't live in ye olde world any more. Every part of the industry is moving towards more agile development techniques, driven largely by customer and consumer demand. If the "traditional" storage industry doesn't follow suit, it risks being left behind by newer technologies such as SDS and hyperconvergence.

There is one other key benefit to this deployment method which I haven't mentioned, of course: those big scary upgrades of the past now become minor updates, and the processes we wrap around them as fleshy sacks of water become mundane. That does sound quite tempting!

Perhaps upgrades aren’t that scary any more?

I'd love to hear your opinions either way; feel free to fire me a comment on Twitter!

Further Reading
Some of the other SFD8 delegates have their own takes on the presentation we saw. Check them out here:

Dan Frith – http://www.penguinpunk.net/blog/pure-storage-orange-is-the-new-black-now-what/

Scott D. Lowe – http://www.enterprisestorageguide.com/overcoming-new-vendor-risk-pure-storages-techniques

Pure1 Overview at SFD8

 
Disclaimer/Disclosure: My flights, accommodation, meals, etc., at Storage Field Day 8 were provided by Tech Field Day, but there was no expectation or request for me to write about any of the vendors' products or services, and I was not compensated in any way for my time at the event.
