
Where and why is my data growing?

I’ve written recently about issues of data gravity and data inertia, and about how important analytics are to managing your data “stockpile”, but one thing I haven’t gone into is the constant challenge of actually understanding your data composition, i.e. what the hell am I actually storing?!

Looking back to my days as a Windows admin maintaining what were, for the time, some massive multi-terabyte filers and shared document storage systems (ooer – it was 10 years ago, to be fair), we had little to tell us what the DNA of those file shares was: how much of it was documents and other business-related content, and how much was actually people storing their entire MP3 collections and “family photos” on their work shared drives (yes, 100% true!).

Back then, our only method of combating these issues was to run TreeSize to see who was using the most space, then do Windows searches for specific file types and manually clear out the crud; an unenviable task during which I came across a few surprising finds I won’t go into just now (ooer for the second time)! The problem was that we just didn’t know what we had!

Ten years later I have spoken to customers who are consuming data at very significant rates, but don’t have a grip on where it’s all going…

With that in mind, I was really interested in what the chaps at Qumulo had come up with when they presented at SFD8 recently. As they said at the time, the management of storage is getting easier, but the management of data is getting very much harder! Their primary vision is therefore quite succinctly described as “Build visible data and make storage invisible”.

Their “Data Aware” scale-out NAS solution is based around providing near-realtime analytics on the metadata, and was designed to meet the requirements of the 600 companies and individuals they interviewed before they even came up with their idea!

The product is designed to be software-only and subscription-based, though they also provide scale-out physical 1U / 4U appliances. I guess the main concept there is “have it your way”; there are still plenty of customers out there who want to buy a software solution which is pre-qualified and supported on specific hardware (which sounds like an oxymoron, but each to their own I say)! Most of Qumulo’s customers today actually buy the appliances.

The coolest thing about their solution is definitely their unique file system (QSFS – Qumulo Scalable File System). It uses a very clever, proprietary method to track changes within the filesystem, based on aggregating child attributes up the tree (see their SFD8 presentation for more info). Because each level of the tree already holds the aggregate of everything beneath it, you don’t necessarily need to walk the entire tree to get an answer to a query (it should be noted the query would need to be one specifically catered for by Qumulo, though), and statistics based on those attributes can be presented in near-realtime.
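To make that concrete, here’s a trivial sketch of the general idea (in Python, and absolutely not Qumulo’s actual implementation): each directory keeps pre-aggregated totals for its whole subtree, updated as files change, so a question like “how much data lives under /projects?” reads one node instead of walking millions of files.

```python
# Illustrative sketch only - not Qumulo's actual QSFS internals.
# Each directory node keeps pre-aggregated totals for its entire
# subtree, updated on every change, so capacity queries are O(1)
# reads rather than full tree walks.

class DirNode:
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent
        self.children = {}
        self.subtree_bytes = 0   # aggregate size of all descendants
        self.subtree_files = 0   # aggregate file count of all descendants

    def add_file(self, size):
        # Propagate the delta up to the root; every ancestor's
        # aggregate stays current, so reads never need a tree walk.
        node = self
        while node is not None:
            node.subtree_bytes += size
            node.subtree_files += 1
            node = node.parent

root = DirNode("/")
projects = DirNode("projects", parent=root)
root.children["projects"] = projects
projects.add_file(10 * 1024**2)   # someone drops a 10 MB file in /projects

# O(1) answers, no tree walk required:
print(root.subtree_bytes, projects.subtree_files)
```

The trade-off, of course, is doing a little extra work on every write so that the expensive question (“where did all my space go?”) becomes cheap to ask.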

Whiteboard Dude approves!

I would have killed for this level and speed of insight back in my admin days, and frankly I have a few customers right now who would really benefit!

Taking this a step further, the analytics can also provide performance statistics based on file path and type, so for example it could show you where the hotspots are in your filesystem, and which clients are generating them.

Who’s using my storage?

Stuff I would like to see in future versions (though I know they don’t chase the Service Provider market) would be things like the ability to present storage to more than one Active Directory domain, straightforward RBAC (Role-Based Access Control) at the management layer, and more of the standard data services you see from most vendors (the RFP tick-box features). Being able to mix and match the physical appliance types would also be useful as you scale and your requirements change over time, but I guess if you need flexibility, go with the software-only solution.

At a non-feature level, it would be sensible if they could rename their aggregate terminology, as I think it just confuses people (aggregates typically mean something else to most storage bods).

Capacity Visualisation

Overall, though, I think the Qumulo system is impressive, as are the founders’ credentials. Their CEO/CTO team of Peter Godman and Aaron Passey, with whom we had a good chinwag outside of the SFD8 arena, both played a big part in building the Isilon storage system. As an organisation they already regularly work with customers with over 10 billion files and up to 4PB of storage.

If their system is capable of handling this kind of scalability having only come out of stealth 8 months ago, they’re definitely one to watch…

Further Reading
Some of the other SFD8 delegates have their own takes on the presentation we saw. Check them out here:

Dan Frith – Qumulo – Storage for people who care about their data

Scott D. Lowe – Data Awareness Is Increasingly Popular in the Storage Biz

Disclaimer/Disclosure: My flights, accommodation, meals, etc, at Storage Field Day 8 were provided by Tech Field Day, but there was no expectation or request for me to write about any of the vendors products or services and I was not compensated in any way for my time at the event.


Without good analytics, you don’t have a competitive storage product

Throughout my career, analysing storage utilisation for solution design and capacity management has never been an easy task! Even recently, when I speak to customers about utilisation, they often don’t have the management tools in place on their legacy arrays or servers to help us understand what their true workloads look like; often even basic statistics are missing.

Gathering them is laborious at best, and almost impossible at worst. For example:

  • One previous major vendor I used to work with could only surface a small amount of basic throughput and latency data covering roughly the past 30 days, along with a bit of controller and port utilisation, through their Java-based BUI (specific to a particular Java version, of course – I still shudder at the thought).
  • More recently, another vendor I have used has a web-based stats console which can aggregate multiple arrays, but it uses a rather outdated method of visualisation that requires filling in a big form to generate the stats, and the resulting graphs don’t include any trending data, 95th percentiles, etc.
  • Another vendor array I work with fairly regularly requires you to run an API call against the array, which only returns the stats accumulated since the last time you ran it. By running that call every 30 seconds to a minute, you can build up a body of stats over time (see the sketch below). Not brilliant, and it’s a total pain to rationalise the exported data.
  • Even if you have the stats at the array, you then need to gather the same stats at the connected hosts, to ensure that they roughly correlate and that you don’t have any potential issues on the network (significantly more likely if, say, you are running storage and IP traffic on a converged network fabric).

In a word: clunky!
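For what it’s worth, the polling pattern from that third bullet looks something like the sketch below. The endpoint URL and field names here are entirely made up for illustration (every array’s API differs), but the shape of the workaround is the same: poll on a fixed interval and append each delta to build your own time series.

```python
# Hypothetical sketch of polling an array API that only returns stats
# accumulated since the previous call. Endpoint and JSON fields are
# invented placeholders, not any real vendor's API.
import csv
import time
from datetime import datetime, timezone

import requests

ARRAY_URL = "https://array.example.com/api/perfstats"  # placeholder
INTERVAL_SECONDS = 30

def poll_forever(outfile="array_stats.csv"):
    with open(outfile, "a", newline="") as f:
        writer = csv.writer(f)
        while True:
            resp = requests.get(ARRAY_URL, timeout=10)
            sample = resp.json()  # e.g. {"iops": ..., "latency_ms": ...}
            writer.writerow([
                datetime.now(timezone.utc).isoformat(),
                sample.get("iops"),
                sample.get("latency_ms"),
            ])
            f.flush()  # keep the file usable even if the script dies
            time.sleep(INTERVAL_SECONDS)
```

The fact that admins end up writing scripts like this at all is rather the point: the vendor has pushed the analytics work onto the customer.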

One of the things that struck me about many, if not all, of the vendors at Storage Field Day 8 was how much better their management consoles and analytics engines were than virtually all of those I have used in the past.

Several vendors use their dial-home features to send the analytics back to HQ. This way, the stats for individual customers, as well as for their customer base as a whole, can be kept almost indefinitely and used to improve the product, as well as to pre-emptively warn customers of potential issues through analysis of this “big data”. It also avoids customers having to spend yet more money on storing the data about their data storage!

Of those we spoke to, one vendor in particular really stood out for me: Nimble Storage. Their InfoSight platform gathers 30-70 million data points per array, per day, which are uploaded to their central analytics platform and accessible via a very user-friendly interface. It can produce a number of very useful graphs and statistics, send scheduled reports, and will even provide predictive upgrade modelling based on current trends.

Recently they have also added a new opt-in VMVision service which plugs into your vCenter server to track IO stats for the VMs from a host / VM perspective as well, presenting these alongside the array data. This shows you exactly where your potential bottlenecks are (and are not), meaning that in a troubleshooting scenario you can avoid wasting precious time looking in the wrong place; all of the data is automatically rationalised into a single view, with no administrative effort required.
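As a rough illustration of the host-side half of that picture, the sketch below pulls per-VM virtual disk read IOPS straight from vCenter using pyVmomi. To be clear, this is not how Nimble implements VMVision; it simply shows the kind of per-VM counters vCenter exposes for correlation with array-side stats. The hostname and credentials are placeholders.

```python
# Sketch: pull per-VM virtual disk read IOPS from vCenter via pyVmomi.
# Not Nimble's implementation - just the raw counters such a service
# could correlate with array data. Host/credentials are placeholders.
import ssl

from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab use only; verify certs in prod
si = SmartConnect(host="vcenter.example.com",
                  user="readonly@vsphere.local",
                  pwd="********", sslContext=ctx)
content = si.RetrieveContent()
perf = content.perfManager

# Map readable counter names (group.name.rollup) to numeric counter IDs.
counters = {f"{c.groupInfo.key}.{c.nameInfo.key}.{c.rollupType}": c.key
            for c in perf.perfCounter}
read_iops = counters["virtualDisk.numberReadAveraged.average"]

# Walk every VM in the inventory and grab the latest realtime sample.
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
for vm in view.view:
    spec = vim.PerformanceManager.QuerySpec(
        entity=vm, intervalId=20, maxSample=1,  # 20s = realtime interval
        metricId=[vim.PerformanceManager.MetricId(counterId=read_iops,
                                                  instance="*")])
    for result in perf.QueryPerf(querySpec=[spec]):
        for series in result.value:
            print(vm.name, series.id.instance, series.value)

Disconnect(si)
```

The clever part of a service like VMVision is not gathering these numbers, it’s lining them up automatically against the array’s own view so the two sides can be compared without any of the manual rationalisation described earlier.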

As certain storage array features become relatively commoditised, it’s getting harder for vendors to set themselves apart from the field. Having strong analytics and management tools is definitely one way to do this; so much so that I was compelled to tweet as much at the time!

Disclaimer/Disclosure: My flights, accommodation, meals, etc, at Storage Field Day 8 were provided by Tech Field Day, but there was no expectation or request for me to write about any of the vendors products or services and I was not compensated in any way for my time at the event.
