Without good analytics you don't have a competitive storage product

Throughout my career, analysing storage utilisation for solution design and capacity management has never been an easy task! Even today, when I speak to customers about utilisation, they often don't have the management tools in place on their legacy arrays or servers to help us understand what their true workloads look like, or indeed often even the most basic statistics.

Gathering them is laborious at best, and almost impossible at worst. For example:

  • One major vendor I used to work with could only surface a small amount of basic throughput and latency data covering roughly the past 30 days, along with a bit of controller and port utilisation, through their Java-based BUI (Java version specific of course – I still shudder at the thought).
  • More recently, another vendor I have used has a web-based stats console which can aggregate multiple arrays, but it relies on a rather outdated method of visualisation: you fill in a big form to get the stats generated, and the resulting graphs don't include any kind of trending data, 95th percentiles, etc.
  • Another vendor array I work with fairly regularly requires you to run an API call against the array, which only returns the stats accumulated since the last time you ran it. By polling the API every 30 seconds to a minute, you can build up a body of stats over time (see the sketch after this list). Not brilliant, and it's a total pain to rationalise the exported data.
  • Even if you have the stats at the array, you then need to gather the same stats at the connected hosts, to ensure that they roughly correlate and that you don't have any potential issues on the network (which is significantly more likely if, say, you are running storage and IP traffic on a converged network fabric).
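
To give a flavour of what "rationalising the exported data" ends up looking like, here is a minimal sketch of the kind of polling script referenced above. It is purely illustrative: the endpoint URL, JSON field names and polling interval are my own hypothetical placeholders rather than any specific vendor's API. The point is simply that you end up accumulating the interval results yourself and reducing them to the figures (mean, 95th percentile, peaks) the console should arguably have given you in the first place.

```python
"""
A minimal sketch (not any specific vendor's API) of the polling approach
described above: the array only returns stats accumulated since the last
call, so you poll on a fixed interval, keep the results yourself, and then
reduce them to the figures you actually wanted in the first place.
"""
import time
import statistics
import requests  # assumed to be available; any HTTP client would do

ARRAY_STATS_URL = "https://array.example.local/api/stats"  # hypothetical endpoint
POLL_INTERVAL_S = 30
POLL_COUNT = 120  # one hour's worth at 30-second intervals


def poll_once() -> dict:
    """Fetch the stats the array has accumulated since the previous call."""
    resp = requests.get(ARRAY_STATS_URL, timeout=10)
    resp.raise_for_status()
    # Assumed response shape: {"avg_latency_ms": float, "throughput_mbps": float}
    return resp.json()


def summarise(samples: list[dict]) -> dict:
    """Reduce the raw interval samples to the figures useful for sizing."""
    latencies = [s["avg_latency_ms"] for s in samples]
    throughputs = [s["throughput_mbps"] for s in samples]
    return {
        "intervals": len(samples),
        "mean_latency_ms": statistics.mean(latencies),
        "p95_latency_ms": statistics.quantiles(latencies, n=20)[18],  # 95th percentile
        "peak_throughput_mbps": max(throughputs),
    }


if __name__ == "__main__":
    samples = []
    for _ in range(POLL_COUNT):
        samples.append(poll_once())
        time.sleep(POLL_INTERVAL_S)
    # In practice you would run this for days or weeks to capture a
    # representative workload, and persist the samples somewhere sensible.
    print(summarise(samples))
```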

In a word: clunky!

One of the things that struck me about many, if not all, of the vendors at Storage Field Day 8 was how much better their management consoles and analytics engines were than virtually all of those I have used in the past.

Several vendors use their dial-home features to send the analytics back to HQ. This way, the stats for individual customers, as well as for their customer base as a whole, can be kept almost indefinitely and used to improve the product, as well as to pre-emptively warn customers of potential issues through analysis of this "big data". It also saves customers from having to spend yet more money on storing the data about their data storage!

Of those we spoke to, one vendor in particular really stood out for me: Nimble Storage. Their InfoSight platform gathers 30-70 million data points per array, per day, which are uploaded to their central analytics platform and made accessible through a very user-friendly interface. It can produce a number of very useful graphs and statistics, send scheduled reports, and will even provide predictive upgrade modelling based on current trends.
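
That kind of predictive upgrade modelling doesn't have to be complicated to be useful, and InfoSight's models are of course far more sophisticated than anything I could show here. Purely as an illustrative sketch (the capacity figures and the simple linear fit below are my own assumptions, not anything Nimble actually does), projecting an upgrade date from a current growth trend might look something like this:

```python
"""
A back-of-the-envelope illustration of trend-based capacity forecasting,
not Nimble's actual InfoSight model. Given daily used-capacity samples,
fit a straight line and project the day the array fills up.
"""
from statistics import linear_regression  # Python 3.10+

# Hypothetical daily used-capacity samples in TB (day 0, 1, 2, ...)
used_tb = [41.2, 41.9, 42.3, 43.1, 43.6, 44.4, 45.0, 45.8]
capacity_tb = 60.0  # usable capacity of the (imaginary) array

days = list(range(len(used_tb)))
slope, intercept = linear_regression(days, used_tb)  # TB consumed per day

if slope <= 0:
    print("Usage is flat or shrinking; no upgrade needed on current trend.")
else:
    days_until_full = (capacity_tb - used_tb[-1]) / slope
    print(f"Growing at {slope:.2f} TB/day; "
          f"projected full in ~{days_until_full:.0f} days.")
```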

Recently they have also added a new opt-in VMVision service which can plug into your vCenter server to track the IO stats for the VMs from a host / VM perspective as well, presenting these alongside the array data. This shows you exactly where your potential bottlenecks are (or are not), meaning that in a troubleshooting scenario you avoid wasting precious time looking in the wrong place, and all of the data is automatically rationalised into a single view with no administrative effort required.
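
The reason that single view matters is that the difference between what the host sees and what the array reports tells you where the latency is being added. The sketch below illustrates the underlying idea with made-up numbers; in a VMVision-style tool the collection and correlation all happen automatically, so treat this purely as an illustration of the principle rather than how the product itself works:

```python
"""
Illustrative only: comparing host-side and array-side latency for the same
volumes to decide where to look first. The figures and the volume names are
made up, and the 5 ms threshold is an arbitrary choice for this example.
"""

# Hypothetical average read latency (ms) for the same volumes, as seen from
# the ESXi host (device latency) and as reported by the array itself.
host_latency_ms = {"vol-sql01": 18.4, "vol-web01": 2.1, "vol-exch01": 3.0}
array_latency_ms = {"vol-sql01": 2.2, "vol-web01": 1.8, "vol-exch01": 2.7}

for vol, host_ms in host_latency_ms.items():
    array_ms = array_latency_ms[vol]
    gap = host_ms - array_ms
    if gap > 5.0:
        # The array is fast but the host isn't seeing it: look at the fabric,
        # HBA queue depths, multipathing, etc., rather than the array.
        print(f"{vol}: {gap:.1f} ms added outside the array - check the path")
    else:
        print(f"{vol}: host and array broadly agree "
              f"({host_ms:.1f} ms vs {array_ms:.1f} ms)")
```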

As certain storage array features become relatively commoditised, it's getting harder for vendors to set themselves apart from the field. Having strong analytics and management tools is definitely one way to do this. So much so that I was compelled to tweet the following at the time:

Disclaimer/Disclosure: My flights, accommodation, meals, etc, at Storage Field Day 8 were provided by Tech Field Day, but there was no expectation or request for me to write about any of the vendors' products or services, and I was not compensated in any way for my time at the event.

Storage, Tech Field Day