Tag Archive for REST

Swordfish – A Standard by Any Other Name Would Smell As Sweet


Whether it’s the IEEE, the ISO or any other body, we live in a world governed by standards. This has the positive impact of enabling interoperability between devices and systems, but at the same time has the unfortunate side effect of hampering the development of new technologies which conflict with those standards, even if their adoption would ultimately provide a better outcome for everyone!

At the same time, many organisations (read: vendors) opt out of these standards and introduce their own. This is great for the vendor as it is tailored to their requirements and products, but it doesn’t help the customer or their lowly sysadmin who has to then implement a load of additional tooling to manage these products. Take S3 as an example; AWS took one look at what was out there in the market, decided that none of the standards met their requirements, so wrote their own!

The key seems to me to be finding a balance, where you implement a standard, but make it extensible, such that individual vendors can add additional data or functionality over and above the baseline. This means that you can always support the “lowest common denominator” for everyone.

So what is Swordfish?

Funnily enough, the folk from SNIA (the Storage Networking Industry Association) have implemented precisely this with one of their latest standards releases, Swordfish. Specifically, it defines a standard for the APIs used to manage storage devices in a consistent fashion, regardless of vendor or indeed storage class (for example, software-based hyper-converged solutions are supported, as well as block, file, object, etc!). They have achieved this by taking the existing SMI-S standard and refactoring it into a simplified model which is client, not vendor, oriented, and based on a REST API, JSON (the current industry favourite for almost all data interchange) and OData. Not only that, but they achieved this and agreed the standard with their many members in less than 12 months. Compared to your average RFC from the IETF, that’s lightning fast! 😮
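Because the whole thing is just REST plus JSON, a generic client needs nothing vendor-specific to read it. Here is a minimal sketch in Python: the payload below is hand-written for illustration (real responses come from a GET against the service, e.g. under `/redfish/v1/`), though the `Capacity`/`Data` field names follow the published Swordfish schema.

```python
import json

# Illustrative, hand-written Swordfish-style StoragePool payload.
# A real client would fetch this with an HTTP GET from the service.
payload = json.loads("""
{
    "@odata.type": "#StoragePool.v1_0_0.StoragePool",
    "Id": "Pool1",
    "Name": "Primary Pool",
    "Capacity": {
        "Data": {
            "AllocatedBytes": 10737418240,
            "ConsumedBytes": 5368709120
        }
    }
}
""")

# Plain JSON means no vendor SDK is required to pull out the numbers:
consumed = payload["Capacity"]["Data"]["ConsumedBytes"]
allocated = payload["Capacity"]["Data"]["AllocatedBytes"]
print(f"{payload['Name']}: {consumed / allocated:.0%} used")  # Primary Pool: 50% used
```

The same few lines would work against any conforming implementation, which is rather the point of the standard.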

Now this is not to say that your typical vendor is going to throw out everything they have today, but if they begin to run these APIs in parallel, I could see this eventually becoming the de facto standard for all storage management. In addition, SNIA have confirmed that if two or three (or more) vendors have a requirement for the same additional fields (which they will initially have to implement via extensions), then SNIA will ratify them within weeks. Truly an agile methodology for standards!
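The extension mechanism mentioned above is worth a quick sketch. In the Redfish/Swordfish convention, vendor-specific fields live under an `Oem` property namespaced by vendor, so baseline clients can ignore them safely. The payload and the "Contoso" vendor names below are made up for illustration:

```python
import json

# Hypothetical volume fragment: standard fields at the top level,
# vendor extras namespaced under "Oem" (the Redfish/Swordfish convention).
volume = json.loads("""
{
    "Id": "Vol1",
    "CapacityBytes": 1073741824,
    "Oem": {
        "Contoso": {
            "DedupeRatio": 2.4
        }
    }
}
""")

# A generic client reads the standard fields and simply ignores Oem...
print("Capacity:", volume["CapacityBytes"])

# ...while a vendor-aware client can opt in to the extras if present:
extra = volume.get("Oem", {}).get("Contoso", {})
print("Dedupe ratio:", extra.get("DedupeRatio", "n/a"))
```

If enough vendors end up shipping the same `Oem` field, that is exactly the sort of thing SNIA say they can promote into the baseline within weeks.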

The Tekhead Take

This seems to me to be a pragmatic approach to a difficult problem. Keeping vendors happy, whilst trying to make life easier for storage consumers and administrators by bringing storage management into the twenty-first century!

Despite being a relatively dry subject matter, I was actually quite interested and impressed with this innovation! People will still need dedicated local storage for many years to come, and these standards will help to enable them to manage storage in a more consistent fashion. Who knows, it may even promote more competition!

Want to Know More?

I was fortunate enough to meet the team from SNIA last year at their Colorado HQ, with Storage Field Day 13. One of the speakers (industry veteran Rob Peglar) also recently appeared as a guest on the Storage Unpacked podcast – an episode well worth a listen too!

Anyway, you can catch the session here:

SNIA Presents at Storage Field Day 13

SNIA have published a load of information on the standards here:


Finally, some of the other SFD13 delegates had their own thoughts on the session and standards as a whole. You can find them here:

Disclaimer/Disclosure: My flights, accommodation, meals, etc, at Storage Field Day 13 were provided by Tech Field Day, but there was no expectation or request for me to write about any of the vendors’ products or services, and I was not compensated in any way for my time at the event.

Where and why is my data growing?…

I’ve written recently about issues of data gravity and data inertia, and about how important analytics are to managing your data “stockpile”, but one thing I haven’t gone into is the constant challenge of actually understanding your data composition, i.e. what the hell am I actually storing?!

Looking back to my days as a Windows admin, maintaining what were at the time some massive, multi-terabyte (ooer – it was 10 years ago, to be fair) filers and shared document storage systems, we had little to tell us what the DNA of those file shares was: how much of it was documents and other business-related content, and how much was actually people storing their entire MP3 collections and “family photos” on their work shared drives (yes, 100% true!).

Back then, our only method of combating these issues was to run TreeSize to see who was using the most space, then do Windows searches for specific file types and manually clear out the crud; an unenviable task which turned up a few surprising finds I won’t go into just now (ooer for the second time)! The problem was that we just didn’t know what we had!
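For anyone facing the same chore today, the manual approach boils down to something like this little sketch: walk the share and total up bytes per file extension, so the MP3 hoarders have nowhere to hide. (The UNC path in the usage comment is, of course, hypothetical.)

```python
import os
from collections import Counter

def size_by_extension(root):
    """Walk a directory tree and total file sizes per extension."""
    totals = Counter()
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            ext = os.path.splitext(name)[1].lower() or "(none)"
            try:
                totals[ext] += os.path.getsize(os.path.join(dirpath, name))
            except OSError:
                pass  # file vanished or unreadable mid-walk; skip it
    return totals

# e.g. the three biggest offenders on a share:
# for ext, total in size_by_extension(r"\\\\filer\\share").most_common(3):
#     print(ext, total)
```

It works, but on a multi-terabyte filer a full walk like this takes hours, which is precisely the problem the next section is about.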

Ten years later I have spoken to customers who are consuming data at very significant rates, but don’t have a grip on where it’s all going…

With that in mind, I was really interested in what the chaps at Qumulo had come up with when they presented at SFD8 recently. As they said at the time, the management of storage is getting easier, but the management of data is getting very much harder! Their primary vision is therefore quite succinctly described as “Build visible data and make storage invisible”.

Their “Data Aware” scale-out NAS solution is based around providing near-realtime analytics on the metadata, and was designed to meet the requirements of the 600 companies and individuals they interviewed before they even came up with their idea!

The product is designed to be software only and subscription-based, though they also provide scale-out physical 1U / 4U appliances as well. I guess the main concept there is “have it your way”; there are still plenty of customers out there who want to buy a software solution which is pre-qualified and supported on specific hardware (which sounds like an oxymoron, but each to their own, I say)! Most of Qumulo’s customers today actually buy the appliances.

The coolest thing about their solution is definitely their unique file system (QSFS – Qumulo Scalable File System). It uses a very clever, proprietary method to track changes within the filesystem, based on aggregating child attributes up the tree (see their SFD8 presentation for more info). Because you then don’t necessarily need to walk the entire tree to get an answer to a query (it should be noted the query would need to be one specifically catered for by Qumulo, though), it can present statistics based on those attributes in near-realtime.
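To make the aggregation idea concrete, here is a toy sketch (emphatically not Qumulo’s actual implementation): every directory node caches an aggregate of its subtree, and a change to a file only updates the nodes on its path back to the root. A query like “how big is /projects?” then reads one cached number instead of walking millions of files.

```python
class DirNode:
    """Toy directory node that keeps a running aggregate of its subtree."""

    def __init__(self, parent=None):
        self.parent = parent
        self.subtree_bytes = 0   # aggregate of everything below, kept current
        self.children = {}

    def child(self, name):
        if name not in self.children:
            self.children[name] = DirNode(parent=self)
        return self.children[name]

    def add_bytes(self, delta):
        # Propagate the change up the path: O(depth), not O(files).
        node = self
        while node is not None:
            node.subtree_bytes += delta
            node = node.parent

root = DirNode()
root.child("projects").child("alpha").add_bytes(500)
root.child("projects").child("beta").add_bytes(250)
root.child("home").add_bytes(100)

print(root.child("projects").subtree_bytes)  # 750, no tree walk needed
print(root.subtree_bytes)                    # 850
```

The trade-off is that only queries the aggregates were designed for come back instantly, which matches the caveat above about queries Qumulo specifically caters for.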

Whiteboard Dude approves!

I would have killed for this level and speed of insight back in my admin days, and frankly I have a few customers right now who would really benefit!

Taking this a step further, the analytics can also provide performance statistics based on file path and type, so for example it could show you where the hotspots are in your filesystem, and which clients are generating them.

Who’s using my storage?

Stuff I would like to see in future versions (though I know they don’t chase the service provider market) includes the ability to present storage to more than one Active Directory domain, straightforward RBAC (Role-Based Access Control) at the management layer, and more of the standard data services you see from most vendors (the RFP tick-box features). Being able to mix and match the physical appliance types would also be useful as you scale and your requirements change over time, but I guess if you need flexibility, go with the software-only solution.

At a non-feature level, it would be sensible if they could rename their aggregate terminology as I think it just confuses people (aggregates typically mean something else to most storage bods).

Capacity Visualisation

Overall, though, I think the Qumulo system is impressive, as are the founders’ credentials. Their CEO/CTO team of Peter Godman and Aaron Passey, with whom we had a good chinwag outside of the SFD8 arena, both played a big part in building the Isilon storage system. As an organisation, they already regularly work with customers with over 10 billion files and up to 4PB of storage.

If their system is capable of handling this kind of scalability having only come out of stealth 8 months ago, they’re definitely one to watch…

Further Reading
Some of the other SFD8 delegates have their own takes on the presentation we saw. Check them out here:

Dan Frith – Qumulo – Storage for people who care about their data

Scott D. Lowe – Data Awareness Is Increasingly Popular in the Storage Biz

Disclaimer/Disclosure: My flights, accommodation, meals, etc, at Storage Field Day 8 were provided by Tech Field Day, but there was no expectation or request for me to write about any of the vendors’ products or services, and I was not compensated in any way for my time at the event.
