Swordfish – A Standard by Any Other Name Would Smell As Sweet

Whether it’s the IEEE, the ISO or any other body, we live in a world governed by standards. This has the positive effect of allowing interoperability between devices and elements, but at the same time has the unfortunate side effect of hampering the development of new technologies which conflict with those standards, even if their adoption would ultimately provide a better outcome for everyone!

At the same time, many organisations (read: vendors) opt out of these standards and introduce their own. This is great for the vendor, as the result is tailored to their requirements and products, but it doesn’t help the customer or their lowly sysadmin, who then has to implement a load of additional tooling to manage these products. Take S3 as an example: AWS took one look at what was out there in the market, decided that none of the standards met their requirements, and wrote their own!

The key, it seems to me, is finding a balance: implement a standard, but make it extensible, so that individual vendors can add data or functionality over and above the baseline. That way you can always support the “lowest common denominator” for everyone.

So what is Swordfish?

Funnily enough, the folk from SNIA (the Storage Networking Industry Association) have implemented precisely this with one of their latest standards releases, Swordfish. Specifically, it defines a standard API for managing storage devices in a consistent fashion, regardless of vendor or indeed storage class (software-based hyper-converged solutions are supported, as well as block, file, object, etc.!).

They have achieved this by taking the existing SMI-S standards and refactoring them into a simplified model which is client, not vendor, oriented, based on a REST API, JSON (the current industry favourite for almost all data interchange) and OData. Not only that, but they have achieved this and agreed the standards with their many members in less than 12 months. Compared to your average RFC from the IETF, that’s lightning fast! 😮
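To make that concrete, here’s a minimal sketch (in Python, using the requests library) of what talking to a Swordfish service can look like: plain HTTPS GETs returning JSON, starting from the Redfish/Swordfish service root. The host, credentials and exact resource paths below are illustrative assumptions; the real layout varies by implementation and spec version.

```python
import requests

# Hypothetical Swordfish-capable array; host, credentials and paths are
# illustrative assumptions, not any specific vendor's implementation.
BASE = "https://array.example.com"

session = requests.Session()
session.auth = ("admin", "password")   # illustrative credentials
session.verify = False                 # lab convenience only; use real certs in production

# The Redfish/Swordfish service root is just JSON over an HTTP GET
root = session.get(BASE + "/redfish/v1/").json()
print(root.get("Name"))

# Follow the OData reference to the storage services collection
# (property name per the early Swordfish model; layout varies by version)
svc_path = root["StorageServices"]["@odata.id"]
services = session.get(BASE + svc_path).json()
for member in services.get("Members", []):
    print(member["@odata.id"])
```

The nice part is that nothing above is vendor-specific: any client that can issue an HTTP GET and parse JSON can walk the resource tree in the same way, whatever badge is on the front of the array.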

Now, this is not to say that your typical vendor is going to throw out everything they have today, but if they begin to run these APIs in parallel, I could see this eventually becoming the de facto standard for all storage management. In addition, SNIA have confirmed that if two, three or more vendors have a requirement for the same additional fields (which they will initially have to implement via extensions), then SNIA will ratify them within weeks. Truly an agile methodology for standards!
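To illustrate that extension mechanism, here’s a minimal sketch of how a vendor extra might sit alongside the standard fields. Redfish/Swordfish reserve an “Oem” property for exactly this purpose; the “Contoso” namespace and “DedupeRatio” field below are invented for illustration.

```python
# A hypothetical Swordfish volume payload, showing the standard "Oem"
# property that Redfish/Swordfish reserve as the vendor extension point.
# "Contoso" and "DedupeRatio" are invented for illustration.
volume = {
    "@odata.id": "/redfish/v1/StorageServices/1/Volumes/1",
    "Name": "vol01",
    "CapacityBytes": 1099511627776,  # 1 TiB
    "Oem": {
        "Contoso": {
            "DedupeRatio": 3.2,
        }
    },
}

# A generic client relies only on the standard fields...
print(volume["Name"], volume["CapacityBytes"])

# ...and can opportunistically read vendor extras without breaking
# against arrays that don't provide them.
extras = volume.get("Oem", {}).get("Contoso", {})
print("Dedupe ratio:", extras.get("DedupeRatio", "n/a"))
```

If enough vendors end up shipping the same extra field in their Oem sections, that’s exactly the signal SNIA use to promote it into the standard itself.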

The Tekhead Take

This seems to me to be a pragmatic approach to a difficult problem: keeping vendors happy, whilst making life easier for storage consumers and administrators by bringing storage management into the twenty-first century!

Despite the relatively dry subject matter, I was actually quite interested in and impressed by this innovation! People will still need dedicated local storage for many years to come, and these standards will help them to manage it in a more consistent fashion. Who knows, it may even promote more competition!

Want to Know More?

I was fortunate enough to meet the team from SNIA last year at their Colorado HQ as part of Storage Field Day 13. One of the speakers (industry veteran Rob Peglar) also recently appeared as a guest on the Storage Unpacked podcast – an episode well worth a listen, too!

Anyway, you can catch the session here:

SNIA Presents at Storage Field Day 13

SNIA have published a load of information on the standards here:

http://snia.org/swordfish

Finally, some of the other SFD13 delegates had their own thoughts on the session and standards as a whole. You can find them here:

Disclaimer/Disclosure: My flights, accommodation, meals, etc, at Storage Field Day 13 were provided by Tech Field Day, but there was no expectation or request for me to write about any of the vendors products or services and I was not compensated in any way for my time at the event.

Storage, Tech Field Day