I Like Big Files and I Cannot Lie

You other vendors, can’t deny,
When an array walks in with an itty bitty waste [-ed capacity],
And many spindles in your face
You get sprung, want to pull up tough,
‘Cause you notice that storage was stuffed!

Ok… I’ll stop now! I’m just a bit sad and always wanted an excuse to use that as a post opener! 🙂

There is a certain, quite specific type of customer whose main requirements revolve around the storage of large data sets consisting of thousands to millions of huge files. Think media / TV / movie companies, video surveillance or even PACS imaging and genomic sequencing. Ultimately we’re talking petabyte-scale capacities – more than your average enterprise needs to worry about!

How you approach storage of this type of data is worlds apart from your average solution!

The Challenges of “Chunky” Data

Typical challenges involve having multiple silos of data across multiple locations, each with different performance and workload characteristics. Then you have different storage protocols for different applications, or for different phases of their data processing and delivery. Each of those silos then requires different skills to manage, and its own capacity management regime.

Sir Mix-a-Lot likes big files

On top of that, for the same reason we moved away from parity groups in arrays to wide striping, these silos suffer from IO and networking hotspots, wasted capacity (sometimes referred to as trapped white space) and wasted performance, none of which can be shared across multiple systems.
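
To make the trapped white space point concrete, here’s a quick back-of-the-envelope sketch. The silo names and capacities below are entirely made up for illustration; the point is simply that free space locked inside individual silos can’t be combined to absorb a single large data set:

```python
# Hypothetical numbers, purely for illustration: three storage silos,
# each with plenty of aggregate free space, but none able to take a
# single large incoming data set on its own.

silos_tb = {"edit": 100, "render": 100, "archive": 100}  # raw capacity per silo (TB)
used_tb = {"edit": 85, "render": 70, "archive": 85}      # current usage (TB)

free_per_silo = {name: silos_tb[name] - used_tb[name] for name in silos_tb}
total_free = sum(free_per_silo.values())

new_project_tb = 40  # incoming data set that must live in one namespace

print(f"Free per silo: {free_per_silo}")   # {'edit': 15, 'render': 30, 'archive': 15}
print(f"Aggregate free: {total_free} TB")  # 60 TB free in total...
print(f"Fits in any one silo? {any(f >= new_project_tb for f in free_per_silo.values())}")  # False
print(f"Fits in one pooled system? {total_free >= new_project_tb}")  # True
```

In this toy example there is 60TB of free capacity on the floor, yet a 40TB project still forces a new purchase; pool the same disks into a single wide-striped system and the problem disappears.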

Finally (and arguably most importantly), how do you ensure the integrity, resilience, and durability of this data, which by its very nature typically requires long-term retention?

Ideal Solution

What you really need is a single storage system which not only scales to multi-petabyte capacities with multiple protocols, but is also reasonably easy to manage, even with a very high capacity-to-administrator ratio.

You then need to ensure that data can also be protected against accidental or malicious file modification or deletion.
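
The standard building block here is the point-in-time, read-only snapshot. The toy sketch below models the concept only; it isn’t SnapshotIQ or SmartLock (or any vendor’s API), just an illustration of why immutable copies defeat both fat fingers and ransomware:

```python
import copy

# Toy model of point-in-time snapshots (illustrative only, not any
# vendor's API): snapshots are never modified once taken, so the live
# file system can always be rolled back after a bad change.

live = {"/genomics/run-042.bam": "original content"}
snapshots = []

snapshots.append(copy.deepcopy(live))  # scheduled, immutable snapshot

live["/genomics/run-042.bam"] = "encrypted-by-ransomware"  # the bad day

live = copy.deepcopy(snapshots[-1])    # roll back to the last snapshot
print(live["/genomics/run-042.bam"])   # original content
```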

Finally, you need the system to be able to replicate additional copies to remote sites, as backing up petabytes of data is simply unrealistic! Similarly, you may want multiple replicas or additional pools outside of your central repository which all replicate back to the mothership; for example, in ROBO or multi-site solutions where large files need to be edited locally.
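
As a rough sketch of that hub-and-spoke topology (a toy model with made-up names, not SyncIQ or any real replication engine), the shape of the problem is a set of edge sites that each push their local changes back to a central hub:

```python
from dataclasses import dataclass, field

# Toy hub-and-spoke replication model. All names and fields here are
# hypothetical; real products track changed blocks, not whole files.

@dataclass
class Site:
    name: str
    files: dict = field(default_factory=dict)  # path -> version number

def replicate_to_hub(hub: Site, spokes: list) -> None:
    """Pull each spoke's newer file versions back into the central hub."""
    for spoke in spokes:
        for path, version in spoke.files.items():
            if hub.files.get(path, -1) < version:
                hub.files[path] = version

# A ROBO edit suite works on a large media file locally...
hub = Site("datacentre-mothership")
robo = Site("edit-suite-london", files={"/projects/doc.mxf": 3})

# ...and its changes flow back to the central repository.
replicate_to_hub(hub, [robo])
print(hub.files)  # {'/projects/doc.mxf': 3}
```

The edits stay fast because they happen locally, while the hub accumulates the canonical copy of everything, which is exactly what makes the single-failure-domain caveat below worth taking seriously.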

As my good friend Josh De Jong said recently:

https://twitter.com/EuroBrew/status/798927164175331328

Of course, the biggest drawback of using this approach is that you have one giant failure domain. If something somehow manages to proverbially poison your “data lake”, that’s a hell of a lot of data to lose in one go!

DellEMC Isilon

During our recent Tech Field Day 12 session at DellEMC, I was really interested to see how the DellEMC Isilon scale-out NAS system was capable of meeting many of these requirements, especially as this is a product which can trace its heritage all the way back to 2001! In fact, their average Isilon deployment is around 1PB in size, and their largest customer is running 144PB! Scalability, check!

The Isilon team also confirmed that around 70% of their 8,000+ customers trust the solution sufficiently to forgo any external backup, relying instead on SnapshotIQ, SyncIQ and, in some cases, SmartLock to protect their data. That’s a pretty significant number!

One thing I am not so keen on with the Isilon (and to be fair, with many other “traditional” / old guard storage vendor offerings) is the complexity and breadth of the licensing; almost every interesting feature requires its own license. If the main benefit of the data lake is simplicity, then I would far rather have a single price with perhaps one or two uplift options for licenses, than an a la carte menu.

In addition, the limit of 50 security domains provides some flexibility for service providers, but then caps the size of your “data lake” at 50 customers. It would be great to see this limit increased in future.

The Tekhead Take

Organisations looking to retain data in these quantities need to weigh up the relative risks of using a single system for all storage, versus the costs and complexity of multiple silos. Ultimately it is down to each individual organisation to work out what most closely matches their requirements, but for the convenience of a single large repository for all of your data, the DellEMC Isilon still remains a really interesting proposition.

Further Info

You can catch the full Isilon session at the link below:
Dell EMC Presents at Tech Field Day 12

Further Reading

Some of the other TFD delegates had their own takes on the presentation we saw. Check them out here:

Disclaimer: My flights, accommodation, meals, etc. at Tech Field Day 12 were provided by Tech Field Day, but there was no expectation or request for me to write about any of the vendors’ products or services.
