One of the more common requirements I come across on a day-to-day basis, working with organisations across a broad spectrum of industries, is the question of how to manage long-term data retention.
Frankly, I have massively oversimplified the question, as there are many more nuances to it than this! Some of the questions, discussion points and potential solutions I see when trying to scope out and define a long-term data retention strategy are below. We assume in this case that we are talking about backing up application data, but the same can apply to file data, such as from a file server.
Long Term Data Retention – Questions, questions, questions?!
Like beautiful snowflakes, every business is unique, so ultimately it always comes back to gathering the requirements of the individual business.
What are the regulatory and compliance requirements for long-term retention of data, and what are the consequences of losing that data? In the new world, this could be pretty serious, especially with things like GDPR right around the corner. Escalating this up the business hierarchy can get buy-in from other parts of the business to provide additional budget outside of IT for a solution which meets the actual requirements, not just a botch job which will likely fail when put to the test.
How long does the data actually need to be retained? Looking at most current applications, if we are relying on being able to read back data in 7 years' time, current or future backup software may still work, but will we have the kit to read the tapes or data? If using spinning rust as a storage medium, do we expect to be able to migrate data from one disk system to another easily in future, and if so, how does that impact things like encryption, capacity, deduplication and compression of that data?
What is it that we are trying to protect against? Deliberate or accidental deletion, total destruction of a server, array or DC, or perhaps we just need to be able to prove what the data looked like at a specific date/time.
How granular does the data need to be? For example, do we need to be able to pull a file version from a specific week in the past X years? The more granular we need to get, the more expensive the solution potentially becomes. If we have controls in place to protect archive data against accidental / deliberate deletion, then we may not actually need to keep more than a few days or weeks of backups (as an example).
The use of FIM (File Integrity Monitoring) tooling can be very helpful in this regard, especially for flat file structures. It can track all changes to the file system, and if something is removed or updated, it can alert the server teams to investigate why and restore the data from a recent backup.
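To illustrate the idea (rather than any specific FIM product), here is a minimal sketch in Python. The archive directory and baseline file paths are hypothetical; it builds a baseline of SHA-256 hashes and flags anything added, removed or changed on later runs:

```python
# Minimal sketch of a file integrity check (illustrative only, not a
# replacement for a proper FIM product). Paths below are hypothetical.
import hashlib
import json
from pathlib import Path

BASELINE = Path("baseline.json")   # hypothetical baseline location
WATCHED = Path("/srv/archive")     # hypothetical directory to monitor

def hash_tree(root: Path) -> dict:
    """Return a path -> SHA-256 map for every file under root (reads whole files)."""
    hashes = {}
    for f in root.rglob("*"):
        if f.is_file():
            hashes[str(f)] = hashlib.sha256(f.read_bytes()).hexdigest()
    return hashes

current = hash_tree(WATCHED)

if not BASELINE.exists():
    BASELINE.write_text(json.dumps(current, indent=2))
    print("Baseline created.")
else:
    baseline = json.loads(BASELINE.read_text())
    removed = baseline.keys() - current.keys()
    added = current.keys() - baseline.keys()
    changed = {p for p in baseline.keys() & current.keys()
               if baseline[p] != current[p]}
    for p in sorted(removed | added | changed):
        print(f"ALERT: {p} was added, removed or modified")
```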
Can the application or server prevent deliberate or accidental data deletion? If the application can be treated as, or write to, WORM (Write Once, Read Many) storage, then the risk of data loss is further reduced, especially if that storage can be replicated off site. This doesn't really help much with things like SQL databases, however!
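As one example of WORM-style behaviour for exported backup files (not something any particular vendor presented), object storage with a retention lock can work. The sketch below assumes boto3 is installed, AWS credentials are configured, and a hypothetical S3 bucket was created with Object Lock enabled:

```python
# Sketch of writing a backup export to WORM-style object storage using
# S3 Object Lock. Bucket, key and file names are hypothetical.
from datetime import datetime, timedelta, timezone
import boto3

s3 = boto3.client("s3")

# Roughly 7 years of retention; the object cannot be deleted or shortened
# before this date when COMPLIANCE mode is used.
retain_until = datetime.now(timezone.utc) + timedelta(days=7 * 365)

with open("app-archive-export.csv", "rb") as data:
    s3.put_object(
        Bucket="my-archive-bucket",
        Key="archives/app-archive-export.csv",
        Body=data,
        ObjectLockMode="COMPLIANCE",
        ObjectLockRetainUntilDate=retain_until,
    )
```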
Where is the archive data for the application or solution actually held? Is it within the live system (e.g. the live DB), or can it be exported onto a tertiary archive system where it becomes read-only to all parties, including administrators? Even better, can the application export the data into a generic format which is more likely to be readable in 25+ years' time (such as CSV, plain text, etc.)? This provides quite a bit more flexibility in terms of future access and recovery options.
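As a rough illustration of that kind of export, the sketch below dumps a hypothetical table from a hypothetical SQLite archive database into CSV; a real application would obviously have its own schema and export tooling:

```python
# Minimal sketch of exporting application archive data to a generic,
# long-lived format (CSV). Database and table names are hypothetical.
import csv
import sqlite3

conn = sqlite3.connect("app_archive.db")        # hypothetical archive database
cursor = conn.execute("SELECT * FROM orders")   # hypothetical table

with open("orders_archive.csv", "w", newline="", encoding="utf-8") as out:
    writer = csv.writer(out)
    writer.writerow(col[0] for col in cursor.description)  # header row
    writer.writerows(cursor)                                # data rows

conn.close()
```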
Does the application or server provide RBAC (Role-Based Access Control), and has it actually been implemented yet? If we minimise the number of people who could update or delete data (maliciously or accidentally), we minimise the risk of data loss.
What is the budget for the solution? All-singing, all-dancing physical or software solutions can be great, but you may not be able to afford them.
Are we looking for an appliance-based solution which includes storage, replication, backup plugins, etc., or do we already have the hardware and just need some software? This often, but not always, comes down to a time vs budget question. Do you want to spend your team's time managing clunky backup software, or just buy a policy-based appliance which does half the work for you?
What are the sovereignty requirements for the data, and would a cloud-based service be appropriate for the business? It can be very cheap to store data in something like S3 or blob storage, provided the business accepts this and you don't need to pull any of the data back very often (if at all).
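For example, pushing an archive export to a low-cost storage tier can be as simple as the rough sketch below (assuming boto3, configured AWS credentials and a hypothetical bucket; a lifecycle rule could then move older objects to an even colder tier):

```python
# Sketch of pushing an archive export to low-cost object storage.
# Bucket, key and file names are hypothetical.
import boto3

s3 = boto3.client("s3")

s3.upload_file(
    Filename="orders_archive.csv",
    Bucket="my-archive-bucket",
    Key="archives/2017/orders_archive.csv",
    ExtraArgs={"StorageClass": "STANDARD_IA"},  # infrequent-access tier
)
```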
How quickly is the data required when requested, how large is a typical access request, and how often are requests made? If hours or days are acceptable, then an offline or cloud solution may be appropriate, but anything where immediate access is required is a different story.
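To illustrate why retrieval time matters, the sketch below requests an object back from a cold tier, where the restore job itself can take hours to complete (bucket and key are hypothetical, and the object is assumed to have already been transitioned to Glacier):

```python
# Sketch of requesting an object back from a cold/offline tier.
# The restore is asynchronous and can take hours depending on the tier.
import boto3

s3 = boto3.client("s3")

s3.restore_object(
    Bucket="my-archive-bucket",
    Key="archives/2017/orders_archive.csv",
    RestoreRequest={
        "Days": 7,  # keep the restored copy available for a week
        "GlacierJobParameters": {"Tier": "Bulk"},  # cheapest, slowest option
    },
)
```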
Similarly, will we want to restore or access this data in the event of a DR, and does this solution form part of our DR strategy? Perhaps it's only required for access to much older data because we are replicating the most recent data to a DR facility!
As we can see, there are many, many, [many!] things to think about when considering long-term retention of data in a backup or archive solution.
What brought this up Alex?…
… I hear you ask!
I recently attended Storage Field Day 13, where we had a presentation from StorageCraft, a backup vendor which has been in the SMB and mid-market space for many years, and it got me thinking!
The latest iteration of their backup software provides a local cache with cloud integration, and the added ability to spin up a DR environment in the event of an outage to your primary DC. A pretty nifty feature if you are legally able to store your data outside of your local environment (they currently have DCs in the US and EU only).
They can also create backups using their proprietary SPF file format, which has apparently not changed since its inception around 15 years ago. There is also no concept of a media server, as each server manages its own backups (albeit with the ability to use a central scheduler tool). This gets around the issue of backup compatibility, though it may limit their ability to provide additional data services for the backup files, such as encryption, dedupe or compression, beyond those of the storage targets they reside on.
This is what tickled my mental matrix into deploying my keyboard! 🙂
Want to Know More?
The session was recorded and is now available to stream online:
http://techfieldday.com/appearance/exablox-presents-at-storage-field-day-13/
Some of the other SFD13 delegates had their own thoughts on the session and StorageCraft in general. You can find them here:
Dan Frith – StorageCraft Are In Your Data Centre And In The Cloud
Scott Lowe – Backup and Recovery in the Cloud: Simplification is Actually Really Hard
Disclaimer/Disclosure: My flights, accommodation, meals, etc, at Storage Field Day 13 were provided by Tech Field Day / Gestalt IT, but there was no expectation or request for me to write about any of the vendors' products or services and I was not compensated in any way for my time at the event.