
Scale-Out Doesn’t Just Mean Applications


A couple of months ago I wrote a post entitled “Scale-Out. Distributed. Whatever the Name, it’s the Future of Computing”.

Taking the concept a step further, I recently started thinking about other elements in IT which are moving in that direction; not just applications and storage, but underlying infrastructure and management elements too.

Then it dawned on me that this really is not a new thing… we’ve been taking this approach for years! Technologies like VMware vSphere have enabled us to become trusting, almost presumptuous, that we can add resources as we need them, increasing the shared pool transparently and continuing to service requirements whilst eliminating downtime. (You can even use them to scale up on-the-fly if you really have to!)

The current breed of infrastructure engineers and startups have grown up in this era, and the great thing is that it has now become part of their DNA! Solutions are typically no longer designed from scratch to be scale-up in nature, hitting some artificial capacity limit or requiring specific elements to be scaled individually to avoid nasty bottlenecks.

Instead, infrastructure is being designed to scale out natively: distributed architectures, balancing workloads and metadata evenly across platforms. This has the added benefit, of course, of making them more resilient to the failure of individual components.

Backup isn’t Sexy, but it’s Necessary

One great example of this new architecture paradigm (drink!) is Rubrik, a startup in the backup space whom we met at Tech Field Day 12. Their home-grown distributed file system, distributed metadata, built-in off-site replication and global namespace provide a massively scalable and resilient backup system.

All of the roles from a traditional backup solution (backup proxies, media servers, metadata servers, etc.) are now rolled into a single, scale-out platform. As I seem to find myself saying more and more often these days, KISS personified!

With shrinking IT teams, I commonly find that companies are willing to trade budget for time savings. A simple, policy-driven management interface, with off-site replication done over-the-wire, saves a great deal of operational time!

As an added bonus, it can even replicate out to S3, Blob and NFS targets, giving even more options for off-site replication. Of course, a big fat pipe to the internet will cost you more each month; though you’re probably investing in that anyway, to meet your employees’ peak lunchtime demand for Facebook and YouTube! 🙂

Much like any complex machine, under the hood, Rubrik is pretty impressive. There is a masterless cluster management solution, multi-tier flash and disk for performance, and a clever redirect-on-write snapshot chain algorithm, which minimises capacity utilisation whilst providing very granular restores.

The key thing here, though, is we don’t really care; we are a consumer society who just wants things to work, as we have more exciting things than backup to worry about!


TL;DR

We have enough complexity in IT these days without having to worry about backup. I would say that the simple to manage, scale-out solution from Rubrik is certainly worth considering as part of any PoC or RFP! 🙂

Further Info

You can catch the full Rubrik session at the link below:
Rubrik Presents at Tech Field Day 12

Further Reading

Some of the other TFD delegates had their own takes on the presentation we saw. Check them out here:

Disclaimer: My flights, accommodation, meals, etc at Tech Field Day 12 were provided by Tech Field Day, but there was no expectation or request for me to write about any of the vendors products or services.

Amazon AWS Tips and Gotchas – Part 10 – EFS (Elastic File System)

Continuing in this series of blog posts taking a bit of a “warts and all” view of a few Amazon AWS features, below are a handful more tips and gotchas when designing and implementing solutions on Amazon AWS. This week, we talk about one of the newest additions to AWS: EFS (aka Elastic File System).

For the first post in this series with a bit of background on where it all originated from, see here:
Amazon #AWS Tips and Gotchas – Part 1

For more posts in this series, see here:
Index of AWS Tips and Gotchas

20. Amazon AWS Tips and Gotchas – Part 10 – EFS (Elastic File System)

A big challenge when designing highly available web infrastructures has historically been how to provide a centralised store for static content without wasting resources.

A classic model for this is a pair of web / file servers with either rsync or Gluster to replicate the content between them. In the Windows world, this would be something like a WSFC (failover cluster) or perhaps something evil like a DFS replicated share. This means that not only are you wasting money on multiple virtual machines / instances just to serve file content, but you are also adding significant risk and complexity with the replication and failover between these machines.
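To make that pain concrete, here is a minimal sketch of the rsync half of that classic model, wrapped in Python purely for illustration (the hostnames, paths and SSH trust between the nodes are all hypothetical, and you would typically run something like this from cron on the primary node):

```python
#!/usr/bin/env python3
"""Naive static-content replication between two web servers via rsync.

Hypothetical sketch only: hosts, paths and passwordless SSH between the
nodes are assumptions. This is exactly the moving part EFS removes.
"""
import subprocess
import sys

SOURCE_DIR = "/var/www/static/"                 # trailing slash: sync contents
REPLICA = "web02.example.com:/var/www/static/"  # hypothetical second node

def replicate() -> int:
    # -a preserves permissions/times, -z compresses, --delete mirrors removals
    result = subprocess.run(
        ["rsync", "-az", "--delete", SOURCE_DIR, REPLICA],
        capture_output=True, text=True,
    )
    if result.returncode != 0:
        print(f"rsync failed: {result.stderr}", file=sys.stderr)
    return result.returncode

if __name__ == "__main__":
    sys.exit(replicate())
```

And that is just one direction between two nodes; failover, conflict handling and monitoring are all extra.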

Enter AWS EFS! At a simple level, EFS is basically an NFS (v4.1) share within the AWS cloud, which is replicated across all AZs in any one region. No need to manage and replicate between instances, or indeed to pay for EC2 instances just to create file shares! Great!
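As a rough illustration of how little is involved, this is roughly what provisioning looks like with boto3 (the region, subnet and security group IDs below are placeholders, and in real code you would poll until the file system reports itself available before creating mount targets):

```python
import boto3

efs = boto3.client("efs", region_name="eu-west-1")

# The creation token makes the call idempotent if you retry it
fs = efs.create_file_system(CreationToken="static-content-v1")
fs_id = fs["FileSystemId"]

# In real code, poll describe_file_systems() here until the
# LifeCycleState is "available" before creating mount targets.

# One mount target per AZ/subnet you want to mount from
efs.create_mount_target(
    FileSystemId=fs_id,
    SubnetId="subnet-0123456789abcdef0",      # placeholder
    SecurityGroups=["sg-0123456789abcdef0"],  # must allow inbound TCP 2049 (NFS)
)

# From an EC2 instance it is then a standard NFSv4.1 mount, e.g.:
#   sudo mount -t nfs4 -o nfsvers=4.1 <mount-target-ip>:/ /mnt/efs
```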

As this is still a relatively immature product, there are still a few “features” to be aware of:

  1. There is no native EFS backup solution (yet!). I’m sure this will come very soon; with re:Invent coming up, it wouldn’t surprise me if something were announced then. In the meantime, your main methods would be either to use Data Pipeline to back up to another EFS store, or to mount EFS on an EC2 instance and back it up using your own tools or scripts (a rough sketch of the latter follows after this list). I would be concerned about backing up EFS to EFS in the same region, as this is putting all your eggs in one basket. Hopefully, AWS will provide other target options in the future.
  2. There is no native encryption of EFS data as yet. If you need this right now, you could achieve it by simply pre-encrypting the data in your application before it is written to EFS. Alternatively, just hold your breath, as AWS have already stated that:
    “Amazon EFS does not currently provide the option to encrypt data at rest, but we will offer this option soon.”
  3. If you have less than about 100GB, then due to the way the performance burst credits work, you may not get the performance you need. The more you buy, the more performance you get, so don’t short-change your app for the sake of a few dollars (see the back-of-an-envelope numbers after this list)!

    “Amazon EFS uses a credit system to determine when file systems can burst. Each file system earns credits over time at a baseline rate that is determined by the size of the file system, and uses credits whenever it reads or writes data.”

    In early testing, it has been seen that very small file systems can lead to IO starvation and performance issues. I would recommend you start with 100GB as a minimum (subject to your workload requirements, of course!). This is still pretty cheap at only about $30–33 a month; a lot less than even a pair of EC2 instances, never mind the complexity reduction benefits. KISS!

    Of course, the more caching you can do on that content, e.g. using CloudFront as a CDN, the lower the IO requirements on your EFS store.

    For more info on performance see here:
    Amazon EFS Performance


  4. And finally… being NFS-based, this is obviously primarily aimed at Linux solutions. It would be nice to think that AWS will release an SMB version in the future… we can but hope!
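As promised in point 1 above, here is one way you might roll your own EFS backup from an EC2 instance: a deliberately naive sketch that walks the mounted share and copies everything to S3 with boto3. The mount path and bucket name are assumptions; a real job would do incremental copies, e.g. by comparing timestamps, or just shell out to `aws s3 sync`:

```python
import os
import boto3

EFS_MOUNT = "/mnt/efs"     # where the EFS share is mounted on this instance
BUCKET = "my-efs-backups"  # hypothetical S3 bucket, ideally in another region

s3 = boto3.client("s3")

def backup_to_s3() -> None:
    """Walk the EFS mount and upload every file, keyed by its relative path."""
    for root, _dirs, files in os.walk(EFS_MOUNT):
        for name in files:
            path = os.path.join(root, name)
            key = os.path.relpath(path, EFS_MOUNT)
            s3.upload_file(path, BUCKET, key)
            print(f"backed up {key}")

if __name__ == "__main__":
    backup_to_s3()
```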
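And the back-of-an-envelope numbers behind point 3: baseline throughput scales with the amount of data stored. The 50 MiB/s per TiB baseline and ~$0.30/GB-month price below are the published figures at the time of writing; check the current docs before relying on them:

```python
# EFS burst-credit arithmetic: baseline throughput is earned in
# proportion to file system size, so tiny file systems earn very little.
BASELINE_MIB_S_PER_TIB = 50   # published baseline rate at time of writing
PRICE_PER_GB_MONTH = 0.30     # approximate US pricing; varies by region

def baseline_throughput_mib_s(size_gib: float) -> float:
    """Baseline throughput earned by a file system of the given size."""
    return size_gib / 1024 * BASELINE_MIB_S_PER_TIB

for size_gib in (5, 20, 100, 1024):
    print(f"{size_gib:>5} GiB -> {baseline_throughput_mib_s(size_gib):5.2f} MiB/s "
          f"baseline, ~${size_gib * PRICE_PER_GB_MONTH:,.2f}/month")

# 100 GiB earns roughly 4.9 MiB/s of baseline throughput for about $30 a
# month, which is why very small file systems can starve for IO once any
# initial burst credits are exhausted.
```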

Thanks to my learned colleague Tom Ellis for the tip! As he says, “The size needs to be determined by the throughput needs, and not the storage capacity needs.”

Find more posts in this series here:
Index of AWS Tips and Gotchas

HOWTO: Process for Upgrading Veeam Backup & Replication 7 to 8

As a VMware vExpert, I am kindly provided with free licenses for Veeam Backup & Replication and Veeam ONE. I have been using Veeam B&R for the last year and have successfully used it to protect half a dozen of my key lab machines, doing one or two restores in that time.

The licenses we are provided with by Veeam are based on a 365-day evaluation, so my backup server was reaching its expiry date this week. I was running Veeam B&R version 7.x, so as part of the license renewal I also needed to upgrade the Veeam software from version 7 to 8.

This turned out to be an incredibly easy process, with only a couple of minor tweaks at the end to get things up and running. As you can see from the screenshots below, the installation and upgrade of Veeam is pretty much a next, next, finish affair.

It’s also worth mentioning that although I have documented the process for upgrading Veeam B&R, the process for upgrading Veeam ONE is pretty much the same.

As with any standard upgrade to software running in a virtual machine, I started by taking a snapshot of that machine.
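If you prefer to script that step, a minimal pyVmomi sketch along these lines would do it (the vCenter address, credentials and VM name are all placeholders for your own lab details):

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

# Placeholders: point these at your own vCenter and VM
ctx = ssl._create_unverified_context()  # lab only; validate certs in production
si = SmartConnect(host="vcenter.lab.local",
                  user="administrator@vsphere.local",
                  pwd="changeme", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    vm = next(v for v in view.view if v.name == "veeam-br01")
    view.DestroyView()

    # Quiesced, memory-less snapshot before touching the installer
    WaitForTask(vm.CreateSnapshot_Task(
        name="pre-veeam8-upgrade",
        description="Before upgrading Veeam B&R 7 to 8",
        memory=False, quiesce=True))
finally:
    Disconnect(si)
```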

The next step was to mount the Veeam ISO file in the virtual machine’s operating system and start the install wizard.

Of course I read every single word of the license agreement.

The installer recognised the previous version of the software and offered to upgrade to the latest version automatically.

I then pointed the install wizard to the evaluation license key provided to me by the folks at Veeam.

A number of basic checks are completed to ensure that the appropriate prerequisites are in place.

Next, you enter the service account for Veeam. Obviously, this being a home lab and me being incredibly lazy, it is the local machine administrator in this case. In any production environment this should of course be a dedicated account.

The existing SQL express database instance is selected.

Veeam recognises that the instance contains an existing database which can be upgraded.

The installer is now ready to run.

After about five minutes, the installation is complete.

After a quick reboot, the server is back up and running and I log back in. When I launch Veeam B&R 8 for the first time, it recognises that some server components still need to be upgraded.

Again this is just a next, next, finish setup.

The only issues I saw after the upgrade were a couple of VMs which failed their backups. After a reboot of said machines, everything was right as rain and backups ran as normal.

Once I was sure everything was working properly, and had run a couple of successful backups, I deleted the snapshot taken at the start of the process, committing the changes.

Conclusion
Overall the process was very simple and very slick, exactly what you want from a software upgrade. Particularly impressive considering this was a full version upgrade, not just a point release. You can see why their marketing department came up with the tagline “It Just Works”!

Although most organisations I have worked for in the past have generally used more traditional backup vendors, Veeam is definitely enterprise-ready and well worth considering. The only drawback is that if you run a mixed environment of physical and virtual machines, you may require multiple backup platforms; even then, Veeam Endpoint can cover this in some scenarios, AFAIK.
