Archive for VMware

You had me at Tiered Non-Volatile Memory!

Memory isn’t cheap! Despite the falling cost and increasing capacity of DRAM DIMMs, it’s still damned expensive compared to most non-volatile media on a price-per-GB basis. What’s more frustrating is that you often buy all of this expensive RAM, assign it to your applications, and then find through detailed monitoring that only a relatively small percentage of it is actually being actively used.

For many years we have had technologies such as paging, which let you maximise the use of your physical RAM by writing the least-used pages out to disk, freeing up RAM for processes with current memory demand. The problem with paging is that it is sometimes unreliable, and when you do actually need to get a page back, retrieving it from disk can be multiple orders of magnitude slower.

Worse still, if you are running a workload such as virtual machines and the underlying host becomes memory constrained, the hypervisor often lacks sufficient visibility of guest memory utilisation, and will simply swap out effectively random memory pages to a swap file. This can obviously have a significant impact on virtual machine performance.

More and more applications are being built to run in memory these days, from Redis to Varnish, Hortonworks to MongoDB. Even Microsoft got on the bandwagon with SQL Server 2014 in-memory OLTP.

One of the companies we saw at Storage Field Day 9, Plexistor, told us that they can offer both tiered POSIX storage and tiered non-volatile memory through a single software stack.

The POSIX option can effectively be thought of as a bit like a non-volatile, tiered RAM disk. Pretty cool, but not massively unique, as RAM disks have been around for years.

The element which really interested me was the latter option: effectively a tiered memory driver which can present RAM to the OS, but in reality tier it between NVDIMMs, SSDs and HDDs depending on how hot or cold the pages are! They will also be able to take advantage of newer byte-addressable technologies such as 3D XPoint as they come onto the market, making it even more awesome!
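The tiering idea itself is simple to sketch. The toy Python below is purely my own illustration of the concept, not Plexistor's actual implementation; the tier names, hit counters and thresholds are all invented for the example. It tracks how often each page is touched and promotes or demotes pages between tiers accordingly:

```python
from collections import defaultdict

# Illustrative tiers, fastest first (names are hypothetical)
TIERS = ["NVDIMM", "SSD", "HDD"]

class TieredMemory:
    """Toy model: new pages land on the fastest tier, then migrate
    up or down based on how often they are touched."""

    def __init__(self):
        self.tier = {}             # page -> current tier name
        self.hits = defaultdict(int)

    def touch(self, page):
        """Record an access; unknown pages start on the fastest tier."""
        self.hits[page] += 1
        self.tier.setdefault(page, TIERS[0])

    def rebalance(self, hot_threshold=3):
        """Promote hot pages one tier up, demote cold pages one tier down."""
        for page, tier in self.tier.items():
            i = TIERS.index(tier)
            if self.hits[page] >= hot_threshold:
                self.tier[page] = TIERS[max(i - 1, 0)]
            else:
                self.tier[page] = TIERS[min(i + 1, len(TIERS) - 1)]
            self.hits[page] = 0    # temperature decays each cycle
```

A real driver would of course work on physical page mappings and migration costs rather than a dictionary, but the hot/cold promotion loop is the essence of what a tiered memory product has to do.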


Plexistor Architecture

All of this is done through the simple addition of their NVM file system (i.e. a device driver) on top of the pmem and bio drivers, and it is compatible with most versions of Linux running reasonably up-to-date kernels.

It’s primarily designed to work with some of the Linux-based, memory-intensive apps mentioned above, but it will also work with more traditional workloads, such as MySQL and the KVM hypervisor.

Plexistor define their product as “Software Defined Memory” aka SDM. An interesting term which is jumping on the SDX bandwagon, but I kind of get where they’re going with it…


Software Defined Memory!

One thing to note with Plexistor is that they actually have two flavours of this product: one based on NVRAM to provide a persistent store, and one which is non-persistent but can be run on cloud infrastructures such as AWS. If you need data persistence with the latter, you will have to handle it at the application layer, or risk losing data.

If you want to find out a bit more about them, you can find their Storage Field Day presentation here:
Plexistor Presents at Storage Field Day 9

Musings…
As a standalone product, I have a sneaking suspicion that Plexistor may not have the longevity and scope which they might gain if they were acquired by a large vendor and integrated into existing products. Sharon Azulai has already sold one startup at a relatively early stage (Tonian, which was sold to Primary Data), so I suspect he would not be averse to the concept.

Although the code has been written specifically for the Linux kernel, they have already indicated that it would be possible to develop the same driver for VMware! As such, I think it would be a really interesting idea for VMware to consider acquiring them and integrating the technology into ESXi. It’s generally recognised as a universal truth that you run out of memory before CPU on most vSphere solutions. Moreover, when looking in the vSphere console we often see that although a significant amount of memory is allocated to VMs, often only a small amount is actually active RAM.

The use of Plexistor technology with vSphere would enable VMware both to provide an almost infinite pool of RAM per host for customers, and to significantly improve upon the current vswp process by ensuring hot memory pages always stay in RAM while cold pages are tiered out to flash.


The homelab nerd in me also imagines an Intel NUC with 160GB+ of addressable RAM per node! 🙂

Of course the current licensing models for retail customers favour the “run out of RAM first” approach, as it sells more per-CPU licences; however, I think in the long term VMware will likely move to a subscription-based model, probably similar to that used by service providers (i.e. based on RAM). If this ends up being the approach, then VMware could offer a product which saves their customers further hardware costs whilst maintaining their ESXi revenues. Win-win!

Further Reading
One of the other SFD9 delegates had their own take on the presentation we saw. Check it out here:

Disclaimer/Disclosure: My flights, accommodation, meals, etc, at Storage Field Day 9 were provided by Tech Field Day, but there was no expectation or request for me to write about any of the vendors products or services and I was not compensated in any way for my time at the event.

Pure Storage – Now available in Petite

Pure Storage are probably one of the best-known “all-flash” vendors in the industry today, but one of the things which has set the bar a little high for smaller organisations wanting a slice of this speedy action is the price.

Well, the good news is that for customers with smaller projects or simply smaller budgets, a Pure AFA is now within reach!

At Pure Accelerate today, along with their new FlashStack and FlashBlade, Pure announced a mini version of their ever-popular M-series arrays, the FlashArray //M10.

This new array is fully featured, comes in the same form factor and with all of the same software, support, etc. as the bigger models (//M20, //M50, //M70), but at a lower entry point of <$50k list, including the first year of support. Not only that, but it can be non-disruptively upgraded in place to the larger controller models, all the way up to the //M70, so it is possible to buy in at the cheapest level and upgrade later as business needs dictate.

The main differences between the //M10 and the other Pure models are the lack of expansion ports on the controllers (you need at least an //M20 if you want to add shelves) and reduced compute / DRAM capacity.


Specs are pretty much in line with the rest of the arrays in the range, with the //M10 coming in at 5TB / 10TB raw. Depending on your workloads, after dedupe and compression this could be up to the stated usable capacity (12.5TB / 25TB). Mileage, as always, may vary! This is the perfect quantity for many use cases, including small to medium-sized VDI environments, critical databases, etc. I suspect the //M10 may even find its way into some larger enterprises whose internal processes often dictate that every project has its own budget and its own pool of dedicated resources!

Lastly, and possibly most importantly to small businesses who may not have full time staff dedicated to managing storage, Pure’s monitoring and upgrade services are all included as well, via Pure1.

I think this is a positive step for the company and will help to engage with their customer base earlier in the organisational lifecycle, and when combined with their unique and very sticky Evergreen Storage offering, it will enable them to keep customers for life!

Disclaimer/Disclosure: My accommodation, meals and event entry to Pure Accelerate were provided by Pure Storage, and my flights were provided by Tech Field Day, but there was no expectation or request for me to write about any of the products or services and I was not compensated in any way for my time at the event.

Looking Forward to Storage Field Day 9 (#SFD9)

Storage Field Day

So for those of you who love to nerd out on storage like I do, you have probably already heard of the awesome streaming events put on by Stephen Foskett and the crew from Tech Field Day, otherwise known as Storage Field Day. These have grown so popular that Stephen is having to put on extra events just to cater for demand, which I think speaks volumes as to their efficacy and indeed quality!

For those not yet indoctrinated, these events involve taking a group of around a dozen storage and technology delegates to visit a number of different startups (think Pure, NexGen, Coho, etc) and more established companies (think Intel!) to talk about the latest things going on both at those organisations and in the industry in general. Each session lasts a couple of hours, but is generally broken down into several bite sized chunks for consumption at your leisure.

As a stream viewer you get the opportunity to learn about your favourite vendors’ latest funky stuff and watch them answer questions about all the things you probably wanted to know but never got the chance to ask. It is also a great way to get your head around an unfamiliar technology or vendor. Lastly, if you watch live, you can also ask questions via Twitter for the delegates to put to the presenters.

As a delegate this goes to a whole new level as you get to spend almost an entire week mahoossively geeking out on tech, learning from some of the smartest people in the tech industry, and meeting with the senior people at some of the industry’s best-known companies. I find it generally safest just to wear multiple layers to avoid any embarrassing nerdgasms! 😉

So with that in mind I am really chuffed to have been invited back to attend Storage Field Day 9, next month (16th-18th March) in San Jose!

Not all of the companies have been announced as yet, but we already know that the likes of Cohesity, Intel, VMware & Violin Memory will be in attendance. More will be confirmed over the next couple of weeks, and having seen the provisional list I can tell you it is definitely going to be a great event!


Needless to say the lineup of delegates is awesome as usual, with many well known bloggers from the EU, US and APAC. Make sure you check them out and follow the crew on twitter if you are so inclined. Most delegates post their opinions around the vendors and tech both during and after the event, so make sure you check out their blog feeds. For example, here is mine:

http://www.tekhead.org/blog/feed/

If you want to tune in live, simply go to http://techfieldday.com from 16th-18th March (PST) or catch up with the recordings on YouTube later.

Finally, be warned my Twitter stream does get rather busy during the event, so feel free to temporarily mute me if need be! 😉

Quick Fix for “The task was canceled by a user” when deploying OVA in vCenter 6

The task was cancelled by a user

So I came across a very odd vCenter bug today when trying to deploy an OVA file on vSphere 6.0, specifically the latest CoreOS image.

The import was repeatedly failing with the same error message.

What was more frustrating was the fact that the error message was “The task was cancelled by a user”, which it blatantly was not!

Error log example below:

OVA Import Errors

A quick bit of testing and Googling later, I came across an article by my good friend Ather Beg from the LonVMUG, who had a very simple fix for the same issue in vSphere 5.5.

  1. Install 7-zip or a similar archiving tool
  2. Extract the OVA file using 7-zip into its component parts
  3. Import into vCenter, selecting the OVF file for the import target

That’s it – simples!
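For the curious, the reason this works is that an OVA is just a tar archive wrapping the OVF descriptor, manifest and disk images, which is why 7-zip (or any archiver) can unpack it. As a quick illustration of step 2, the Python sketch below does the same extraction; the function name is my own, not part of any VMware tooling:

```python
import tarfile
from pathlib import Path

def extract_ova(ova_path: str, dest: str = ".") -> str:
    """Unpack an OVA (a plain tar archive) and return the path to
    the .ovf descriptor, ready to select in the deploy wizard."""
    with tarfile.open(ova_path) as tar:
        tar.extractall(dest)  # archive from a trusted source assumed
        for name in tar.getnames():
            if name.endswith(".ovf"):
                return str(Path(dest) / name)
    raise FileNotFoundError("no .ovf descriptor found in " + ova_path)
```

Point the vCenter deploy wizard at the returned .ovf file (the extracted .vmdk disks need to sit alongside it) and the import should go through cleanly.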

Success!

What’s really weird is that after importing the OVF successfully, I then went back and imported the OVA, and it worked fine!

Very strange indeed…
