Starting in January 2013, my Intel NUC series has now reached the heady heights of double digits over the past few years, so I figured it might be handy to make the posts a bit easier to find!
The clue is in the name! Info on Synology storage and Cisco SG300-10 switches. I have subsequently added a further 3 Synology NAS boxes and a Cisco SG300-20!
Memory isn’t cheap! Despite the falling costs and increasing sizes of DRAM DIMMs, it’s still damned expensive compared to most non-volatile media on a price-per-GB basis. What’s more frustrating is that you often buy all of this expensive RAM, assign it to your applications, and find later, through detailed monitoring, that only a relatively small percentage is actually being actively used.
For many years we have had technologies such as paging, which allow you to maximise the use of your physical RAM by writing out the least used pages to disk, freeing up RAM for services with current memory demand. The problem with paging is that it is sometimes unreliable, and when you actually do need to get a page back, retrieving it from disk can be multiple orders of magnitude slower.
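To put "orders of magnitude" in context, here is a rough Python sketch of my own (not from any vendor) comparing a pass over data already resident in RAM with the same pass after asking the kernel to evict the file from its page cache. It is Linux-only (due to posix_fadvise), and the exact numbers will vary wildly with hardware and caching, but the gap is usually dramatic:

```python
import os
import time

SIZE = 64 * 1024 * 1024  # 64 MiB test buffer / file

buf = bytearray(os.urandom(SIZE))  # a copy of the data resident in RAM

path = "/tmp/latency_test.bin"
with open(path, "wb") as f:
    f.write(buf)
    f.flush()
    os.fsync(f.fileno())  # make sure the data is actually on disk

fd = os.open(path, os.O_RDONLY)
# Linux-only: ask the kernel to drop this file from the page cache,
# so the next read genuinely comes from disk rather than from RAM.
os.posix_fadvise(fd, 0, 0, os.POSIX_FADV_DONTNEED)

t0 = time.perf_counter()
_ = bytes(buf)  # one full pass over the in-RAM copy
ram_s = time.perf_counter() - t0

t0 = time.perf_counter()
_ = os.read(fd, SIZE)  # one full pass pulled back from disk
disk_s = time.perf_counter() - t0
os.close(fd)
os.unlink(path)

print(f"RAM pass:  {ram_s:.4f}s")
print(f"Disk pass: {disk_s:.4f}s (~{disk_s / ram_s:.0f}x slower)")
```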
Worse still, if you are running a workload such as virtual machines and the underlying host becomes memory constrained, the hypervisor often has insufficient visibility of guest memory utilisation, and as such will simply swap out arbitrary memory pages to a swap file. This can obviously have a significant impact on virtual machine performance.
More and more applications are being built to run in memory these days, from Redis to Varnish, and Hortonworks to MongoDB. Even Microsoft got on the bandwagon with SQL Server 2014 In-Memory OLTP.
One of the companies we saw at Storage Field Day 9, Plexistor, told us that they can offer both tiered POSIX storage and tiered non-volatile memory through a single software stack.
The POSIX option could effectively be thought of as a bit like a non-volatile, tiered RAM disk. Pretty cool, but not massively unique, as RAM disks have been around for years.
The element which really interested me was the latter option: effectively a tiered memory driver which can present RAM to the OS, but in reality tiers it between NVDIMMs, SSDs and HDDs depending on how hot or cold the pages are! They will also be able to take advantage of newer bit-addressable technologies such as 3D XPoint as they come onto the market, making it even more awesome!
Plexistor Architecture
All of this is done through the simple addition of their NVM file system (i.e. device driver) on top of the standard pmem and bio drivers, and it is compatible with most versions of Linux running reasonably up-to-date kernels.
It’s primarily designed to work with some of the Linux-based, memory-intensive apps mentioned above, but it will also work with more traditional workloads, such as MySQL and the KVM hypervisor.
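As a mental model of the tiering idea, here is a toy Python sketch of my own (emphatically not Plexistor's actual algorithm): a small "fast" tier holds the most recently touched pages, and the coldest pages get demoted to a larger "slow" tier:

```python
from collections import OrderedDict

class TieredStore:
    """Toy two-tier page store: hot pages in 'fast', cold pages in 'slow'."""

    def __init__(self, fast_capacity):
        self.fast = OrderedDict()   # hot pages in LRU order (stand-in for RAM/NVDIMM)
        self.slow = {}              # cold pages (stand-in for SSD/HDD)
        self.fast_capacity = fast_capacity

    def read(self, page_id):
        if page_id in self.fast:
            self.fast.move_to_end(page_id)      # refresh recency
            return self.fast[page_id]
        data = self.slow.pop(page_id)           # "page in" from the slow tier
        self._install(page_id, data)
        return data

    def write(self, page_id, data):
        self.slow.pop(page_id, None)
        self._install(page_id, data)

    def _install(self, page_id, data):
        self.fast[page_id] = data
        self.fast.move_to_end(page_id)
        while len(self.fast) > self.fast_capacity:
            # Demote the least recently used page to the slow tier.
            cold_id, cold_data = self.fast.popitem(last=False)
            self.slow[cold_id] = cold_data

store = TieredStore(fast_capacity=2)
for i in range(4):
    store.write(i, f"page-{i}")
store.read(2)  # keeps page 2 hot
print(sorted(store.fast), sorted(store.slow))  # [2, 3] [0, 1]
```

The real product obviously does this at the block/page level in the kernel with far smarter heat tracking, but the demote-the-coldest principle is the same.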
Plexistor define their product as “Software Defined Memory”, aka SDM. It’s an interesting term which jumps on the SDX bandwagon, but I kind of get where they’re going with it…
Software Defined Memory!
One thing to note with Plexistor is that they actually have two flavours of the product: one which uses NVRAM to provide a persistent store, and one which is non-persistent but can be run on cloud infrastructures such as AWS. If you need data persistence with the latter, you will have to handle it at the application layer, or risk losing data.
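For illustration, application-layer persistence on top of a non-persistent memory tier usually boils down to something like a write-ahead log. Here is a minimal, hypothetical Python sketch of the pattern (the file path and record format are my own inventions):

```python
import json
import os

LOG = "/var/tmp/app_state.log"  # hypothetical log location on durable storage
state = {}

def apply_and_log(key, value):
    """Durably record the change first, then update in-memory state."""
    with open(LOG, "a") as f:
        f.write(json.dumps({"k": key, "v": value}) + "\n")
        f.flush()
        os.fsync(f.fileno())  # force the record onto stable storage
    state[key] = value

def recover():
    """Rebuild in-memory state by replaying the log after a crash/restart."""
    if os.path.exists(LOG):
        with open(LOG) as f:
            for line in f:
                rec = json.loads(line)
                state[rec["k"]] = rec["v"]

recover()
apply_and_log("counter", state.get("counter", 0) + 1)
print(state)
```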
Musings…

As a standalone product, I have a sneaking suspicion that Plexistor may not achieve the longevity and scope which they might gain if they were acquired by a large vendor and integrated into existing products. Sharon Azulai has already sold one startup at a relatively early stage (Tonian, which was sold to Primary Data), so I suspect he would not be averse to the concept.
Although the code has been written specifically for the Linux kernel, they have already indicated that it would be possible to develop the same driver for VMware! As such, I think it would be a really interesting move for VMware to consider acquiring them and integrating the technology into ESXi. It’s generally recognised as a universal truth that you run out of memory before CPU on most vSphere solutions. Moreover, when looking in the vSphere console we often see that although a significant amount of memory is allocated to VMs, only a small amount is actually active.
The use of Plexistor’s technology with vSphere would enable VMware both to provide an almost infinite pool of RAM per host for customers, and to significantly improve upon the current vswp process by ensuring hot memory blocks always stay in RAM while cold blocks are tiered out to flash.
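If you want to see the allocated-versus-active gap in your own environment, here is a hedged pyVmomi sketch (the hostname and credentials are placeholders, and vCenter’s "active" figure is only an estimate):

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only: skip certificate validation
si = SmartConnect(host="vcenter.lab.local",            # placeholder vCenter
                  user="administrator@vsphere.local",  # placeholder credentials
                  pwd="changeme",
                  sslContext=ctx)
try:
    view = si.content.viewManager.CreateContainerView(
        si.content.rootFolder, [vim.VirtualMachine], recursive=True)
    for vm in view.view:
        alloc_mb = vm.summary.config.memorySizeMB or 0
        active_mb = vm.summary.quickStats.guestMemoryUsage or 0  # vCenter's "active" estimate
        if alloc_mb:
            print(f"{vm.name}: {active_mb}/{alloc_mb} MB active "
                  f"({100 * active_mb / alloc_mb:.0f}%)")
    view.Destroy()
finally:
    Disconnect(si)
```

If the output looks anything like the consoles I’ve seen, the active figures will be a small fraction of the allocated ones, which is exactly the gap this kind of tiering targets.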
The homelab nerd in me also imagines an Intel NUC with 160GB+ of addressable RAM per node! 🙂
Of course, the current licensing model for retail customers favours the “run out of RAM first” approach, as it sells more per-CPU licenses. However, I think that in the long term VMware will likely move to a subscription-based model, probably similar to that used by service providers (i.e. based on RAM). If that ends up being the approach, then VMware could offer a product which saves their customers further hardware costs whilst maintaining their ESXi revenues. Win-win!
Further Reading

One of the other SFD9 delegates had their own take on the presentation we saw. Check it out here:
Disclaimer/Disclosure: My flights, accommodation, meals, etc., at Storage Field Day 9 were provided by Tech Field Day, but there was no expectation or request for me to write about any of the vendors’ products or services, and I was not compensated in any way for my time at the event.
Having successfully completed the VCP6-DCV Delta Exam (2V0-621D) this week, I thought it would be worthwhile jotting down a few thoughts on the exam, and noting the resources I used to prepare for it.
I’ve previously completed the VCP3, VCP4 and VCP5 “DCV” exams; however, being specifically a delta exam, this one was a little different. The exam primarily covers the differences between vSphere 5 and vSphere 6, with a handful of seemingly more general questions.
For summary impressions of the exam (i.e. the TLDR), jump to the end of this article! 🙂
Preparation

I used the following resources to prepare for the exam:
The VCP6 Delta Exam Blueprint. I never truly appreciated the usefulness of this document until the last few years, but I now use it as my primary study guide for all VMware exams. I found the best approach was to copy the entire list of topics into a document (in my case OneNote) and highlight all of the key subject areas I needed to study up on.
Pluralsight Training Courses. I have been a big advocate and user of Pluralsight (and their predecessor TrainSignal) video training for many years. Although there is no specific course aimed at the delta exam, I simply dipped in and out of the training to cover the areas already identified above from the blueprint, where my knowledge was weakest.
What’s New in VMware vSphere 6 from David Davis. This is a great summary course from David just covering some of the basic new features in a couple of hours.
A selection of videos from the following intermediate vSphere 6 courses from Greg Shields. The names of the subsections and videos are mostly quite nicely linked to the title sections in the blueprint (which is handy):
vBrownBag VCP6 sessions. The guys and gals at vBrownBag are truly awesome, as is the content they produce on a weekly basis! Most recently they have done a series on the VCP6-DCV exam, split by section, so again, if you use the blueprint as your guide to what you need to study, you can simply dip in and out of the video sessions as required. A couple of example sessions I watched were:
The following Hands on Labs were on my list as potentially very useful, but I simply ran out of time to do them:
HOL-SDC-1627 – VVol, Virtual SAN & Storage Policy-Based Management
HOL-SDC-1604 – vSphere Performance Optimization
HOL-CHG-1695 – vSphere 6 Challenge Lab
HOL-SDC-1608 – Virtual SAN 6 from A to Z
My Intel NUC Nanolab homelab. I completed an upgrade from vSphere 5.5 to 6.0 in my homelab and messed around with a load of the new features. I have documented the upgrade process in a post which I will publish soon, but the best news for me was that the base vSphere 6 image now seems to include all of the required drivers, so it no longer needs additional VIBs! 🙂
The Exam

The exam itself was different to any previous VCP exam I’ve done. Because the scope of the exam was much narrower, the questions seemed to me to go significantly deeper, with a few really tricky ones thrown in there.
Overall, if I were to do it again (and when it comes time to do the VCP7 in a few years), I would probably just take the full VCP exam rather than the delta. That way you can be sure of a decent number of the easy peasy questions on stuff you’ve been doing for years, as well as the new material you may not know quite as well.
Obviously having not done the full VCP6 exam I can’t say this for sure, but I would say it’s a pretty good bet.
I have been running a variety of Intel NUC nodes in my vSphere homelab over the past 3 years now, including the D34010WYKH, DC3217IYE & DC53427HYE.
In that time I have unfortunately seen more than my fair share of USB drive failures and corruptions, generally with an error which looks something like this:
These are not cheap-and-nasty or freebie USB drives, so I would not normally expect to see this rate of failure. The error only appears when you reboot the host, with the startup bombing out at the beginning of the hypervisor launch. I sometimes managed to recover a stick by replacing the corrupted files with copies from another instance, but more often than not I needed to rebuild and restore the image. An unnecessary pain in the rear!
The Root Cause

The NUC case can become quite warm during normal operation, with or without the fans spinning up, and I have come to believe that the main cause of the corruption is the USB stick itself getting too hot and eventually failing. Having pulled a stick from a recently shut down node, they are really quite hot to the touch. You don’t actually see the symptom until a reboot, because the ESXi image runs in memory and is only loaded from the USB stick at boot time.
The Solution

The solution is really quite simple: I purchased a number of 12cm (5 inch) USB 2.0 extender cables on eBay for just 99p each (including delivery!).
These keep the USB stick hanging clear of the NUC chassis, so the heat is not transferred into the flash drive. Since making this change I have not seen any further corruption. Job done!