Tag Archive for HDD

It’s a Geek Life! HDD Watch Review

So, why an HDD watch review on a tech blog?

Well, I’m not a “watch guy”, meaning I don’t have a collection of 50 varied and expensive watches; however, I do consider myself someone who appreciates the skill that goes into producing a timepiece. Regardless of what many millennials will tell you, I also believe you should never be without one, and a mobile phone simply will not do… 🙂

It was my birthday recently, and based on a none-too-subtle hint from me, my wife very kindly gave me the gift of an HDD watch! Needless to say, I was extremely chuffed with it, so thought I would provide a mini review here.

I originally heard about these very funky (yes, read: nerdy) watches via the biggest watch aficionado I know IRL, Stephen Foskett, who has an extensive collection and loves anything which goes tick-tock! He even runs his own watch blog, Grail Watch, which I recommend for any horologists (if that is the correct term?).

The original run of 500 watches came from an IndieGoGo campaign in 2014. In March this year, Jean Jerome, the creator of the HDD Watch, opened up his own website for anyone who missed out the first time.


It’s more than 8-bits!

The watch itself is of a very decent build quality. The HDD Microdrive (a Hitachi 4GB unit, to be precise) has been encased in a custom (very shiny!) stainless steel enclosure, which provides both shock and water protection. A Miyota GL20 quartz movement provides accuracy to +/- 20 seconds per month. Mine seems to be achieving something within this window, but with no second hand it’s hard to tell! I’ve also caught it once or twice on hard objects and ne’er a scratch has been seen, so I would definitely attest to the build quality.


When did you last stick an HDD in a glass of water and expect it to keep working?!

Most of my life I have been used to wearing segmented metal watch straps with butterfly clasps, which I find to be the most comfortable and secure. When I originally received the watch I did consider replacing the strap, which is rubberised (neoprene) and modelled on a PCB, with a segmented metal one. Replacement straps are available from the vendor, including a metal expansion strap, but it turns out that the PCB strap is one of the features which most draws the eye, and people often comment on it first! It even drew the eye of one of my interviewers when I was interviewing for my recent change of role, which I don’t think harmed my chances! 🙂


There were only two negatives I would highlight about the watch: one is a “bug” and the other is a “missing” feature!

  • The bug is that there is a tiny piece of dust on the inside of the glass on my particular watch, which is then reflected in the surface of the platter as well. It’s just a bit of an annoyance, and I am hoping I will be able to clean it out whenever I eventually have to replace the battery.
  • The feature I wish the watch had is a date window. I didn’t realise how often I actually use this feature of my current watch until I had to go without it! I fully understand why one isn’t included, however, as it would spoil the look of the platter, and there is nowhere else on the watch for a date to comfortably sit, even if a mechanism could be found which would allow for remote placement of this element.

Closing the Wrist Strap

I hope this HDD watch review has been of some interest!

Overall, if you want the ultimate in Geek Chic, I highly recommend the HDD watch from http://hddwatches.com. A brilliant purchase and a unique piece of history which, at only €150, is well worth the purchase price IMHO!


You had me at Tiered Non-Volatile Memory!

Memory isn’t cheap! Despite the falling costs and increasing sizes of DRAM DIMMs, it’s still damned expensive compared to most non-volatile media on a price-per-GB basis. What’s more frustrating is that you often buy all of this expensive RAM, assign it to your applications, and then find through detailed monitoring that only a relatively small percentage is actually being actively used.

For many years we have had technologies such as paging, which allow you to maximise the use of your physical RAM by writing out the least used pages to disk, freeing up RAM for services with current memory demand. The problem with paging is that it is sometimes unreliable, and when you do actually need to get a page back, retrieving it from disk can be multiple orders of magnitude slower (roughly speaking, a DRAM access is measured in nanoseconds, while a read from spinning disk is measured in milliseconds).

Worse still, if you are running a workload such as virtual machines and the underlying host becomes memory constrained, the hypervisor often does not have sufficient visibility of how memory is actually being used inside the guests, and as such will simply swap out random memory pages to a swap file. This can obviously have a significant impact on virtual machine performance.

More and more applications are being built to run in memory these days, from Redis to Varnish, Hortonworks to MongoDB. Even Microsoft got on the bandwagon with SQL Server 2014 in-memory OLTP.

One of the companies we saw at Storage Field Day 9, Plexistor, told us that they can offer both tiered POSIX storage and tiered non-volatile memory through a single software stack.

The POSIX option could effectively be thought of as a bit like a non-volatile, tiered RAM disk. Pretty cool, but not massively unique, as RAM disks have been around for years.

The element which really interested me was the latter option: effectively a tiered memory driver which can present RAM to the OS, but in reality tier it between NVDIMMs, SSDs and HDDs depending on how hot / cold the pages are! They will also be able to take advantage of newer bit-addressable technologies such as 3D XPoint as they come onto the market, making it even more awesome!
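
Just to illustrate the general idea, here is a back-of-a-napkin Python sketch of a hot/cold tiering policy. To be clear, this is purely my own illustration, not Plexistor’s actual algorithm; the tier names, thresholds and page numbers are all made up for the example.

# Purely illustrative sketch of hot/cold page tiering - NOT Plexistor's code.
# Tier names, thresholds and page numbers are invented for the example.
import time

TIERS = ["NVDIMM", "SSD", "HDD"]  # fastest to slowest

class TieringSketch:
    def __init__(self, hot_after=5, cold_after=30.0):
        self.hot_after = hot_after      # promote after this many recent hits
        self.cold_after = cold_after    # demote if idle for this many seconds
        self.pages = {}                 # page_id -> {"tier", "hits", "last_access"}

    def touch(self, page_id):
        """Record an access and promote the page one tier if it has become hot."""
        page = self.pages.setdefault(
            page_id, {"tier": "HDD", "hits": 0, "last_access": time.time()}
        )
        page["hits"] += 1
        page["last_access"] = time.time()
        if page["hits"] >= self.hot_after and page["tier"] != "NVDIMM":
            page["tier"] = TIERS[TIERS.index(page["tier"]) - 1]  # move up one tier
            page["hits"] = 0

    def demote_cold_pages(self):
        """Periodically push idle pages back down towards HDD."""
        now = time.time()
        for page in self.pages.values():
            if now - page["last_access"] > self.cold_after and page["tier"] != "HDD":
                page["tier"] = TIERS[TIERS.index(page["tier"]) + 1]

# Example: hammer page 42 and it climbs towards NVDIMM.
tiers = TieringSketch()
for _ in range(10):
    tiers.touch(42)
print(tiers.pages[42]["tier"])  # "NVDIMM"

Hammer a page often enough and it climbs towards the fastest tier; leave it idle and it gradually sinks back down to spinning disk.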


Plexistor Architecture

All of this is done through the simple addition of their NVM file system (i.e. device driver) on top of the pmem and bio drivers, and it is compatible with most Linux distributions running reasonably up-to-date kernel versions.
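
From an application’s point of view, the attraction of this kind of file system is that you can simply memory-map a file sitting on it and treat it like RAM. Below is a minimal Python sketch of that pattern; the mount point is purely hypothetical (I obviously don’t have the Plexistor driver to hand), so substitute wherever your NVM-backed file system actually lives.

# Minimal sketch of the mmap pattern a memory-hungry app might use on a
# pmem/DAX-backed file system. The path is an assumption - substitute your
# own mount point.
import mmap

PATH = "/mnt/nvm/scratch.bin"   # hypothetical mount point
SIZE = 4096                     # one page, for the sake of the example

# Pre-size the file, then map it into the process address space.
with open(PATH, "w+b") as f:
    f.truncate(SIZE)
    with mmap.mmap(f.fileno(), SIZE) as buf:
        buf[0:13] = b"hello, pmem!\n"   # ordinary in-memory writes...
        buf.flush()                      # ...flushed back to the persistent store

print(open(PATH, "rb").read(13))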

It’s primarily designed to work with some of the Linux-based, memory-intensive apps mentioned above, but it will also work with more traditional workloads, such as MySQL and the KVM hypervisor.

Plexistor define their product as “Software Defined Memory” aka SDM. An interesting term which is jumping on the SDX bandwagon, but I kind of get where they’re going with it…


Software Defined Memory!

One thing to note with Plexistor is that they actually have two flavours of the product: one which is based on the use of NVRAM to provide a persistent store, and one which is non-persistent but can be run on cloud infrastructures such as AWS. If you need data persistence with the latter, you will have to handle it at the application layer, or risk losing data.

If you want to find out a bit more about them, you can find their Storage Field Day presentation here:
Plexistor Presents at Storage Field Day 9

Musings…
As a standalone product, I have a sneaking suspicion that Plexistor may not have the longevity and scope they might gain if they were acquired by a large vendor and integrated into existing products. Sharon Azulai has already sold one startup in its relatively early stages (Tonian, which was sold to Primary Data), so I suspect he would not be averse to the concept.

Although the code has been written specifically for the Linux kernel, they have already indicated that it would be possible to develop the same driver for VMware! As such, I think it would be a really interesting idea for VMware to consider acquiring them and integrating the technology into ESXi. It’s generally recognised as a universal truth that you run out of memory before CPU on most vSphere solutions. Moreover, when looking in the vSphere console we often see that although a significant amount of memory is allocated to VMs, only a small amount is actually active RAM.

The use of Plexistor technology with vSphere would enable VMware both to provide an almost infinite pool of RAM per host for customers, and to significantly improve upon the current vswp process by ensuring hot memory blocks always stay in RAM while cold blocks are tiered out to flash.


The homelab nerd in me also imagines an Intel NUC with 160GB+ of addressable RAM per node! 🙂

Of course, the current licensing models for retail customers favour the “run out of RAM first” approach, as it sells more per-CPU licenses; however, I think in the long term VMware will likely move to a subscription-based model, probably similar to that used by service providers (i.e. based on RAM). If that ends up being the approach, then VMware could offer a product which saves their customers further hardware costs whilst maintaining their ESXi revenues. Win-Win!

Further Reading
One of the other SFD9 delegates had their own take on the presentation we saw. Check it out here:

Disclaimer/Disclosure: My flights, accommodation, meals, etc., at Storage Field Day 9 were provided by Tech Field Day, but there was no expectation or request for me to write about any of the vendors’ products or services, and I was not compensated in any way for my time at the event.

Secondary can be just as important as Primary

There can be little doubt these days that the future of the storage industry for primary transactional workloads is All Flash. Finito, that ship has sailed, the door is closed, the game is over, [Insert your preferred analogy here].

Now I can talk about the awesomeness of All Flash until the cows come home, but the truth is that flash is not now, and may never be, as inexpensive for bulk storage as spinning rust! I say “may” as technologies like 3D NAND are changing the economics of flash systems. Either way, I think it will still be a long time before an 8TB flash device is cheaper than 8TB of spindle. This is especially true for storing content which does not easily dedupe or compress, such as the two key types of unstructured data which are exponentially driving global storage capacities through the roof year on year: images and video.

With that in mind, what do we do with all of our secondary data? It is still critical to our businesses from a durability and often availability standpoint, but it doesn’t usually have the same performance characteristics as primary storage. Typically it’s also the data which consumes the vast majority of our capacity!


Accounting needs to hold onto at least 7 years of their data, nobody in the world ever really deletes emails these days (whether you realise it or not, your sysadmin is probably archiving all of yours in case you do something naughty, tut tut!), and woe betide you if you try to delete any of the old marketing content which has been filling up your arrays for years! A number of my customers are also seeing this data growing at exponential rates, often far exceeding business forecasts.

Looking at the secondary storage market from my personal perspective, I would probably break it down into a few broad groups of requirements:

  • Lower performance “primary” data
  • Dev/test data
  • Backup and archive data

As planning for capacity becomes harder and business needs change almost by the day, I am definitely leaning more towards scale-out solutions for all three of these use cases nowadays. Upfront costs are reduced and I have the ability to pay as I grow, whilst increasing performance linearly with capacity. To me, this is key for any secondary storage platform.

One of the vendors we visited at SFD8, Cohesity, actually targets the latter two of these workload types with their solution, and I believe they are a prime example of where the non-AFA part of the storage industry will move in the long term.

The company came out of stealth last summer and was founded by Mohit Aron, a rather clever chap with a background in distributed file systems. Part of the team who wrote the Google File System, he went on to co-found Nutanix as well, so his CV doesn’t read too badly at all!

Their scale-out solution utilises the now ubiquitous 2U, 4-node rack appliance physical model, with 96TB of HDD and a quite reasonable 6TB of SSD, for which you can expect to pay an all-in price of about $80-100k after discount. It can all be managed via the console or a REST API.

Cohesity CS2000 Series: 2u or not 2u? That is the question…

That stuff is all a bit blah blah blah though, of course! What really interested me is that Cohesity aim to make their platform infinitely and incrementally scalable; quite a bold vision and statement indeed! They do some very clever work around distributing data across their system, whilst achieving a shared-nothing architecture with a strongly consistent (as opposed to eventually consistent), two-phase-commit file system. Performance is achieved by first caching data on the SSD tier, then de-staging it sequentially to HDD.
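
Purely to illustrate the caching idea, here is a toy Python sketch of a write-back cache which acknowledges writes on a fast tier and then drains them to a slow tier in sorted batches. It has nothing to do with Cohesity’s actual OASIS implementation; the batch size and block numbers are invented for the example.

# Toy illustration of "land writes on SSD, destage sequentially to HDD" -
# nothing to do with Cohesity's actual implementation.
class WriteBackCacheSketch:
    def __init__(self, destage_batch=4):
        self.ssd = {}                   # pretend SSD tier: block_id -> data
        self.hdd = {}                   # pretend HDD backing store
        self.destage_batch = destage_batch

    def write(self, block_id, data):
        # Acknowledge the write as soon as it lands on the SSD tier.
        self.ssd[block_id] = data
        if len(self.ssd) >= self.destage_batch:
            self.destage()

    def destage(self):
        # Drain the SSD cache to HDD in one sorted pass, turning a random
        # ingest pattern into sequential writes for the spinning disks.
        for block_id in sorted(self.ssd):
            self.hdd[block_id] = self.ssd[block_id]
        self.ssd.clear()

cache = WriteBackCacheSketch()
for block in (17, 3, 99, 42):           # random-ish write pattern
    cache.write(block, b"some data")
print(sorted(cache.hdd))                # [3, 17, 42, 99]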

I suspect that being infinitely scalable will be difficult to achieve, if only because you will almost certainly end up bottlenecking at the networking tier (cue boos and jeers from my wet-string-loving colleagues). In reality most customers don’t need infinite scale, as this just creates one massive fault domain. Perhaps a better aim would be to scale massively, but cluster into large pods (perhaps by layer 2 domain) and be able to intelligently spread or replicate data across these fault domains for customers with extreme durability requirements?

Lastly, they have a load of built-in data protection features in the initial release, including instant restore and file-level restore, the latter achieved by cracking open VMDKs for you and extracting the data you need. More mature features, such as SQL or Exchange object-level integration, will come later.

Cohesity Architecture

As you might have guessed, Cohesity’s initial release appeared to be just that: an early release with a reasonable number of features on day one. Not yet the polished article, but plenty of potential! They have already begun to build on this with the second release of their OASIS software (Open Architecture for Scalable Intelligent Storage), and I am pleased to say that next week we get to go back and visit Cohesity at Storage Field Day 9 to discuss all of the new bells and whistles!

Watch this space! 🙂

To catch the presentations from Cohesity at SFD8, you can find them here:
http://techfieldday.com/companies/cohesity/

Further Reading
I would say that the Cohesity session generated more debate and interest among the other delegates than any other at SFD8. Check out some of their posts here:

Disclaimer/Disclosure: My flights, accommodation, meals, etc., at Storage Field Day 8 were provided by Tech Field Day, but there was no expectation or request for me to write about any of the vendors’ products or services, and I was not compensated in any way for my time at the event.

My Synology DSM Blue LED issue was actually just a failed drive!

This weekend I spent several hours trying to resolve an issue with my Synology DS413j, which I use for backing up my other two Synology hosts (DS412+).

I was experiencing many of the symptoms in the following post, yet I was convinced I could not have been hacked, as this server is not exposed to the internet! Not only that, but I had recently updated DSM (2-3 weeks ago), so I wondered if that could be the cause…

http://forum.synology.com/enu/viewtopic.php?f=108&t=82141

After following the full 11-step fix (minus the “migratable” step, as it wouldn’t work), I still had a box which was unresponsive. My symptoms were:

  • The power LED blinked blue
  • I could not log into the DSM console
  • The DSM login simply said “Processing, please wait” forever, or eventually timed out
  • I was able to ping the device
  • I could not reset DSM using the reset button on the back of the device
  • Booting the device took a very long time (up to 20 mins), with Synology Assistant showing “Starting Services…” for most of that time
  • Even after following the rebuild steps, I could not get Synology Assistant to show “Migratable”
  • Once booted, I could see the SMB shares and access them intermittently

After much head scratching I decided to take each of the drives in turn and test them in my Windows desktop. I have 4 drives in the host: 2x Seagate and 2x WD Red. If any of them were going to fail, I believed it would be one of the Seagates, so I tested those first. I plugged in disk number one, ran some tests, all OK. I then plugged in disk number two, and discovered that Windows would not even mount the drive. Not only that, but it was actually causing Device Manager to hang.

Faulty drive identified, I powered on the Synology with only the remaining 3 drives. Much to my relief, the system booted within a minute and started to beep, warning me that it was missing a drive. DSM is now responding just fine.

I have now ordered a new WD Red drive to replace the failed Seagate (which was out of its very short warranty of course!).

Moral of the story: If your box looks like it may have been hit by SynoLocker but it was never on the internet, try testing all the drives in turn. You may just save yourself a few hours / days of pain!

One other wee tip is to also enable SSH access. I’m not sure if I could have logged in via SSH in the hung state, but it may have given me another troubleshooting avenue.
