Tag Archive for NetApp

Top 10 Tekhead Posts of 2016

I’m pleased to say that I upped my game somewhat over the past year, managing to churn out 62 posts in 2016, more than double the 28 posts I produced in 2015!

There were a few other interesting trends over the past year, too. The balance between VMware and other subjects has definitely shifted for me; for example, I wrote well over a dozen posts on AWS.

I guess this is probably representative of both my recent role change and the shift in my customers from being 90%+ VMware houses to a broad mix of different cloud platforms, both public (AWS / Azure) and private (VMware / OpenStack).

This trend is only going to accelerate in the future, and I'd suggest Scott Lowe's Full Stack Journey podcast is well worth subscribing to for great information on how to avoid being left behind as our industry morphs over the coming years!


It’s worth noting that this trend is also mirrored in the top 5 articles alone, which include popular newer technologies such as Docker and AWS. That said, it’s great to see the Intel NUC Nanolab series is still as popular as ever, and people are obviously still keeping their vSphere skills and certs up to date, based on the VCP delta study guide popularity.

You may also have noticed that I have been a little quieter of late. This has mainly been down to starting my new role earlier this year, studying for exams, and a number of other projects I've been involved in (such as the Open TechCast podcast). Hopefully I can find a little more balance between them all in 2017, though I already have a couple of podcasts, a VMUG presentation, and a possible exam lined up for January, so I'm not really helping myself on that front!

Tekhead Post Stats 2016

So, enough jibber jabber! Here follows the top 10 most popular posts of the past 12 months.

Tekhead Top 10 Posts of 2016
  1. My Synology DSM Blue LED issue was actually just a failed drive!
  2. Installing Docker on Ubuntu Quick Fix
  3. NanoLab – Running VMware vSphere on Intel NUC – Part 1
  4. Fix for VMware Remote Console unrecoverable error: (vmrc)
  5. AWS Certified Solutions Architect Associate Exam Study Guide & Resources
  6. VCP6-DCV Delta Exam (2V0-621D) Study Guide and Exam Experience
  7. NetApp – Is this the dawn of a new day?
  8. NanoLab – Part 10 – Your NUCs are nice and cool, but what about your stick?
  9. Index of Tekhead.it Blog Posts on Amazon AWS
  10. Quick Fix for “The task was canceled by a user” when deploying OVA in vCenter 6

Something Mike Preston and I discussed on our recent Open TechCast podcast episode was how frustrating it can be as a blogger that an opinion piece which took ages to write and edit will often get only a small number of views, whilst a quick tip which took a couple of minutes to jot down might get thousands or even tens of thousands over time!

Happily, my top 10 this year includes both types, so my time wasn't completely wasted! 🙂

Anyway, that's enough from me for now; all the best for 2017, folks!

NetApp – Is this the dawn of a new day?


Many people in the storage industry believed that NetApp made a pretty big mistake by underestimating the power of flash and its impact on the storage market. What really impressed me was that at Storage Field Day 9, Dave Hitz stood up and openly agreed!

He then went on to explain how they had recognised this and made a strategic decision to purchase one of the hottest and most innovative flash storage companies in the world, SolidFire. This has clearly been done with the intention of using SolidFire as Polyfilla for the hole in their product portfolio, but I would suggest that it is as much about SolidFire becoming a catalyst for modernising and reforming the organisation.

As with almost any company which has been around for a significant period of time and grown to a significant size (currently standing at around 12,500 employees), NetApp has become rather a behemoth, with all of the usual process-driven issues which beset companies of their scale. Much like an oil tanker, they don’t so much measure their turning circle in metres, as they do in miles.

With the exception of a few key figures and some public battles with a certain 3-letter competitor, their marketing has also historically been relatively conservative and their customers the same. As a current and historical NetApp customer and ex-NetApp admin myself, by no means am I denigrating the amazing job they have done over the years, or indeed the quality of the products they have produced! However, of late I have generally considered them to be mostly in the camp of “nobody ever got fired for buying IBM”.

Nobody ever got fired for buying IBM

In stark contrast, they have just spent a significant chunk of change on a company that is the polar opposite. SolidFire not only have brilliant engineers and impressive technology, but they have also furnished their tech marketing team with some of the most well-known and talented figures in the industry. These guys have been backed up by a strong but relatively small sales organisation, who were not afraid to qualify out of shaky opportunities quickly, allowing them to concentrate their limited resources on chasing business where their unique solution had the best chance of winning. Through this very clear strategy, they were able to grow revenues significantly year on year, ultimately leading to their very attractive $870m exit.

Having experienced a number of M&As myself, both as the acquiring company and the acquired, I can see some parallels to my own experiences. Needless to say, the teams from both sides of this new venture are in for a pretty bumpy ride over the coming months! NetApp must make the transformation into a cutting-edge infrastructure company with a strong social presence, and prove themselves to be more agile in responding to changing market requirements. This will not be easy for some individuals in the legacy organisation, who are perhaps more comfortable with the status quo. The guys coming in from SolidFire are going to feel rather like they're nailing jelly to a tree at times, especially when they run into many of the old processes and old guard attitudes at their new employer.


What gives me hope that the eventual outcome could be a very positive one is that NetApp senior management have already identified and accepted these challenges, and have put a number of policies in place to mitigate them. For example, as I understand it, the staff at SolidFire have been given a remit to ask some "hard questions" whenever they come across blockers to achieving success for the organisation; questions which are robust in nature to say the least! That said, some are as simple as asking "Why?". With executive sponsorship behind this endeavour ensuring that responses like "because that's how we've always done it" will not be acceptable, I am confident that it will enable the SolidFire guys and gals to work with their new colleagues to effect positive change within the organisation.

I think this is reflected in Jeramiah Dooley’s recent post here, which echoes so many elements of this post I almost considered not hitting publish! 😮

If the eventual outcome of this is to make NetApp stronger and more viable in the long term, then all the better it will be for those who stick around to enjoy it! This, of course, will benefit the industry as a whole by maintaining a strong and broad set of storage companies to keep competition fierce and prices low for customers. Win-win!


It is certainly going to be an interesting couple of years, and I for one am looking forward to seeing the results!

You can find the session videos from all the guys at NetApp here; I would say they are well worth the time to watch:
NetApp Presents at Storage Field Day 9

Further Reading
Some of the other SFD9 delegates had their own takes on the presentation we saw. Check them out here:

Disclaimer/Disclosure: My flights, accommodation, meals, etc. at Storage Field Day 9 were provided by Tech Field Day, but there was no expectation or request for me to write about any of the vendors' products or services, and I was not compensated in any way for my time at the event.

Storage Field Day 9 – Behind the Curtain


Tech Field Day is an awesome experience for all of the delegates! We get to spend an entire week unabashedly geeking out, as well as hanging out with the founders, senior folk and engineers at some of the most innovative companies in the world!

For those people who always wondered what goes on behind the scenes in the Tech Field Day experience, I took a few pano shots at the Storage Field Day 9 event this week.

Here they are, along with most of my favourite tweets and photos of the week… it was a blast!

Panos

Pre-Event Meeting & Plexistor

NetApp & SolidFire

Violin Memory

Intel

Cohesity

VMware

The rest of the event…

Until next time… 🙂

Software Defined Storage Virtualisation – How useful is that then?

Ignoring the buzzword bingo post title, storage virtualisation is not a new thing (and for my American cousins, yes, it should be spelt with an s! 🙂 ).

NetApp, for example, have been producing the V-Series controller for many years, which could virtualise pretty much any storage you stuck in the back of it, then present it as NFS and layer on all of the standard ONTAP features.

The big advantage then was that you could use features which might otherwise be missing from your primary or secondary storage tiers, as well as being able to mix and match different tiers of storage from the same platform.

In a previous role, we had an annual process to take a full backup of a 65TB Oracle database and restore it at another site over a rather slow link, using an ageing VTL that could just about cope with incrementals and not much more on a day-to-day basis. End to end, this process took a month!

Then one year we came up with a plan to use virtualised NFS storage to take compressed RMAN backups, replicate the data using SnapMirror, and restore on the other side, roughly as sketched below. It took us 3 days; an order of magnitude improvement!
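
For the curious, here's a minimal sketch of how the backup leg of that workflow could be scripted. This is a reconstruction for illustration only: the mount points, the use of OS authentication, and the wrapper script are my own assumptions, not the actual environment we ran.

```python
# Hypothetical sketch of the backup leg described above: take a compressed
# RMAN backup to an NFS mount backed by the virtualised storage, ready for
# SnapMirror replication to the remote site. Paths and authentication
# details are illustrative assumptions only.
import subprocess

RMAN_SCRIPT = """
RUN {
  BACKUP AS COMPRESSED BACKUPSET DATABASE
    FORMAT '/mnt/nfs_backup/db_%U';
  BACKUP CURRENT CONTROLFILE
    FORMAT '/mnt/nfs_backup/ctl_%U';
}
"""

def run_compressed_backup() -> None:
    # rman reads its commands from stdin; compressing the backupsets up
    # front shrinks what SnapMirror later has to push over the slow link
    subprocess.run(["rman", "target", "/"], input=RMAN_SCRIPT,
                   text=True, check=True)

if __name__ == "__main__":
    run_compressed_backup()
```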

That was 4 years ago, when the quantity of data globally was about a quarter of what it is now; the problem of data inertia is only going to get worse as the world's storage consumption doubles roughly every two years!

What businesses need is the flexibility to use a heterogeneous pool of storage, spanning different tiers, vendors, and locations, and to move their data around as required to meet their current IT strategy, without having to change paths to data or take downtime (especially for non-virtualised workloads, which don't have the benefits of Storage vMotion etc.). These tiers need to provide the consistent performance defined by individual application requirements.

It’s for this reason that I was really interested in the presentation from Primary Data at Storage Field Day 8. They were founded just two years ago, came out of stealth at VMworld 2015, and plan to go GA with their first product in less than a month’s time. They also have some big technical guns in the form of their Chief Scientist, the inimitable Steve Wozniak!

One of the limitations of the system I used in the past was that it was ultimately a physical appliance, with all the usual drawbacks thereof. Primary Data provide the power to abstract data services in software only, presented in the most appropriate format for the workload at hand (e.g. for vSphere, Windows, Linux etc.), so issues with data gravity and inertia are effectively mitigated. I immediately see three big benefits:

  • Not only can we decouple the physical location of the data from its logical representation and therefore move that data at will, we can also very quickly take advantage of emerging storage technologies such as VVOLs.
    Some companies who shall remain nameless (and happen to have just been bought by a four-letter competitor) won't have support for VVOLs for up to another 12 months on some of their products, but with the "shim" layer of storage virtualisation from Primary Data, we could do it today on virtually any storage platform, whether it is VVOL compliant or not. Now that is cool!
  • By virtualising the data plane and effectively using the underlying storage as object storage / chains of blocks, they enable additional data services which may either not be included with the current storage, or may be an expensive add-on licence. A perfect example of this is sync and async replication between heterogeneous devices.
    Perhaps then you could spend the bulk of your budget on fast and expensive storage in your primary DC from vendor A, then replicate to your DR site asynchronously onto cheaper storage from vendor B, or even a hyper-converged storage environment using all local server media. The possibilities are broad to say the least!
  • The inclusion of policy-based Quality of Service from day one. In Primary Data parlance, these are called SLOs: Service Level Objectives for applications, with specific IOPS, latency etc. (see the sketch after this list).
    QoS does not even exist as a concept on many recent storage devices, much to the chagrin of many service providers, so being able to retrofit it would protect the ROI on existing spend whilst keeping the platform services up to date.
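
To make the SLO idea a little more concrete, here is a minimal sketch of what a declarative, per-application objective might look like. The class and field names below are my own invention for illustration; Primary Data's actual API may look nothing like this.

```python
# Illustrative model of a policy-based SLO: the application owner declares
# targets, and the control plane is free to place (and move) the data on
# whichever back-end storage can satisfy them. All names are hypothetical.
from dataclasses import dataclass

@dataclass(frozen=True)
class ServiceLevelObjective:
    """Per-application objectives the data control plane must satisfy."""
    app_name: str
    min_iops: int           # sustained IOPS floor the application needs
    max_latency_ms: float   # worst acceptable response time
    replicated: bool = True # whether the data must also exist elsewhere

# A latency-sensitive database gets aggressive targets; a bulk archive
# tier can tolerate far looser ones on cheaper storage.
oltp_slo = ServiceLevelObjective("oracle-prod", min_iops=50_000,
                                 max_latency_ms=2.0)
archive_slo = ServiceLevelObjective("cold-archive", min_iops=500,
                                    max_latency_ms=50.0, replicated=False)
```

The appeal of this model is that the policy travels with the application rather than being bolted to a particular array, which is exactly what would make retrofitting QoS onto older kit plausible.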

There are, however, still a few elements which to me are not yet perfect. Access to SMB requires a filter driver in Windows, sitting in front of the SMB client, so the client thinks it's talking to an SMB server but is actually going via the control plane, which routes the data to the physical block chains. That's a bit of a pain to retrofit to any large legacy environment.

vSphere appears to be a first-class tenant in the Primary Data solution, with VASA and NFS-VAAI supported out of the "virtual" box. However, it would be nice to have Primary Data act as a VASA client too, so it could read and then surface all capabilities from the underlying storage straight through to the vSphere hosts.

You will still have to do some basic administration on your storage back end to present it through to Primary Data before you can start carving it up in their "Single Pane of Glass". If they were to create array plugins allowing you to remotely manage many common arrays, that would really make the SPoG shine! (Yes, I have a feverish, unwavering objection to saying that acronym!)

I will certainly be keeping an eye on Primary Data as they come to market. Their initial offering would have solved a number of issues for me in previous roles had it been available a few years earlier, and I can definitely see opportunities where it would work well in my current infrastructure. I guess it is now up to the market to decide whether they see the benefits too!

Further Reading
Some of the other SFD8 delegates have their own takes on the presentation we saw. Check them out here:

Ray Lucchesi – Primary data’s path to better data storage presented at SFD8

Dan Frith – Primary Data – Because we all want our storage to do well

Disclaimer/Disclosure: My flights, accommodation, meals, etc. at Storage Field Day 8 were provided by Tech Field Day, but there was no expectation or request for me to write about any of the vendors' products or services, and I was not compensated in any way for my time at the event.
