100th post and time for a change…

Wow! Little did I think when I posted my first couple of silly posts in May 2010 that six years later I would actually be doing this thing on a regular basis, and that it would have given me so many amazing learning and networking opportunities!

I originally built the blog both to remind myself of stuff by blogging about it, and to test out running Ubuntu in my homelab; more fool me, however, as I chose Joomla as my CMS platform…

I’m sure it’s a great product, but frankly, as a novice blogger, I found it a really unfriendly platform, which discouraged me from actually using it, and for the first two years I managed a sum total of three posts! Woohoo!

I then made what was, in hindsight, a very sensible decision to switch to WordPress. Since then I haven’t looked back, averaging a couple of posts a month for the past four years.

The funny thing I have found about blogging over this time is that although I do it mostly for the enjoyment of writing and sharing information, I find myself in a permanent state of mental flagellation over not producing enough content or publishing often enough.

Like everyone, I have my excuses, not least my two small children and a crazily busy job, but I do what I can! I am always in awe of the amount of content some bloggers manage to generate whilst still staying sane and having a personal life!

I don’t believe I’m the only one who feels like this… Perhaps we should start a support group and give the condition a name? How about “Bloggers Contrition”?

Anyway enough jabbering…

As to the title above, if you are a regular visitor to the site you may already have noticed some small changes going on with the domain name and titles. I am not rebranding, but I felt that switching from .org to .it and dropping the www was a nice way to give the site a bit of a refresh going into 2016.

http://tekhead.it/blog

The process itself will probably take me a couple of weeks to complete as I want to make sure I have all of the right 301 redirects in place before the final switch, but I am not anticipating this being a huge issue. If for any reason you happen to spot any of the content becoming unavailable, please let me know via a wee tweet!
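For the curious, a 301 is simply a permanent redirect issued by the web server, telling browsers and search engines that the content has moved for good. As a purely illustrative sketch (assuming an Apache host with mod_rewrite enabled, which is typical for WordPress; the actual setup here may well differ), the old domain could be pointed at the new one with something like this in an .htaccess file:

    # Hypothetical example: permanently redirect all tekhead.org traffic
    # (with or without the www prefix) to tekhead.it, preserving paths
    RewriteEngine On
    RewriteCond %{HTTP_HOST} ^(www\.)?tekhead\.org$ [NC]
    RewriteRule ^(.*)$ http://tekhead.it/$1 [R=301,L]

The [R=301] flag is what makes the redirect permanent, so search engines transfer the old pages’ standing to the new URLs rather than treating them as brand new content.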

Anyhoo, I’ll just wrap up by saying thank you all very much for allowing me to continue ranting on this little corner of the internet and for all the positive comments and feedback over the years. I shall endeavour to keep it up – if I can think of anything to write about!

Life, Web

Windows Server 2012 MCSA Upgrade 70-417 Study Guide and Exam Experience

Sat and passed the 70-417 exam this week, so I thought I would get a few thoughts down for the benefit of the handful of people who may still be planning to sit it. Yes, I know it’s Windows 2012, and I am writing this in 2016, but I’ve been a bit busy the last few years doing “other stuff”. 🙂

Preparation Materials

The materials I used to prepare for this exam were relatively few, and certainly very inexpensive!

  1. Upgrading Skills to Windows Server 2012 Jump Start on Microsoft Virtual Academy – not massively in depth, but a great introduction to the new features in 2012.
  2. What’s New in Windows Server 2012 R2 Jump Start on Microsoft Virtual Academy – again, a good overview of the new changes. Think of it as the foundation on which to build your new skills!
  3. Pluralsight training: Dipped in and out of the 70-410 / 70-411 / 70-412 courses for areas where I needed additional knowledge. The quality of course material on Pluralsight is second to none, and they are always my go-to video training provider. The only shame is that they don’t have a specific 70-417 course, which you can get from their biggest competitor (CBT Nuggets).
  4. Pluralsight: Windows Server 2012 Remote Desktop Infrastructure.
  5. Exam Ref 70-417 Upgrading from Windows Server 2008 to Windows Server 2012 R2 (MCSA) by J.C. Mackin. This was by far the single most effective aid to learning all of the new features, as well as passing the exam! If you only have time to use one study aid, this is the one to invest in. It’s also only just over a tenner! I will definitely be investing in these official guides for my future MS exams (if I do any – see below!).
  6. Followed the blueprint on the MS 70-417 site, to confirm that I had a reasonable understanding of each of the areas tested.
  7. Spent a number of hours installing and configuring as many of the different new / updated features in Windows 2012 R2, on my home lab (Intel NUC Nanolab). In terms of getting to know what the different configuration options and processes are, this is invaluable!

Exam Experience and Tips
This exam is unlike most other MS exams (or indeed other vendors’ exams) in that it is broken down into three distinct sections, covering each of the three equivalent exams (70-410/411/412). Once you complete a section, you cannot go back to the previous one. Personally, this is not a massive issue for me, as my exam technique is to answer and move on. If I’m unsure, I go with my gut feeling, as this is probably more likely to be right than anything I come up with after spending 10 minutes wavering back and forth between answers!

Taking this three-section format a step further, your final grade is actually based on the lowest score of the three sections. Worse still, if you don’t meet 70% in any one section, even if you ace the other two, you fail the exam. No pressure then! I believe it varies, but I had fewer than 60 questions, roughly split three ways between the sections.

Everyone is going to have their strengths and weaknesses, but I personally found the middle section the trickiest, passing by a relatively small margin; the first and last were not too bad.

It felt to me like the typical mixed MS bag: easy marks from simple questions, and the insanely difficult “how would you know that one setting or feature unless you had implemented it in some obscure use case” variety. This is perhaps where I feel MS exams are sometimes not very realistic and don’t actually test your real-world understanding / skills. This has become even worse in the past few years, as you are now expected to memorise literally hundreds of PowerShell commands, many of which you will probably never use, or could simply look up in the ISE when you need them.

In terms of tips, my number one suggestion is that you make absolutely sure you know all of the key PowerShell commands required by the blueprint / exam guide. Beyond that, practise as much of the configuration as you can in your home lab, as you will be expected to know which “nerd knobs” to turn and buttons to click to achieve some activities.

Closing Thoughts on the Current State of Microsoft Exams
I have stated this openly before, but I will say it here again: I strongly object to the concept of certifications that are linked to a specific product version having an expiry date. There is absolutely no benefit to the individual, or indeed the industry, in having someone take the same exam over and over again every couple of years, when any particular version is only “current” for 3-5 years anyway.

Do employers of vocational degree graduates expect you to go back to University every couple of years and re-take your finals to prove you understood the content? Of course not! They take your degree as proof that you understood the subject matter at the time, and that you have gained skills and experience both from that time and subsequently.

The other joke here is that the technical certifications themselves do not actually prove that you truly know how to do the job anyway, especially with the prevalence of brain dumps, and IMHO are only a gateway and aid to recruiters. Unless you’re a contractor, the further you progress in your career, the less potential employers actually seem to care about these certifications anyway. They appear to me to be seen as a “nice to have”, but your experience and skills are far more important.

For this reason I have decided that, even as a self-professed certification junkie, it is very unlikely that I will take my new MCSA 2012 and upgrade it all the way to the MCSE, largely due to the 3 year time limit and re-certification requirement. I would far rather spend my limited time learning other new technologies (for example AWS, Docker, Vagrant, etc.), with or without certification, and using those new skills to progress my career.

I don’t think there is any doubt that the new Microsoft is making a great many positive decisions under Satya Nadella’s leadership, but the organisation’s decision to expire certs is not one I can get myself behind.

Certification, Microsoft

Top 5 Posts of 2015 on Tekhead.org

This is just a very quick note to say thank you, everyone, for your awesome support and continued readership over the past year! Without that I don’t know that I would put in the effort!

I very much hope that the content produced continues to be of some use in the coming year…

Moving swiftly on to the Top 5 most popular posts of 2015; they were as follows:

  1. My Synology DSM Blue LED issue was actually just a failed drive!
  2. NanoLab – Running VMware vSphere on Intel NUC – Part 1
  3. Installing Docker on Ubuntu Quick Fix
  4. Docker Part 1 – Introduction and HOWTO Install Docker on Ubuntu 14.04 LTS
  5. NanoLab – Running VMware vSphere on Intel NUC – Part 2

I can’t say I’m surprised at the popularity of the Synology post, as there are far more Synology users out there than virtualisation and storage admins, I should think… not 100% my usual reader demographic. 😉

The Nanolab series continues in popularity, which warms my cockles! They make for an awesome homelab, and I have a handful of posts almost ready to go for 2016 to continue this series.

Finally, and most interestingly considering current industry trends, the Docker HOWTO series has definitely proven very popular, even though so far I have stuck to the absolute basics! I will definitely endeavour to expand on this series throughout this year.

So that’s it for now, just a quick one. I hope you all had an awesome New Year (I spent mine this year watching Star Wars: The Force Awakens woohoo!) and wish you all the best for the exciting things to come in 2016!

Web

Why are storage snapshots so painful?

Have you ever wondered why we don’t take snapshots more often than about every 5-15 minutes in most solutions, and in many others, far less often than that?

It’s pretty simple to be honest… The biggest problem with taking snapshots is quiescing the data stream to complete the activity. At a LUN level, this usually involves some form of locking mechanism to pause all IO while any metadata updates or data redirections are made, after which the IO is resumed.

For small machines and LUNs with minimal IO load, this is generally such a quick operation that it has virtually no effect on the application user and is pretty much transparent. For busy applications, however, data can be changing at such a massive rate that disrupting that IO stream, even for a few seconds, can have a significant impact on performance and user experience. In addition, the larger the number of snapshots in the snap tree, the more performance tends to degrade, through the overhead of managing all of those snapshots, copy-on-write activity, and, of course, lots of locking.
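
To make the pain point a little more concrete, here is a deliberately over-simplified Python sketch (entirely hypothetical, and nothing like any real array’s implementation) of a copy-on-write volume, where snapshot creation has to quiesce IO by holding the same lock that every write needs:

    import threading

    class CowVolume:
        """Toy copy-on-write volume: snapshots share blocks until overwritten."""

        def __init__(self):
            self.blocks = {}      # block_id -> current data
            self.snapshots = []   # each snapshot preserves overwritten blocks
            self.io_lock = threading.Lock()

        def write(self, block_id, data):
            with self.io_lock:    # every write contends on the same lock
                old = self.blocks.get(block_id)
                if old is not None:
                    # Preserve the old data in any snapshot that hasn't copied it yet
                    for snap in self.snapshots:
                        snap.setdefault(block_id, old)   # the "copy" in copy-on-write
                self.blocks[block_id] = data

        def snapshot(self):
            # IO is paused (lock held) while the snapshot metadata is created;
            # on a busy volume this pause is exactly what users feel
            with self.io_lock:
                self.snapshots.append({})

Note how every additional snapshot adds work inside the write path, and how the snapshot operation itself blocks all IO while it runs; scale that up to a busy LUN with a deep snap tree and the degradation described above falls straight out.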

This problem is then multiplied several times over when you want to get consistency across multiple machines, for example when you want to get point-in-time consistency for an entire application stack (Web / App / DB, etc).

So what do we typically do? We reduce the frequency at which we take these snaps in order to minimise the impact, whilst still having to meet the (usually near-zero, because all data is critical, right?) RPO set by the business.

At SFD8, we had a very well received presentation from INFINIDAT, a storage startup based in Israel and founded by industry legend Moshe Yanai (the guy who brought you EMC Symmetrix / VMAX, and subsequently XIV). Moshe’s “third generation” enterprise-class storage system comes with one particular feature in which I was really interested: snapshots! Yes, I know it sounds like a boring “checkbox in an RFP” feature, but when I found out how it worked I was really impressed.

For every single write stripe which goes to disk, a checksum and a timestamp (from a high-precision clock) are written. This forms the basis on which the snapshot system is built (something they call InfiniSnap™).

If you have a microsecond-accurate clock and a timestamp on every write, then in order to achieve a snapshot you simply have to pick a date and time! Anything written before that time is included in the snap, and anything written on or after it is not. This means no locking or pausing of IO during a snap, making the entire process a near-zero-time, zero-impact operation! A volume therefore has indistinguishable performance with or without snapshots. Wow!
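
As a rough illustration of the idea (my own hypothetical sketch in Python, not INFINIDAT’s actual implementation, which will be vastly more sophisticated), imagine every write being tagged with a timestamp, so that a “snapshot” is nothing more than a chosen cut-off time used to filter reads:

    import bisect
    import time

    class TimestampedVolume:
        """Toy volume where every write carries a timestamp.
        A snapshot is just a point in time: no locking, no copying."""

        def __init__(self):
            self.history = {}   # block_id -> sorted list of (timestamp, data)

        def write(self, block_id, data):
            # No snapshot bookkeeping in the write path at all
            self.history.setdefault(block_id, []).append((time.time(), data))

        def snapshot(self):
            # "Taking" a snapshot is simply recording the current time
            return time.time()

        def read(self, block_id, as_of=None):
            # Return the live data, or the data as it stood at snapshot time
            versions = self.history.get(block_id, [])
            if not versions:
                return None
            if as_of is None:
                return versions[-1][1]
            # The last write strictly before the snapshot timestamp wins
            i = bisect.bisect_left(versions, (as_of,))
            return versions[i - 1][1] if i > 0 else None

Because the write path never pauses and nothing is copied when a snapshot is taken, the number of outstanding snapshots has essentially no bearing on write performance.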

It sounds so simple it shouldn’t work, but according to INFINIDAT they can easily support up to 100,000 snaps per system, and even this isn’t a real limit; they picked the figure because it was a double-digit percentage bigger than the next closest array on the market. They will also happily support more than this if you ask; they said they just need to test it first. In addition, each snap group will support up to 25 snaps per second, and they guarantee an RPO as low as 4 seconds based on snapshots alone. You can then use point-in-time replication to create an asynchronous copy on another array if needed. Now that’s granular! 🙂

The one caveat I would add is that this does not yet appear to fix ye olde faithful crash-consistent vs application-consistent issue, but it’s a great start. Going back to the application stack “consistency group” concept, in theory you generally only need to VSS the database VM, so it should be much easier and simpler to have a consistent snap across an app stack with minimal overhead. As we move more towards applications using NoSQL databases etc., this will also become less of an issue.

The above was just one of the cool features covered in their presentation, about which the general consensus was very positive indeed! A couple of weeks ago I was also able to spend a little time with one of INFINIDAT’s customers, who happened to be attending the same UKVMUG event. Their impressions of the quality of the array build (with a claimed 99.99999% availability!), the management interface, general performance during initial testing, the compelling pricing, and, of course, the very funky Matrix-like chassis were all very positive too.

If you want to see the INFINIDAT presentation from SFD8, make sure you have your thinking hat on and a large jug of coffee! Their passionate CTO, Brian Carmody, was a very compelling speaker and was more than happy to get stuck into the detail of how the technology works. I definitely felt that I came away a little smarter for having been in the audience! He also goes into some fascinating detail about genome sequencing, the concept of cost per genome, and its likely massive impact on the storage industry and our lives in general. The video is worth a watch for this section alone…

Further Reading
Some of the other SFD8 delegates have their own takes on the presentation we saw. Check them out here:

Dan Frith – INFINIDAT – What exactly is a “Moshe v3.0”?
Enrico Signoretti’s blog Juku.it – Infinidat: awesome tech, great execution
Enrico Signoretti writing on El Reg – Has the next generation of monolithic storage arrived?
Ray Lucchesi – Mobile devices as a cache for cloud data
Vipin V.K. – Infinibox – Enterprise storage solution from Infinidat
GreyBeards on Storage Podcast – Interview with Brian Carmody

Disclaimer/Disclosure: My flights, accommodation, meals, etc., at Storage Field Day 8 were provided by Tech Field Day, but there was no expectation or request for me to write about any of the vendors’ products or services, and I was not compensated in any way for my time at the event.

Storage, Tech Field Day