Tag Archive for Hyper-V

Startup Spotlight: Re-skill, Pivot or Get Squashed


This post is about a startup of sorts, and was triggered by a conversation I had with an industry veteran a few months back. By veteran, of course, I mean an old bugger! 😉

It is an entity which begins its journey sourcing a target market in the tech industry and spends day and night pursuing that market to the best of its ability.

It brings in resources to help meet the key requirements of the target market; some of those resources are costly, and others not so much.

Occasionally it hits a bump in the road with funding and potentially needs to find other sources of investment, and may go through several rounds of funding over the course of a number of years. Eventually it gets to a point where the product is of a decent quality and market value.

Then it does a market analysis and discovers that the market has shifted and if the entity does not pivot or indeed re-skill, they will become irrelevant within a few short years.

Eh?

I am of course talking about the career of an IT professional.

Though I may be slightly exaggerating about becoming irrelevant quite so fast, we certainly all made the choice to follow a career in one of the fastest moving industries on the planet. We have no choice but to continue to develop and maintain our knowledge in order to keep driving our careers forward.

As a self-confessed virtual server hugger with a penchant for maintaining a pretty reasonable home lab, I enjoy understanding the detailed elements of a technology, how they interact, and acknowledging where the potential pitfalls are. The cloud, however, is largely obfuscated in this respect; to the point where many cloud companies will not even divulge the location of their data centres, never mind the equipment inside them and configuration thereof!


That said, those of you with a keen eye may have noticed a shift in my Twitter stream in the past year or so, with subjects tending towards a more public cloudy outlook… Talking to a huge range of customers in various verticals on a regular basis, it feels to me that a great many organisations are right on the tipping point between their current on-premises / dedicated managed services deployment models and full public cloud adoption (or at the very least hybrid!).

It’s hard to believe that companies like AWS have actually been living and breathing public cloud for over ten years already; that’s almost as long as my entire career! In that time they have grown from niche players selling a bit of object storage, to the Behemoth-aaS they are today. To a greater or lesser extent (and for better or worse!), they are now the yardstick upon which many cloud and non-cloud services are measured. This is also particularly the case when it comes to cost, much to the chagrin of many across the industry!

To me, this feels like the optimum time for engineers and architects across our industry (most definitely including myself) to fully embrace public and hybrid cloud design patterns. My development has pivoted predominantly towards technologies which are either native to, or which support, public cloud solutions. Between family commitments, work, etc, we have precious little time to spend on personal development, so we need to spend it where we think we will get the most ROI!


So what have I been doing?

Instead of messing about with my vSphere lab of an evening, I have spent recent months working towards certified status in AWS, Azure, and soon, GCP. This has really been an eye-opener for me around the possibilities of designs which can be achieved on the current public cloud platforms; never mind the huge quantity of features these players are likely to release in the coming 12 months, or the many more after that.

Don’t get me wrong, of course, everything is not perfect in the land of milk and honey! I have learned as much in these past months about workloads and solutions which are NOT appropriate for the public cloud, as I have about solutions which are! Indeed, I have recently produced a series of posts covering some of the more interesting AWS gotchas, and some potential workarounds for them. I will be following up with something similar for Azure in the coming months.

Taking AWS as an example, something which strikes me is that many of the services are not 100% perfect, and don’t have every feature and nerd knob under the sun available. Most seem to have been designed to meet the 80/20 rule and are generally good enough to meet the majority of design requirements more than adequately. If you want to meet a corner use case or a very specific requirement, then maybe you need to go beyond native public cloud tooling.

Perhaps the same could be said about the mythical Full Stack Engineer?


Anyhow, that’s enough rambling from me… By no means does this kind of pivot imply that everything we as infrastructure folks have learned to date has been wasted. Indeed, I personally have no intention of dropping “on premises” skills or of ceasing to design managed dedicated solutions. For the foreseeable future there will likely be a huge number of appropriate use cases, but in many, if not most, cases I am being engaged to look at new solutions with a publicly cloudy mindset!

Downtime sucks! Designing Highly Available Applications on a Budget


Downtime sucks.

I write this whilst sitting in an airport lounge, having been disembarked from my plane due to a technical fault. I don’t really begrudge the airline in question; it was a plumbing issue! That is a physical failure of the aircraft and just one of those things (unless I find out later they didn’t do the appropriate preventative maintenance, of course)! Sometimes failures just happen, and I would far rather it was just a plumbing issue than an engine issue!

What is not excusable, however, is if the downtime is easily preventable; for example, if you are designing a solution which has no resilience!

This is obviously more common with small and medium sized businesses, but even large organisations can be guilty of it! I have had many conversations in the past with companies who have architected their solutions with significant single points of failure. More often than not, this is due to the cost of providing an HA stack. I fully appreciate that most IT departments are not swimming in cash, but there are many ways to work around a budgetary constraint and still provide more highly available, or at least “Disaster Resistant”, solutions, especially in the cloud!

Now obviously there is High Availability (typically within a single region or Data Centre), and Disaster Recovery (across DCs or regions). An ideal solution would achieve both, but for many organisations it can be a choice between one and the other!

Budgets are tight, what can we do?

Typically HA can be provided at either the application level (preferred), or if not, then at the infrastructure level. Many ways to improve availability are relatively simple and inexpensive. For example:

  • Building on a public cloud platform (and assuming that the application supports load balancing), why not test running twice as many instances with half the specification each? In most cases, unless there are significant storage quantities in each instance, the cost of scaling out this way is minimal.
    If there is a single instance, split it out into two instances, immediately doubling your availability. If there are two instances, what about splitting into 4? The impact of a node loss is then only 25% of the overall throughput capacity for the application, and this can even bring down the cost of HA for applications where the +1 in N+1 is expensive!
  • Again in cloud, if there are more than two availability zones in a region (e.g. on AWS), then take advantage of them! If an application can handle 2 AZs, then the latency of adding a third shouldn’t make much, if any, difference, and costs will only increase slightly with a small amount of extra inter-AZ bandwidth or per-AZ services (e.g. NAT gateways).
    Again, in this scenario the loss of an AZ will only take out 33% of the application servers, not 50%, so it is possible to reduce the number of servers which are effectively there for failover only.
  • If you can’t afford to run an application as multi-AZ or multi-node, consider putting it in an auto-scaling group or scale-set with a minimum and maximum of 1 server. That way if an outage occurs or, in the case of AWS, an entire AZ goes down, an instance will automatically be regenerated in an alternative AZ (a minimal sketch of this follows the list).
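
To illustrate that last bullet, here is a minimal boto3 sketch of such a “self-healing” single instance: an Auto Scaling group pinned to exactly one server but spread across three AZs. The group name, launch configuration, region and subnet IDs are hypothetical placeholders, and it assumes the launch configuration already exists.

```python
# Hypothetical sketch: a single-instance Auto Scaling group spanning
# three AZs. If the instance (or its whole AZ) fails its health check,
# the group replaces it in one of the surviving subnets.
import boto3

autoscaling = boto3.client("autoscaling", region_name="eu-west-1")

autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="spof-app-asg",        # hypothetical group name
    LaunchConfigurationName="spof-app-lc",      # assumed to exist already
    MinSize=1,
    MaxSize=1,
    DesiredCapacity=1,
    HealthCheckType="EC2",
    HealthCheckGracePeriod=300,
    # One subnet per availability zone (hypothetical subnet IDs).
    VPCZoneIdentifier="subnet-aaa111,subnet-bbb222,subnet-ccc333",
)
```
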
What if my app doesn’t like load balancers?

If you have an application which cannot be load balanced, you probably shouldn’t be thinking about running it in the cloud (not if you have any serious availability requirements, anyway!). It amazes me how many business critical applications and services are still running on single servers all over the world!

  • If your organisation is dead set on using cloud for a SPoF app, then making it as ephemeral as possible can help. Start by splitting the DBs from the apps, as these can almost always be made HA by some means (e.g. master/slave replication, mirroring, log shipping, etc). Failover nodes also often don’t attract a license fee from many vendors (e.g. MS SQL), so always check your license documentation to see what you can achieve on the cheap.
  • Automate! If you can deploy application server(s) from a script, even if the worst happens, the application can be redeployed very quickly, in a consistent fashion.
    The trend at the moment is moving towards a more agile deployment process and automated CI/CD pipelines. This enables companies to recover from an outage by rebuilding their environments and redeploying code rapidly (as long as they have a replica of the data or a highly available datastore!).
  • If it’s not possible to script or image the code deployment, then taking regular backups (and snapshots where possible) of application servers, and testing them often, is an option! If you don’t want to go through the inflexible, unreliable and painful nightmare of doing system state restores, then take image-based backups (supported by the vast majority of backup vendors nowadays; a rough sketch follows this list). Perhaps even sync application data to a warm standby server which can be brought online reasonably swiftly, or use an inexpensive DR service such as Azure Site Recovery, to provide an avenue of last resort!
  • If cloud isn’t the best place to locate your application, then provide HA at the infrastructure layer by utilising the HA features of your favourite hypervisor!
    For example, VMware vSphere will have an instance back up and running within a minute or two of a host failure using the vSphere HA feature (which comes with every edition except Essentials!). On the assumption/risk that the power cycle does not corrupt the OS, applications or data, you minimise your exposure to hardware outages.
  • If the budget is not enough to buy shared storage and all VMs are running on local storage in the hypervisor hosts (I have seen this more than you might imagine!), then consider using something like vSphere Replication or Hyper-V Replicas to copy at least one of each critical VM role to another host, and if there are multiple instances, then spread them around the hosts.
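
To make the image-based backup idea above a little more concrete, below is a rough boto3 sketch that takes a point-in-time AMI of an application server. The instance ID and region are hypothetical placeholders, and in practice you would schedule this (and prune old images) rather than run it by hand.

```python
# Hypothetical sketch: create a point-in-time AMI "image backup" of an
# application server. NoReboot=True avoids downtime but is only
# crash-consistent, so quiesce the application first if you can.
import datetime

import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")

INSTANCE_ID = "i-0123456789abcdef0"  # hypothetical app server


def backup_instance(instance_id):
    """Create an AMI of the given instance and return its image ID."""
    timestamp = datetime.datetime.utcnow().strftime("%Y%m%d-%H%M")
    image = ec2.create_image(
        InstanceId=instance_id,
        Name=f"app-backup-{timestamp}",
        NoReboot=True,
    )
    return image["ImageId"]


if __name__ == "__main__":
    print("Created backup AMI:", backup_instance(INSTANCE_ID))
```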

Finally, make sure that whatever happens there is some form of DR, even if it is no more than a holding page or application notification and a replica or off-site backup of critical data! Customers and users would rather see something telling them that you’re working to resolve the problem than get a spinning wheel and a timeout! If you can provide something of limited functionality or performance, it’s still better than nothing!
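
On the holding page point, even a static page on object storage will do. Below is a hedged boto3 sketch that publishes one to an S3 bucket with static website hosting enabled; the bucket name and region are hypothetical, and you would still need to allow public reads on the bucket and point DNS (e.g. a Route 53 failover record) at the website endpoint.

```python
# Hypothetical sketch: publish a bare-bones holding page to S3 static
# website hosting, as an avenue of last resort during an outage.
import boto3

BUCKET = "example-holding-page"  # hypothetical; bucket names are global

s3 = boto3.client("s3", region_name="eu-west-1")

# Create the bucket and enable static website hosting on it.
s3.create_bucket(
    Bucket=BUCKET,
    CreateBucketConfiguration={"LocationConstraint": "eu-west-1"},
)
s3.put_bucket_website(
    Bucket=BUCKET,
    WebsiteConfiguration={
        "IndexDocument": {"Suffix": "index.html"},
        "ErrorDocument": {"Key": "index.html"},
    },
)

# Upload the holding page itself.
s3.put_object(
    Bucket=BUCKET,
    Key="index.html",
    ContentType="text/html",
    Body=(b"<html><body><h1>We'll be back shortly</h1>"
          b"<p>We're aware of the issue and are working on it.</p>"
          b"</body></html>"),
)
```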

TL;DR: High Availability on a Budget

There are a million and one ways to provide more highly available applications; these are just a few. The point is that providing highly available applications is not as expensive as you might initially think.

With a bit of elbow grease, a bit of scripting and regular testing, even on the smallest budgets you can cobble together more highly available solutions for even the crummiest applications! 🙂

Now go forth and HA!

MCTS: 70-246 Monitoring and Operating a Private Cloud with System Center 2012 Exam Review

Well I am very pleased to say that I came back home today certified as an MCSE: Private Cloud… yay!

First off, I would say this is one of the trickiest MS exams I have taken. This is not because the subject is particularly difficult, but purely because of the volume and depth of information you need to cover, as you are in effect being tested on your knowledge of no fewer than seven enterprise applications, as well as their interoperation!

I will admit that due to time constraints I wasn’t able to study all elements of SC2012 in the depth I would have liked (I have barely scratched the surface with App Controller and Config Manager), but I was fortunate not to have been hammered too badly because of this.

I have already listed my study materials in my previous post MCTS: 70-246 Monitoring and Operating a Private Cloud with System Center 2012 Exam Prep and Study Guide, but once again I believe it is really getting as much hands-on experience as you can which makes all the difference.

I created a simple lab environment running the entire thing under VMware Workstation 8 on my desktop machine. The spec of the machine is:

  • Intel Quad Core i7 920 processor
  • 24GB RAM
  • Multiple SSDs (the test lab runs across 2 of them totalling around 150GB of space in use).
  • I also used a FreeNAS 0.7 appliance running on another vSphere box to provide some shared iSCSI storage for my Hyper-V clusters (it doesn’t need to be fast as it’s only used for a couple of test VMs and cluster quorum).

The only time I suffered any real performance issues with this setup was when installing Windows updates. This wasn’t an issue for me as I kicked them off overnight, but if you were being a bit more proactive, you could build one VM first, update it with all the latest patches, install Silverlight and .NET 3.5 / .NET 4 (required by lots of SC products), then sysprep and clone the VMs instead.

As I was being a little lazy, I didn’t do much with nesting VMs this time, so immediately under WS8 I installed 9 VMs. You could of course nest most or all of these roles under your Hyper-V hosts, barring the DC, which is required to authenticate the startup of your VM hosts (an issue which is, in theory, now fixed in Windows Server 2012). The performance reduction is minimal; it’s just a bit of a pain if you want to shut down your machine in a hurry…

| Hostname | Roles | vCPUs | vRAM | vDisks (Thin) |
|---|---|---|---|---|
| SV2008R2-MGT | AD, DNS, SQL Server 2008 R2 SP1 | 2 | 4GB | 40GB + 100GB |
| SCOM | Operations Manager 2012, SQL Server 2008 R2 SP1 Reporting Services | 2 | 2GB | 40GB |
| SCCM | Configuration Manager 2012 | 1 | 2GB | 40GB |
| SCVMM | Virtual Machine Manager 2012 | 1 | 2GB | 40GB |
| SCSM | Service Manager 2012 | 1 | 2GB | 40GB |
| SCAC | Application Controller 2012 | 1 | 2GB | 40GB |
| SCORCH | Orchestrator 2012 | 1 | 2GB | 40GB |
| HV1-FULL | Hyper-V under a full 2008 R2 OS installation | 2 | 4GB | 40GB + 60GB |
| HV2-HVS | Hyper-V Server 2008 R2 | 2 | 4GB | 40GB + 60GB |

Hyper-V Server and 2008 R2 are not supported in a cluster configuration like this, but it will work (with a couple of red lines on your cluster validation report). As long as you implement the following steps, you can then nest 64-bit VMs inside your Hyper-V servers. See Velimir Kojic’s blog post for more info on this, but the headline points are:

  1. Enable virtualisation of VT-x/EPT. This is the same as you would do for virtualising ESX/ESXi under Workstation 8, allowing nested 64-bit VMs.
  2. Add the following line to your VMX files:
    hypervisor.cpuid.v0 = "FALSE"

I did initially try the unified installer, but it proved to be a total pain, especially as some of the components were not recognised, or were missing / different from the download links. The installer also refuses to install, or even recognise, a package if the installer file does not have the correct name. For example, you have to download two editions of ReportViewer (2008 and 2010) and put them in separate directories with their original file names; you cannot just rename them to reportviewer2008.exe and reportviewer2010.exe – very annoying! The same goes for the service packs, SQL installers, etc. In the end I gave up on it and installed all the components manually, which I think probably teaches you more about the install process anyway.

Once I had my lab up and running I simply followed through all of the MS training on the Microsoft Virtual Academy. I genuinely cannot recommend these highly enough, and it really is very good of MS to provide them free of charge. When running through the videos, I tried to emulate every demo on screen using my lab, then followed up by reading as many articles as possible from the other links I included in my prep article.

Good luck to anyone attempting this exam in the future! Next on my agenda was going to be the upgrade to Windows Server 2012, but I have decided to (at long last) slot in some time to aim for a CCNA first!

Related Posts
MCTS: 70-246 Monitoring and Operating a Private Cloud with System Center 2012 Exam Prep and Study Guide

MCTS: 70-246 Monitoring and Operating a Private Cloud with System Center 2012 Exam Prep and Study Guide


After a rather busy summer, I figured it was about time I got round to finishing up my MCSE: Private Cloud by completing the final exam in the track, 70-246. Unfortunately, due to a very busy week since I came back from holiday, I haven’t given myself much time to study for the exam!

At the time of writing there are still no online MOC (Microsoft Official Curriculum) courses on 70-246 (such as the courses you can use with your TechNet subscription), so if you have a manager with a great training budget you can always attend the 10750A: Monitoring and Operating a Private Cloud with System Center 2012 (5 Days) training course. I have other courses I want my dev budget spent on, so I have chosen to use online resources to study for it instead.

As always, I have summarised my prep materials / study guide below for anyone interested:

  • Official Microsoft 70-246 Exam Page
    Links to all official source material, exam reqs, etc. Make sure you know and understand all of the skills measured.
  • Microsoft Virtual Academy Courses – Free!
    For a free resource these courses are superb! I used these previously for my 70-659 exam prep, and have done so again this time. If you haven’t done 70-659 and are approaching 70-246 without any Hyper-V knowledge but perhaps some VMware knowledge, then I highly recommend you consider the “Microsoft Virtualization for VMware Professionals – The Platform” and “Microsoft Virtualization for VMware Professionals – Management” courses first. They are based on 2008 R2, but they will cover off the mapping of terminology, etc.
    Note: there is quite a bit of repetition in the courses, so I will try to highlight, as I go, which are the best use of your time (unless of course you’re a rank whore, in which case do them all!). The courses I completed are as follows:

    1. Configuring and deploying Microsoft’s Private Cloud
      A good intro to Hyper-V 2012 covering a broad base – expect to spend a good 16 hours watching the 8 videos (allowing for pauses for breaks and note-making). As usual, the inimitable Symon Perriman leads the course, assisted by a selection of other MS technical marketeers.
      Be warned, the content in this is very useful, but this was one of the driest MVA courses I have watched to date. At points I did struggle to keep my attention levels up. Try to watch them, say, one video a night, then spend some time playing with your lab on whichever component you were watching. Trying to watch these in one go will zap your brain!
      Note: For some reason this skips the intro video for the jump start course, which can be found here; I recommend you watch it first for a general overview:
      Private Cloud Jump Start (01): Introduction to the Microsoft Private Cloud with System Center 2012
    2. What’s New in System Center 2012
      This follows the same slide deck as the intro to private cloud course I mentioned above, but with a different presenter.
    3. System Center 2012: Virtual Machine Manager (VMM)
      Still a lot of high level technical marketing, but there are some quite useful demos.
    4. System Center 2012 Operations Manager
      Very well presented and goes into a decent amount of detail with plenty of demos.
    5. System Center 2012: Orchestrator & Service Manager
      Another well presented and more in-depth course.
    6. System Center 2012: Configuration Manager
      Review TBC – I did not actually get through this in time before my exam, but plan to revisit it later anyway.
    7. System Center Advisor
      Review TBC – I did not actually get through this in time before my exam, but plan to revisit it later anyway.
    8. Introduction to Private, Hybrid and Public Cloud
      Do this if you are totally new to cloud concepts, otherwise save your time and look elsewhere.
  • System Center 2012 Self-Study Guide by Scott Rachui – Recommended!
    Quite simply the most in-depth, detailed set of study guides I have ever come across! Scott has put in a huge amount of effort to gather all of these resources in one place. Go through as many as you can, but to be honest, you probably won’t have time to get through them all!!!
  • Study Guide by Keith Mayer
    Great resource from MS blogger Keith Mayer. To download his guide, you need to use the “Pay with a Tweet” link to get a copy of his free PDF. Totally worth the price! 🙂
  • Official MS Virtualisation Blog
    If you’re a VMware person, hold onto your hat for some serious politicking, but there is some interesting content if you have time for a browse.
  • Hyper-V White Papers by Aidan Finn
    This site is run by MS MVP Aidan Finn, who has co-authored a load of books on MS products.
  • Build a Home Lab
    I cannot recommend this enough. The best way to learn Hyper-V is to play with it; that way you will have seen the ins and outs for yourself.
    My home lab runs under VMware Workstation 8 on Windows 7 64-bit, with an Intel Core i7 920 and 24GB of RAM.
    To get Hyper-V 2008 R2 to run like this you need to apply a couple of fixes to your hypervisor VMs when you create them. See Velimir Kojic’s blog post for more info on this, but the headline points are:

    1. Enable virtualisation of VT-x/EPT. This is the same as you would do for virtualising ESX/ESXi under Workstation 8, allowing nested 64-bit VMs.
    2. Add the following line to your VMX files:
      hypervisor.cpuid.v0 = "FALSE"
  • More links and updates to follow over the next week…

Please feel free to submit any worthwhile links to study materials and I will include them above.

Related Posts:
MCTS: 70-246 Monitoring and Operating a Private Cloud with System Center 2012 Exam Review
