
Amazon AWS Tips and Gotchas – Part 2 – AWS EBS & RDS MS SQL

Continuing in this series of blog posts taking a bit of a “warts and all” view of a few Amazon AWS features, below are a handful more tips and gotchas when designing and implementing solutions on Amazon AWS, including EBS and MS SQL on RDS.

For the first post in this series with a bit of background on where it all originated from, see here:
http://tekhead.it/blog/2016/02/amazon-aws-tips-and-gotchas-part-1/

For more posts in this series, see here:
Index of AWS Tips and Gotchas

AWS Tips and Gotchas – Part 2 – EBS & RDS
  1. You cannot increase the size of EBS volumes without stopping the instance. If you are designing a scale-out / high availability solution then this is not a big issue, as you should be able to take some downtime on any individual node, but that downtime is going to be fairly significant, and the larger the volume, the more downtime you will incur. The actual process looks like this (summary below, with a rough scripted sketch after the steps):
    • Stop the instance
    • Snapshot the volume
    • Create a new volume from the snapshot, with your new larger size
    • Detach the old volume
    • Attach the new volume and start the instance back up
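
    For illustration, a rough boto3 sketch of that flow is below. All IDs, the region, device name and target size are made-up placeholders, and waiter / error handling is kept to a minimum:

```python
# Rough sketch only: the stop / snapshot / recreate flow described above, via boto3.
# Instance ID, volume ID, device, region and sizes are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")
instance_id = "i-0123456789abcdef0"
old_volume_id = "vol-0123456789abcdef0"
device = "/dev/sdf"

ec2.stop_instances(InstanceIds=[instance_id])
ec2.get_waiter("instance_stopped").wait(InstanceIds=[instance_id])

snap = ec2.create_snapshot(VolumeId=old_volume_id, Description="pre-resize")
ec2.get_waiter("snapshot_completed").wait(SnapshotIds=[snap["SnapshotId"]])

# New, larger volume from the snapshot (must be in the same AZ as the instance)
new_vol = ec2.create_volume(SnapshotId=snap["SnapshotId"], Size=200,
                            AvailabilityZone="eu-west-1a", VolumeType="gp2")
ec2.get_waiter("volume_available").wait(VolumeIds=[new_vol["VolumeId"]])

ec2.detach_volume(VolumeId=old_volume_id, InstanceId=instance_id)
ec2.attach_volume(VolumeId=new_vol["VolumeId"], InstanceId=instance_id, Device=device)
ec2.start_instances(InstanceIds=[instance_id])
# Then grow the partition / filesystem inside the guest OS.
```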

    This is one of those features which is bread and butter for a vSphere or Hyper-V admin, and could be done online in seconds with the vast majority of guest operating systems.

    I think it really highlights the key difference between designing for AWS Cloud, and a traditional enterprise virtual infrastructure. In a solution where most of your hosts are ephemeral, this should not be a big issue. If you try to take a traditional enterprise approach, you may find yourself in hot water, having to take service downtime to make simple changes.

    I suggest that, where possible / appropriate, you avoid using EBS and use alternative options such as S3, which can scale on demand.

    UPDATE 13th Feb 2017: Amazon have just released Elastic Volumes, which allow you to scale up EBS volumes on demand! Yay! More info here:
    Amazon EBS Update – New Elastic Volumes Change Everything
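
    With Elastic Volumes the same grow becomes a single API call (a minimal sketch, assuming boto3 and a placeholder volume ID; you still extend the filesystem in the guest):

```python
# Minimal sketch: online EBS grow via the new Elastic Volumes feature (boto3).
# Volume ID and target size are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")
ec2.modify_volume(VolumeId="vol-0123456789abcdef0", Size=500)  # grow online, no stop required
# Then extend the partition / filesystem inside the guest OS.
```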

  2. Similar to resizing EBS volumes, you cannot hot-resize an instance, or indeed resize it / change its type in place. In order to change instance type you need to detach any EBS volumes (including the root volume if you wish to maintain it too), terminate the instance, create a new one and re-attach your volumes.
    Obviously you cannot re-attach a root volume if you are using instance storage (ephemeral) for this, so make sure you use EBS-backed volumes if you want to maintain your root volumes for any scale-up elements of your solutions which cannot simply be re-created from a bootstrap script.
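
    A very rough boto3 sketch of that swing is below. The AMI, IDs and device name are placeholders, and it deliberately ignores ENIs, tags, security groups and the root volume caveat above:

```python
# Rough sketch only: "resizing" an instance by replacing it, as described above (boto3).
# AMI, instance/volume IDs and device name are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")
old_instance = "i-0123456789abcdef0"
data_volume = "vol-0123456789abcdef0"

ec2.stop_instances(InstanceIds=[old_instance])
ec2.get_waiter("instance_stopped").wait(InstanceIds=[old_instance])
ec2.detach_volume(VolumeId=data_volume, InstanceId=old_instance)
ec2.terminate_instances(InstanceIds=[old_instance])

# Launch the replacement with the new instance type, then re-attach the data volume
resp = ec2.run_instances(ImageId="ami-12345678", InstanceType="m4.xlarge",
                         MinCount=1, MaxCount=1)
new_instance = resp["Instances"][0]["InstanceId"]
ec2.get_waiter("instance_running").wait(InstanceIds=[new_instance])
ec2.attach_volume(VolumeId=data_volume, InstanceId=new_instance, Device="/dev/sdf")
```
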
  3. If your application depends on Microsoft SQL, you are going to be in for a fairly unpleasant surprise! It is not currently possible to resize MS SQL volumes on Amazon RDS once they have been deployed! At all. Full stop. Nada.

    The recommendation from AWS is to deploy your estimated future capacity requirement from day one! Not very cloudy at all… Your only growth option when you hit your initial capacity limit is to migrate all the data to a new RDS instance and take some application downtime to fail over.

    This can be minimised by using things like log shipping from the source instance to get the target as close to up-to-date as possible, but you will still need to shut down and swing your applications, and frankly it’s a risky headache which would be better avoided if possible, and certainly not something you want to be doing on a regular basis. Probably best to design for your estimated growth, and add a percentage on top.
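
    To illustrate the “size it up front” point, a hedged sketch of provisioning an RDS SQL Server instance with growth headroom built in (boto3; the identifier, class, credentials and storage figure are all placeholders):

```python
# Hypothetical sketch: provision RDS for MS SQL with headroom from day one (boto3),
# since allocated storage cannot be grown later. All values are placeholders.
import boto3

rds = boto3.client("rds", region_name="eu-west-1")
rds.create_db_instance(
    DBInstanceIdentifier="myapp-sql",
    Engine="sqlserver-se",
    LicenseModel="license-included",
    DBInstanceClass="db.m4.large",
    MasterUsername="sqladmin",
    MasterUserPassword="ChangeMe12345!",
    AllocatedStorage=500,  # estimated growth plus a healthy percentage on top
)
```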

Find more posts in this series here:
Index of AWS Tips and Gotchas

Amazon AWS Tips and Gotchas – Part 3 – S3, Tags and ASG

Amazon AWS Tips and Gotchas – Part 1 – AWS Intro, EBS and EC2

Although I have been very much aware of AWS for many years and understood it at a high level, I have never had the time to get deep down and dirty with the AWS platform… that is until now!

I have spent the past three weeks immersing myself in AWS via the most excellent ACloud.Guru Solution Architect Associate training course, followed by a one week intensive AWS instructor-led class from QA on AWS SA Associate and Professional.

While the 100 hours or so I have spent labbing and interacting with AWS is certainly not 10,000, it has given me some valuable insights on both how absolutely AWSome (sorry – had to be done!) the platform is, as well as experiencing a few eye openers which I felt were worth sharing.

It would be very easy for me to extoll the virtues of AWS, but I don’t think there would be much benefit to that. Everyone knows it is a great platform (but maybe I’ll do it later anyway)! In the meantime, I thought it would be worthwhile taking a bit more of a “warts and all” view of a few features. Hopefully this will help others avoid stepping into the potential traps which have come up directly or indirectly through my recent training materials, as well as being a memory aid for myself!


The key thing with all of these “gotchas” is that they are not irreparable, and they can generally be worked around by tweaking your infrastructure design. In addition, with the rate at which AWS develop and update features on their platforms, it is likely that many of them will improve over the coming months / years anyway.

The general feeling around many of these “features” is that AWS are indirectly and gently encouraging you to avoid building your solutions on EC2 and other IaaS services, instead pushing you more towards their more managed services such as RDS, Lambda, Elastic Beanstalk etc.

This did originally start off as a single “Top 10” post, but I quickly realised that there are a lot more than 10 items, and some of them are pretty deep dive! As such, I have split the content into easily consumable chunks, with a few lightweight ones to get us started… keep your eyes open for a few whoppers later in the series!

The full list of posts will be available here:
Index of AWS Tips and Gotchas

AWS Tips and Gotchas – Part 1
  1. A single EBS volume may not exceed 20,000 IOPS or 320MB/sec, which caps the storage performance you can present to an instance from any one volume. This is really only something which will impact very significant workloads. The current “recommended” workaround for this is to do some pretty scary things such as in-guest RAID / striping!

    Doing this with RAID0 means you then immediately risk loss of the entire datastore if a single EBS volume in the set goes offline for even a few seconds. Alternatively, you can buy twice as much storage and waste compute resources doing RAID calculations. In addition, you then have to do some really kludgy things to get consistent snapshots from your volume, such as taking your service offline. 
    In reality, only the most extreme workloads hit this kind of scale up. The real answer (which is probably better in the long term) is to refactor your application or database for scale-out, a far more cloudy design.
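
    If you do go down the in-guest striping route above, the AWS-side piece is just provisioning and attaching several volumes; a rough boto3 sketch follows (sizes, IOPS, IDs and device names are placeholders, and the actual RAID0 / striping then happens inside the guest, e.g. with mdadm or Storage Spaces):

```python
# Rough sketch only: attach multiple provisioned-IOPS volumes to one instance (boto3)
# so they can be striped (RAID0) inside the guest. All values are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")
instance_id, az = "i-0123456789abcdef0", "eu-west-1a"

for device in ["/dev/sdf", "/dev/sdg", "/dev/sdh", "/dev/sdi"]:
    vol = ec2.create_volume(AvailabilityZone=az, Size=1000,
                            VolumeType="io1", Iops=20000)
    ec2.get_waiter("volume_available").wait(VolumeIds=[vol["VolumeId"]])
    ec2.attach_volume(VolumeId=vol["VolumeId"], InstanceId=instance_id, Device=device)
# The striping itself (and the snapshot consistency headache) lives in the guest OS.
```
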
  2. The internet gateway service does not provide a native method for capping outbound bandwidth. It doesn’t take a genius to work out that when outbound bandwidth is chargeable, you could walk away with a pretty significant bandwidth bill should something decide to attack your platform with a high volume of traffic. One potential method to work around this would be to use NAT instances. You can then control the bandwidth using third-party software in the NAT instance OS.
  3. There is no SLA for EC2 instances unless you run them across multiple Availability Zones. Of course with typical RTTs of a few milliseconds at most, there is very little reason not to stretch your solutions across multiple AZs. The only time you might keep to a single AZ is if you have highly latency-sensitive applications, or potentially the type of app which requires a serialised string of DB queries to generate a response to the end user.

    In a way I actually quite like this SLA requirement as it pushes customers who might otherwise have accepted the risk of a single DC, into designing something more robust and accepting the (often minor) additional costs. With the use of Auto Scaling and Elastic Load Balancing there is often no reason you can’t have a very highly available application split across two or more AZs, whilst using roughly the same number of servers as a single site solution.

    For example, the following solution would be resilient to a single AZ failure, whilst using no more infrastructure than a typical resilient on-premises single-site solution (diagram: Tekhead AWS simple HA web configuration):
    No DR replication required, no crazy metro clustering setup, nothing; just a cost effective, scalable, highly resilient and simple setup capable of withstanding the loss of an entire data centre (though not a region, obviously).
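
    As a flavour of how little extra work that is, a minimal sketch of a web tier spread across two AZs with Auto Scaling and a classic ELB (boto3; the AMI, subnet IDs, ELB name and group sizes are all placeholders):

```python
# Minimal sketch: web tier across two AZs via Auto Scaling + ELB (boto3).
# AMI, subnets, load balancer name and group sizes are placeholders.
import boto3

asg = boto3.client("autoscaling", region_name="eu-west-1")

asg.create_launch_configuration(
    LaunchConfigurationName="web-lc",
    ImageId="ami-12345678",
    InstanceType="t2.medium",
)
asg.create_auto_scaling_group(
    AutoScalingGroupName="web-asg",
    LaunchConfigurationName="web-lc",
    MinSize=2, MaxSize=4, DesiredCapacity=2,
    VPCZoneIdentifier="subnet-aaaa1111,subnet-bbbb2222",  # one subnet per AZ
    LoadBalancerNames=["web-elb"],
    HealthCheckType="ELB",
    HealthCheckGracePeriod=300,
)
```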

Find more posts in this series here:
Index of AWS Tips and Gotchas

Amazon AWS Tips and Gotchas – Part 2 – AWS EBS & RDS MS SQL

 

Looking Forward to Storage Field Day 9 (#SFD9)

Storage Field Day

So for those of you who love to nerd out on storage like I do, you have probably already heard of the awesome streaming events put on by Stephen Foskett and the crew from Tech Field Day, otherwise known as Storage Field Day. These have grown so popular that Stephen is having to put on extra events just to cater for demand, which I think speaks volumes as to their efficacy and indeed quality!

For those not yet indoctrinated, these events involve taking a group of around a dozen storage and technology delegates to visit a number of different startups (think Pure, NexGen, Coho, etc) and more established companies (think Intel!) to talk about the latest things going on both at those organisations and in the industry in general. Each session lasts a couple of hours, but is generally broken down into several bite sized chunks for consumption at your leisure.

As a stream viewer you get the opportunity to learn about your favourite vendors’ latest funky stuff and watch them answer questions about all the things you probably wanted to know but never got the chance to ask. It is also a great way to get your head around an unfamiliar technology or vendor. Lastly, if you watch live, you can also ask questions via Twitter for the delegates to ask of the presenters.

As a delegate this goes to a whole new level as you get to spend almost an entire week mahoossively geeking out on tech, learning from some of the smartest people in the tech industry, and meeting with the senior people at some of the industry’s best-known companies. I find it generally safest just to wear multiple layers to avoid any embarrassing nerdgasms! 😉

So with that in mind I am really chuffed to have been invited back to attend Storage Field Day 9, next month (16th-18th March) in San Jose!

Not all of the companies have been announced as yet, but we already know that the likes of Cohesity, Intel, VMware & Violin Memory will be in attendance. More will be confirmed over the next couple of weeks, and having seen the provisional list I can tell you it is definitely going to be a great event!


Needless to say the lineup of delegates is awesome as usual, with many well known bloggers from the EU, US and APAC. Make sure you check them out and follow the crew on twitter if you are so inclined. Most delegates post their opinions around the vendors and tech both during and after the event, so make sure you check out their blog feeds. For example, here is mine:

http://www.tekhead.org/blog/feed/

If you want to tune in live, simply go to http://techfieldday.com from 16th-18th March (PST) or catch up with the recordings on youtube later.

Finally, be warned my Twitter stream does get rather busy during the event, so feel free to temporarily mute me if need be! 😉

NanoLab – Part 9 – Installing VMware vSphere ESXi 5.5 on Intel NUC

I successfully ran my VMware vSphere ESXi 5.1 Nanolab for 18 months on my pair of Intel NUC DC3217IYE hosts. Early this year I got around to upgrading to 5.5. I had experienced some issues with my vCenter Server Appliance so ended up just rebuilding the lab from scratch and reattaching my old data stores. Having written all of this up, I then promptly forgot to post it! So for the sake of continuity (before I do the same for 6.0 shortly), this article covers the process.

In addition I also purchased a 3rd node for my lab, the 4th Gen D34010WYKH model (also with a Core i3), with which I was able to test and prove the process, as it uses the same NIC chipset.

The following are updated instructions for installing vSphere 5.5 on Intel NUC (any model with the Intel® 82579V or Intel® I218V onboard NIC should work).

I recommend before you start, you upgrade the NUC to the latest firmware, to avoid any potential bugs (of which there were a few when they were first released). Copy the latest firmware image onto a USB stick, boot the NUC, hit F7 at the BIOS, find your firmware on the USB stick and let it do its thing:

Intel NUC Firmware Upgrade

vSphere 5.5 Install Requirements

  • A USB stick. This should work with anything over 1-2GB, but I am personally using 8GB PNY Micro Sleek Attache & 16GB Kingston DataTraveler Micro drives as they’re tiny, so less likely to catch on anything as they stick out the back of the NUC box, and they cost less than £5 each.
  • A copy of VMware Workstation 8 / Fusion 6 or newer.
  • ESXi-Customizer 2.7.2 (created by Andreas Peetz)
    http://v-front.blogspot.com/p/esxi-customizer.html for adding VIBs to your image. NOTE: This can also be done with PowerShell, but I like the GUI as it’s easy! (http://blogs.vmware.com/vsphere/2012/04/using-the-vsphere-esxi-image-builder-cli.html)
  • The ESXi driver for the Intel® 82579V Gigabit Ethernet Controller (e.g. for the original models using ESXi 5.5):
  • OR The ESXi driver for the Intel® I218V Gigabit Ethernet Controller (e.g. for the Haswell based D34010U models):
  • (AND) The ESXi AHCI driver for the SATA controller (if you want to use local drives in the  Haswell based D34010U models):
    • sata-xahci-1.10-1.x86_64
    • If you do choose to add this in as well to your image, simply run the customiser twice, once for the network VIB, then a second time for the SATA VIB, using the interim image as your source for the final image.

Process Overview

  • Create a customised ISO with the additional Intel driver.
  • Install ESXi to your USB stick using VMware Workstation / VMware Fusion and the customised ISO you will create below.
  • Plug in your NUC, insert the USB stick, boot and go!

Part One – Create the Custom ISO

  1. Run the ESXi-Customizer-v2.7.2.exe (latest version at time of writing).
  2. This will extract the customiser to the directory of your choosing.
  3. Navigate to the new directory.
  4. Run the ESXi-Customizer.cmd batch file. This will open up the GUI, where you can configure the following options:
  • Path to your ESXi Installer
  • Path to the Intel driver downloaded previously
  • Path where you want the new ISO to be saved
  5. Ensure you tick the Create (U)EFI-bootable ISO checkbox.
ESXi-Customizer with 2.3.2 vib

This will output a new custom ESXi installer ISO called ESXi-5.x-Custom.iso or similar, in the path defined above.

Part Two – Install bootable ESXi to the USB stick.
I should stress that this is simply my preferred way of doing this; an alternative is to burn your customised ISO to a CD/DVD and boot using a USB DVD-ROM. That would however be a whole lot slower, and waste a blank CD!

  1. Plug your chosen USB stick into your PC.
  2. Open VMware Workstation (8 or above), VMware Fusion, or whatever you use, ideally supporting the Virtualize Intel VT-x/EPT or AMD-V/RVI option (allowing you to nest 64-bit VMs).
  3. Create a new VM. You can use any spec you like really, as ESXi re-checks the hardware on every boot, but I created one with similar specs to my intended host: single socket, 2 vCPU cores. RAM doesn’t really matter either, but I use at least 4GB normally. This does not require a virtual hard disk.
  4. Once the VM is created, and before you boot it, edit the CPU settings and tick the Virtualize Intel VT-x/EPT or AMD-V/RVI checkbox. This will reduce errors when installing ESXi (which checks to ensure it can virtualise 64-bit operating systems).

VMware Workstation Nesting

VMware Fusion Nesting

  1. Set the CD/DVD (IDE) configuration to Use ISO image file, and point this to the customised ISO created earlier.
  2. Once the above settings have been configured, power on the VM.
  3. As soon as the VM is powered on, in the bottom right of the screen, right click on the flash disk icon, and click Connect (Disconnect from Host).

Attach USB in VMware Workstation

Attach USB in VMware Fusion

  4. This will mount the USB stick inside the VM, and allow you to do a standard ESXi installation onto the stick.
ESXi Install

  5. At the end of the installation, disconnect the stick, un-mount and unplug it.
Install Complete

Part Three – Boot and go!
This is the easy bit, assuming you don’t have any of the HDMI issues I mentioned in the first post!

  1. Plug your newly installed USB stick into the back of the NUC.
  2. Don’t forget to plug in a network cable (duh!) and keyboard for the initial configuration. If you wish to modify any bios settings (optional), you will also ideally need a mouse as the NUC runs Visual BIOS.
  3. Power on the NUC…
  4. Have fun!

That’s it!

Any questions/comments, please feel free to hit me up on twitter as I have recently disabled comments on my blog due to the insane volumes of spam bots they were attracting!
