Tag Archive for AWS

Amazon AWS Tips and Gotchas – Part 8 – AWS EC2 Reserved Instances

Continuing in this series of blog posts taking a bit of a “warts and all” view of a few Amazon AWS features, below are a handful more tips and gotchas when designing and implementing solutions on Amazon AWS, including AWS EC2 Reserved Instances.

For the first post in this series with a bit of background on where it all originated from, see here:
Amazon #AWS Tips and Gotchas – Part 1

For more posts in this series, see here:
Index of AWS Tips and Gotchas

AWS Tips and Gotchas – Part 8

Reserved Instances are a great way to save yourself some money on instances you know you will require for a significant period of time (12-36 months). One really cool fact which AWS don’t announce enough, in my opinion, is that reserved instances can actually be shared across consolidated billing accounts!

If you wanted to, you could purchase all of your reserved instances from your primary consolidated billing account, however, doing this has some potentially unexpected results:

  1. Reserved instances don’t just provide you with a better price, they also provide you with guaranteed ability to spin up an instance of your chosen type, regardless of how busy the AZ in question actually is.
    If there is an AZ outage, other AWS customers will scramble to spin up additional instances in other AZs in the same region, either manually or via ASGs, and this has the potential to exhaust the available compute capacity for one or more instance types!
    Yes, that’s right, even AWS do not have infinite compute resources! By using reserved instances, you are still guaranteed to be able to run yours regardless of the available capacity for on-demand instances. They are truly reserved.
    If, however, you centralise your reserved instances into your consolidated billing account, you will get the reservation pricing benefits at the top of the account tree, but you will not get the capacity reservations, as these are account specific.
  2. Reserved instances are specific to individual Availability Zones, so ensure you spread these evenly across your AZs to avoid wasting them (you are of course designing your apps to be resilient across AZs, right?) and to give you maximum reserved coverage in the unlikely event of a full AZ outage (a quick sketch for checking your spread follows this list).
  3. And finally… Reserved instances are a commercial tool applied after-the-fact, not against a specific instance. When using consolidated billing for reserved instances, the reservations are therefore effectively split evenly across all accounts. If you actually want to report back to each business unit / account owner on their billing, including reserved instances, this could be tricky.
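
If you want a quick sanity check of how your zonal reservations line up against what is actually running in each AZ, something like the following boto3 sketch will do the job. This is a rough illustration rather than a polished tool; the region is a placeholder and it assumes all of your RIs are the AZ-scoped kind discussed above:

```python
# Hedged sketch: compare active zonal Reserved Instances with running
# instances, grouped by (AZ, instance type). Region is a placeholder.
from collections import Counter

import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")

# Count active reservations per (AZ, instance type)
reserved = Counter()
for ri in ec2.describe_reserved_instances(
    Filters=[{"Name": "state", "Values": ["active"]}]
)["ReservedInstances"]:
    az = ri.get("AvailabilityZone")  # only zonal RIs carry an AZ
    if az:
        reserved[(az, ri["InstanceType"])] += ri["InstanceCount"]

# Count running instances per (AZ, instance type)
running = Counter()
paginator = ec2.get_paginator("describe_instances")
for page in paginator.paginate(
    Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
):
    for reservation in page["Reservations"]:
        for instance in reservation["Instances"]:
            key = (instance["Placement"]["AvailabilityZone"], instance["InstanceType"])
            running[key] += 1

for key in sorted(set(reserved) | set(running)):
    az, itype = key
    print(f"{az} {itype}: reserved={reserved[key]} running={running[key]}")
```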

Find more posts in this series here:
Index of AWS Tips and Gotchas


Amazon AWS Tips and Gotchas – Part 7 – AWS EMR, Spot Instances & PGs

Continuing in this series of blog posts taking a bit of a “warts and all” view of a few Amazon AWS features, below are a handful more tips and gotchas when designing and implementing solutions on Amazon AWS, including EMR, Spot Instances and Placement Groups.

For the first post in this series with a bit of background on where it all originated from, see here:
Amazon #AWS Tips and Gotchas – Part 1

For more posts in this series, see here:
Index of AWS Tips and Gotchas

AWS Tips and Gotchas – Part 7

  1. As detailed in the EMR FAQ, EMR does not support a multi-master configuration; there is only one master node per EMR cluster (plus, of course, multiple slaves). If that master node goes offline, you lose your cluster and all data which is being processed at the time. The AWS recommended workaround for this is to checkpoint your EMR cluster regularly, which allows you to resume the cluster from the last checkpoint in the event of a failure.
  2. Spot instances and sticky sessions do not play well together!!! If you use spot instances as a method for providing cheap burst resources, make sure your application is not dependent on sticky sessions.
    If it is, you risk losing user sessions when the spot instances are terminated with only two minutes’ notice.
    There are a couple of mitigation methods for this, the best of which is simply to not use sticky sessions, and store your session data in another system such as ElastiCache or DynamoDB (or both!).
    Alternatively, you could set up a script within the EC2 guest OS to monitor the Spot Instance Termination Notifications (http://169.254.169.254/latest/meta-data/spot/termination-time) and devise a method to cleanly migrate off any remaining sessions from your instance and remove it from the load balancer (a sketch of such a watcher follows this list).
    NOTE: It is best to avoid terminating your spot instances yourself, as AWS will not charge you for the hour in which they terminate your instance, so you can save some budget over shutting your own instances down.
  3. Placement groups were designed specifically for high-bandwidth applications which require low-latency, 10 Gbps connectivity between instances.
    If you do not start all instances in a placement group at the same time, you cannot guarantee that they will end up optimally close to each other later. Indeed, as stated in the placement groups KB, “If you try to add more instances to the placement group later, or if you try to launch more than one instance type in the placement group, you increase your chances of getting an insufficient capacity error”.
    If you do want to add more instances to your placement group later, the best thing to do is stop and then restart all of your instances concurrently (a sketch of launching a full group in one go also follows this list).
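
For the spot termination monitoring mentioned in point 2, here is a minimal sketch of what such a guest-OS watcher could look like. It assumes the unauthenticated instance metadata access that was standard at the time, and the Classic Load Balancer name and session-draining step are placeholders for whatever your application actually needs:

```python
# Hedged sketch of a spot termination watcher, run inside the instance.
import time
import urllib.error
import urllib.request

import boto3

TERMINATION_URL = "http://169.254.169.254/latest/meta-data/spot/termination-time"
INSTANCE_ID_URL = "http://169.254.169.254/latest/meta-data/instance-id"


def termination_pending():
    """The endpoint returns 404 until AWS schedules the instance for termination."""
    try:
        urllib.request.urlopen(TERMINATION_URL, timeout=2)
        return True
    except urllib.error.URLError:
        return False


def drain_and_deregister():
    instance_id = urllib.request.urlopen(INSTANCE_ID_URL, timeout=2).read().decode()
    elb = boto3.client("elb")
    # "my-web-elb" is a placeholder Classic Load Balancer name
    elb.deregister_instances_from_load_balancer(
        LoadBalancerName="my-web-elb",
        Instances=[{"InstanceId": instance_id}],
    )
    # ...migrate or flush any remaining session state here...


if __name__ == "__main__":
    while not termination_pending():
        time.sleep(5)  # the notice arrives roughly two minutes ahead of termination
    drain_and_deregister()
```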
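
And for point 3, the simplest way to get optimal placement is to create the placement group and launch every instance into it in a single request. A rough boto3 sketch, where the AMI ID, instance type, count and group name are all placeholder values:

```python
# Sketch: create a cluster placement group and launch all instances into it
# in one call, so they are placed together from the start.
import boto3

ec2 = boto3.client("ec2")

ec2.create_placement_group(GroupName="my-cluster-pg", Strategy="cluster")

ec2.run_instances(
    ImageId="ami-12345678",      # placeholder AMI
    InstanceType="c4.8xlarge",   # a 10 Gbps-capable instance type
    MinCount=8,                  # launch the whole group in one request
    MaxCount=8,
    Placement={"GroupName": "my-cluster-pg"},
)
```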

Find more posts in this series here:
Index of AWS Tips and Gotchas


Amazon AWS Tips and Gotchas – Part 6 – AWS Dedicated VPCs

Continuing in this series of blog posts taking a bit of a “warts and all” view of a few Amazon AWS features, below are a handful more tips and gotchas when designing and implementing solutions on Amazon AWS, including Dedicated VPCs.

For the first post in this series with a bit of background on where it all originated from, see here:
Amazon #AWS Tips and Gotchas – Part 1

For more posts in this series, see here:
Index of AWS Tips and Gotchas

AWS Tips and Gotchas – Part 6

12. AWS Dedicated VPCs

Just a quick one this week, specifically something to watch out for otherwise you risk running up a scary bill very quickly!

When you create a new VPC, you have the option to create it as Default or Dedicated as per the screenshot below:

[Screenshot: VPC creation dialog – Default vs Dedicated tenancy]

Now here’s the rub… if you select Dedicated, every single EC2 instance launched into that VPC from then on will, by default, be created on dedicated hardware (what AWS call single-tenant hardware, i.e. dedicated physical servers!).

Also note that as per the Dedicated Instances KB article, “You can’t change the instance tenancy of a VPC after you create it”.

In other words, if you find you have created your VPC as a dedicated one, you will have to destroy and re-create everything within that VPC to get it back to default (i.e. multi-tenant/shared compute).
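
If you script your VPC creation, it is worth being explicit about tenancy rather than trusting whatever was last selected in the console. A minimal boto3 sketch (the CIDR block is just an example):

```python
# Sketch: tenancy is set at VPC creation time, so set it deliberately.
import boto3

ec2 = boto3.client("ec2")

# Explicitly keep the default (shared/multi-tenant) tenancy
vpc = ec2.create_vpc(CidrBlock="10.0.0.0/16", InstanceTenancy="default")
vpc_id = vpc["Vpc"]["VpcId"]

# Double-check what you ended up with before building anything inside it
tenancy = ec2.describe_vpcs(VpcIds=[vpc_id])["Vpcs"][0]["InstanceTenancy"]
print(vpc_id, tenancy)  # 'dedicated' means every instance defaults to dedicated hardware
```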

[Image: AWS Dedicated VPCs invoice]

Anyhoo, I said it was just a quick one this week…

Find more posts in this series here:
Index of AWS Tips and Gotchas


Amazon AWS Tips and Gotchas – Part 5 – Managing Multiple VPCs

Continuing in this series of blog posts taking a bit of a “warts and all” view of a few Amazon AWS features, below are a handful more tips and gotchas when designing and implementing solutions on Amazon AWS, based around VPCs and VPC design.

For the first post in this series with a bit of background on where it all originated from, see here:
Amazon #AWS Tips and Gotchas – Part 1

For more posts in this series, see here:
Index of AWS Tips and Gotchas

AWS Tips and Gotchas – Part 5

11. Managing Multiple VPCs & Accounts

Following on from the previous post, let us assume that instead of just talking about public service endpoints (e.g. S3, Glacier, etc.), we are now talking about environments with multiple VPCs, possibly multiple accounts, and potentially Direct Connect on top.


Why would you do this? Well, there are numerous reasons for logically separating things such as your dev/test and production environments from a security and compliance perspective. The question people sometimes get hung up on is: why would I want more than one account? As it happens, some AWS customers run many tens or even hundreds of accounts! Here are a few examples:

  • The simplest answer to this is so that you can avoid being “CodeSpaced” by keeping copies of your data / backups in a second account with separate credentials!
  • Separation of applications which have no direct interaction, or perhaps minimal dependencies, to improve security.
  • Running separate applications for different business units in their own accounts to make for easier LoB billing.
  • Allowing different development teams to securely work on their own applications without risking impact to any other applications or data.
  • With the mergers and acquisitions growth strategy which many companies adopt, it is fairly common these days for companies to be picked up and bring their AWS accounts and resources with them.
  • Lastly, a very common design pattern for compliance is to gather all of your CloudTrail and other audit logs in a single, separate account, inaccessible to anyone except your security team, and therefore secure from tampering (a quick sketch follows this list).
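
As a rough illustration of that last pattern, the trail itself is only a couple of API calls; the fiddly part is the bucket policy in the logging account, which is out of scope here. The trail and bucket names below are placeholders:

```python
# Sketch: send CloudTrail logs to an S3 bucket owned by a separate logging
# account. The bucket must already have a policy allowing CloudTrail to write.
import boto3

cloudtrail = boto3.client("cloudtrail")

cloudtrail.create_trail(
    Name="org-audit-trail",
    S3BucketName="central-audit-logs-bucket",  # bucket lives in the logging account
    IsMultiRegionTrail=True,
    IncludeGlobalServiceEvents=True,
)
cloudtrail.start_logging(Name="org-audit-trail")
```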

The great thing is that with consolidated billing, you can have as many accounts as you like whilst still receiving a single monthly bill for your organisation!

We will now look at a few examples of ways to hang your VPCs and accounts together; in the majority of cases, you can treat the two as effectively interchangeable for the purposes of this post.

Scenario A – Lots of Random VPC Peering and a Services VPC

This option is OK for small solutions but definitely does NOT scale, and is also against best practice recommendations from AWS. As mentioned in the previous post, transitive peering is also not possible unless you are somehow proxying the connections, so if you are looking to add Direct Connect to this configuration, it simply isn’t going to fly.

Imagine that all of the blue dotted arrows in the following diagram were VPC peering connections! Aaaaargh!

[Diagram: Scenario A – multiple VPCs with ad hoc peering to a services VPC]
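
For reference, every one of those dotted arrows boils down to something like the following, and that is before you add the mirror-image routes on the other side and repeat it all for every VPC pair. A hedged boto3 sketch with placeholder VPC, route table and CIDR values:

```python
# Sketch: one single VPC peering connection, requester side. Every extra
# VPC pair in Scenario A needs another one of these, plus matching routes
# on the accepter side. All IDs and CIDRs are placeholders.
import boto3

ec2 = boto3.client("ec2")

peering = ec2.create_vpc_peering_connection(
    VpcId="vpc-aaaa1111",      # requester VPC
    PeerVpcId="vpc-bbbb2222",  # accepter VPC (same account in this sketch)
)
pcx_id = peering["VpcPeeringConnection"]["VpcPeeringConnectionId"]

# Accept the request (possible here because both VPCs are in the same account)
ec2.accept_vpc_peering_connection(VpcPeeringConnectionId=pcx_id)

# Route the accepter's CIDR via the peering connection -- and don't forget
# the mirror-image route in the other VPC's route table(s)
ec2.create_route(
    RouteTableId="rtb-cccc3333",
    DestinationCidrBlock="10.2.0.0/16",
    VpcPeeringConnectionId=pcx_id,
)
```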

Scenario B – Bastion Server in Services VPC

If each of your VPCs is independent and you only need to manage them remotely (i.e. you are not passing significant traffic between many different VPCs, or from AWS to your MPLS network), then a services VPC with a bastion server may be a reasonable option (hub and spoke):

http://docs.aws.amazon.com/AmazonVPC/latest/PeeringGuide/peering-configurations-full-access.html

In this example, you could push a Direct Connect VIF into VPC A and, via your bastion server, manage servers in each of your other VPCs. This would NOT be appropriate if your other servers / clients on premises wanted to access those resources directly, however; it is better suited to the scenario where each VPC hosts some form of production or dev/test platform which is internet facing, and this is effectively your management connection in the back door.

You might also potentially aggregate all of your security logs etc into the bastion VPC.

[Diagram: Scenario B – bastion server in a services VPC]

Scenario C – Full Mesh

This is like a neater version of Scenario A. Holy moly! Can you imagine trying to manage, support or troubleshoot this?

[Diagram: Scenario C – full mesh of VPC peering connections]

Even something as simple as managing your subnets and route tables would become a living, breathing nightmare! Then what happens every time you want to add another VPC? shudder

If you require this level of inter-VPC communication, then my first question would be why are you splitting the workloads across so many dependent VPCs, and where is the business benefit to doing so? Better to look at rationalising your architecture than try to maintain something like this.
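
To put some rough numbers on it, a full mesh needs a peering connection for every VPC pair, so the count grows quadratically:

```python
# Rough arithmetic for why Scenario C hurts: a full mesh of n VPCs needs
# n*(n-1)/2 peering connections, and every VPC's route tables need entries
# for all n-1 peers.
for n in (3, 5, 10, 20):
    peerings = n * (n - 1) // 2
    print(f"{n} VPCs -> {peerings} peering connections, {n - 1} peer routes per VPC")
```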

Scenario D – Lollipop Routing

If you absolutely must allow every VPC to talk to most or even every other VPC, and the quantity of VPCs is significant, then it may be worth looking at something more scalable and easier to manage.

This one is more scalable from a management perspective, but if I am honest, I am not massively keen on it! It feels a bit like AWS absolving themselves of all responsibility when it comes to designing and supporting more complex network configurations. It could potentially also work out rather expensive as you could end up needing a fairly hefty amount of Direct Connect bandwidth to support the potential quantity of traffic at this scale, as well as adding a load of unnecessary latency.

I would prefer that AWS simply allowed some form of auto configured mesh with a simple tag/label assigned to each VPC to allow traffic to route automatically. If only such a technology existed or could be used as a design template!?! (sarcasm mode off – MPLS anyone?)

I am confident that, at the rate AWS are developing new services, providing automation of VPC peering won’t be miles off, as suggested by the word “presently” in the following slide from an AWS presentation available on SlideShare from last July (2015):

[Slide: AWS presentation on VPC peering]

In the meantime, we are left with something that looks a bit like this:

[Diagram: Scenario D – lollipop routing via Direct Connect]

When reaching this kind of scale, there are also a few limitations you want to be aware of:

[Image: relevant VPC peering and Direct Connect limits]

And Finally… NOTE: Direct Connect is per-Region

When you procure a Direct Connect, you are not procuring a connection to “AWS”; you are procuring a connection to a specific region. If you want to be connected to multiple AWS regions, you will need to procure connections to each region individually.

To an extent I can see that this makes some logical sense: if AWS allowed access through one region to the others, and your single connected region had a major issue, you could end up losing access to all regions.

What would be good though would be the ability to connect to two regions, which would then provide you with region resilient access to the entire AWS network of regions. Whether this will become a reality is yet to be seen, but I have heard rumblings that there may be some movement on this in the future.

Wrapping Things Up

As you can see, getting your VPC peering and Direct Connect working appropriately, especially at scale, is a bit of a minefield.

I would suggest that if you are seriously looking at using Direct Connect and need some guidance, you could do worse than have a chat with your ISP, MSP or hosting provider of choice. They can help you to work out the solution which best fits your business’s requirements!

Find more posts in this series here:
Index of AWS Tips and Gotchas
