AWS Certified Solutions Architect Associate Exam Prep & Experience

Historically I have been well aware of AWS and understood the key services at a high level, but recently the platform has become a key strategic focus for my employer, and I was asked to get down and dirty with it. So after about five weeks of steeping myself in the AWS ecosystem, labbing like crazy, and attending a compressed AWS Solutions Architect training course, I finally sat the AWS Certified Solutions Architect Associate exam this week, and I am happy to say I passed!

It has been a pretty intense number of weeks, and my wife has been less than impressed with hardly seeing me for a month, but it has certainly been worthwhile!

TL;DR: Loads of exam resources are coming in the follow-up post. Learn to speed read! The ACloud.Guru and official QA AWS courses are both good. The exam itself was reasonably tricky for an entry-level exam, but not too bad. The list of prep materials is here:
http://tekhead.it/blog/2016/03/aws-certified-solution-architect-associate-exam-study-guide-resources/

AWS Solutions Architect Exam Prep Process

I will post a follow-up list of resources shortly but for now, I will concentrate on the process!

My exam prep and training was largely centred around the ACloud.Guru and official QA AWS Accelerated courses, with a load of additional reading preceding and following them.

I am also a copious note taker and I spend significant amounts of time labbing to make sure that whatever I am designing for a customer, or whatever I am being tested on, I have generally done it at least once! More detail on these in the study materials post.

7 days before the AWS exam

Having spent several weeks labbing, I devoted my final week predominantly to reading through the recommended whitepapers and the AWS FAQ documents, along with a number of articles from the AWS documentation site.

2 days before the AWS exam

I spent this time solidly doing practice questions, reading AWS documentation to fill in any blanks from the practice questions, and reading through my notes from the two courses.

I found the sample exam and practice questions very useful. The same goes for the practice tests in the ACloud.Guru course. Whenever I came across a question I was not 100% confident on, again I hit the AWS documentation site to fill in the blanks.

1 day before the AWS exam

One thing I did the night before the exam was to read through all of my ACloud.Guru notes, specifically concentrating on the “Exam Tips” which Ryan had noted throughout the course, as well as all of the end of section summaries.

Similarly, during the QA course, every time the trainer mentioned something which was a likely exam topic, I made a specific note of it. I took some time to review the list prior to the exam and looked up AWS documentation and articles on the relevant features.


AWS Solutions Architect Exam Experience

The exam itself is under NDA, so I obviously can't go into any detail about the content. Amazon also provide an FAQ about the exam which is worth reading.

The exam centre I used was not one I had used before for Prometric or Pearson Vue exams. It certainly looked the part, very modern etc., but in reality it was actually quite subpar. I was lucky enough to be sitting on the opposite side of a paper-thin wall from a very noisy chap in a meeting room! Fortunately, the exam centre did provide earplugs. I can't say I have ever felt the need to wear earplugs in an exam before, but there's a first time for everything!

I felt the time allocation was reasonable. I finished after roughly 75-80% of my allotted time, which is very similar to a number of other entry-to-mid-level industry exams I have taken in the past.

In terms of difficulty, I would equate the Solutions Architect Associate exam to a reasonably tricky VCP / MCP, but definitely not as hard as a VCAP. I passed reasonably comfortably, but had to think really hard about quite a few of the questions. I was really glad I managed to get a bit of time to read some of the FAQ documents in the days before the exam; they were not originally on my resource list, but turned out to be very good exam prep!

Every time I hit Next there was a very long pause before the next question was displayed. I can only guess the questions are requested on the fly as you progress, as the pause was so long I can't think of any other reasonable explanation! I would guess I lost at least 3-5 minutes over the course of the exam just staring at the next question loading! Not ideal if you are pushed for time, and had I been, I would have found this even more frustrating.

The submit button (which ends the exam) is frankly stupid! It appears on every single page of the exam. Do they believe people are going to answer the first 3 questions and then hit submit?!? This is just asking for trouble, IMHO. The test system vendor they use feels dated / clunky compared to other systems I have used recently, e.g. for Microsoft and VMware exams on Pearson Vue, which are pretty dated in and of themselves!

As this post is now getting rather long I shall end it here and provide a second post with a rather sizable list of my study materials!

In the meantime…

AWS Solution Architect Associate Exam Prep and Experience

 

AWS Certified Solutions Architect Associate Exam Study Guide & Resources

 

Amazon AWS Tips and Gotchas – Part 3 – S3, Tags and ASG

Continuing in this series of blog posts taking a bit of a “warts and all” view of a few Amazon AWS features, below are a handful more tips and gotchas when designing and implementing solutions on Amazon AWS, including AWS S3, Tags / Tagging as well as ASG (Auto-Scaling Groups).

For the first post in this series with a bit of background on where it all originated from, see here:
http://tekhead.it/blog/2016/02/amazon-aws-tips-and-gotchas-part-1/

For more posts in this series, see here:
Index of AWS Tips and Gotchas

AWS Tips And Gotchas – Part 3
  1. Individual S3 buckets are initially soft-limited to around 100 concurrent write transactions per second and 300 reads per second, and only partition further as request volumes grow over time. This sounds like a lot, but when you consider that the average web page probably consists of 30-60 objects, it would not take a huge number of concurrent users hitting an application at the same time of day to start hitting these limits.

    The first recommendation here, especially for read-intensive workloads, is to cache the content from S3 using a service like CloudFront. This immediately means that, for the duration of your object TTL, you would expect each object to be fetched from S3 a maximum of around 50 times (once per global edge location), assuming a global user base, and far fewer than that if all of your users are in a small number of geographic regions.
    Second, do not use sequentially named S3 objects. Assign a random set of characters as a prefix to the start of each key name; this means that, in the background, S3 will shard the data across partitions rather than putting it all in one. This is very effectively explained here:
    http://docs.aws.amazon.com/AmazonS3/latest/dev/request-rate-perf-considerations.html
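
    As a rough illustration of the idea, here is a minimal Python (boto3) sketch that derives a short hash prefix from each key; the bucket and object names are purely hypothetical:

```python
import hashlib

import boto3

s3 = boto3.client("s3")

def hashed_key(original_key: str) -> str:
    # Prepend a few hex characters derived from the key itself, so keys
    # no longer sort sequentially and S3 can spread them across partitions.
    prefix = hashlib.md5(original_key.encode()).hexdigest()[:4]
    return f"{prefix}/{original_key}"

# Hypothetical bucket and object names, purely for illustration.
s3.put_object(
    Bucket="example-content-bucket",
    Key=hashed_key("images/2016/03/diagram.png"),
    Body=b"...",
)
```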

    Third, effectively shard your data across multiple S3 buckets in some logical fashion, ensuring you are also roughly spreading the read and write requests equally between them, thereby increasing your maximum IO linearly with every additional S3 bucket. You would then potentially need some form of service to keep track of where your content lives; a common method for this is to store the S3 object locations in a DynamoDB table for resilient and fast retrieval.
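
    To make that concrete, below is a hedged boto3 sketch: the bucket for each object is chosen by hashing its key across a hypothetical pool of buckets, and the object's location is recorded in an (equally hypothetical) DynamoDB table for later lookup:

```python
import hashlib

import boto3

# Hypothetical bucket pool and index table, purely for illustration.
BUCKETS = [f"content-shard-{i:02d}" for i in range(8)]

s3 = boto3.client("s3")
table = boto3.resource("dynamodb").Table("s3-object-index")

def put_sharded(key: str, body: bytes) -> None:
    # Pick a bucket deterministically from a hash of the key, so reads
    # and writes spread roughly evenly across the pool.
    shard = int(hashlib.md5(key.encode()).hexdigest(), 16) % len(BUCKETS)
    bucket = BUCKETS[shard]
    s3.put_object(Bucket=bucket, Key=key, Body=body)
    # Record where the object lives for resilient and fast retrieval,
    # which also lets you rebalance onto new buckets later.
    table.put_item(Item={"object_key": key, "bucket": bucket})

def get_sharded(key: str) -> bytes:
    item = table.get_item(Key={"object_key": key})["Item"]
    return s3.get_object(Bucket=item["bucket"], Key=key)["Body"].read()
```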

    For extra fast retrieval you could also cache these S3 locations in memory using ElastiCache (Memcached/Redis).
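
    For example, something along these lines with the redis-py client pointed at a hypothetical ElastiCache endpoint, falling back to the DynamoDB index on a cache miss:

```python
import boto3
import redis

# Hypothetical ElastiCache (Redis) endpoint and DynamoDB index table.
cache = redis.Redis(host="example.abc123.use1.cache.amazonaws.com", port=6379)
table = boto3.resource("dynamodb").Table("s3-object-index")

def lookup_bucket(key: str) -> str:
    # Check the in-memory cache first; fall back to DynamoDB on a miss.
    bucket = cache.get(key)
    if bucket is not None:
        return bucket.decode()
    item = table.get_item(Key={"object_key": key})["Item"]
    cache.set(key, item["bucket"], ex=3600)  # cache the location for an hour
    return item["bucket"]
```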
    If you go down this route, and assuming older data is less frequently accessed, I suggest you rebalance your data when new S3 buckets are added; otherwise you risk having hot and cold buckets, which defeats the objective of sharding them in the first place!

    Even better, just start with a decent number of S3 buckets anyway, as the buckets themselves are free; you are only charged for the content stored inside them! This, of course, adds some complexity for management and maintenance, so make sure you account for this in your designs!

    Lastly, use a CDN! That way your object access hit counts will be far lower, and your users will get improved performance from having content delivered from local PoPs! 🙂

  2. If you are using Tags as a method to assign permissions to users, or even to prevent accidental deletion of content or objects (something I am not 100% convinced is bulletproof, but hey!), make sure you then deny the ability for users to modify those tags (duh!).

    For example, if you set a policy which states that any instance tagged with “PROD” may not be deleted without either MFA or elevated permissions, make sure you deny all ability for your users to edit said tags; otherwise they just need to change the tag from PROD to BLAH and they can terminate the instance.
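
    As a sketch of what that deny might look like, here is a hypothetical IAM policy, attached to a made-up group via boto3, which blocks EC2 tag changes outright (in practice you would probably scope it more tightly):

```python
import json

import boto3

# Hypothetical policy: deny creating or deleting EC2 tags, so users
# cannot strip a "PROD" tag and then terminate the instance.
deny_tag_changes = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Deny",
            "Action": ["ec2:CreateTags", "ec2:DeleteTags"],
            "Resource": "*",
        }
    ],
}

iam = boto3.client("iam")
iam.put_group_policy(
    GroupName="standard-users",  # hypothetical group name
    PolicyName="deny-ec2-tag-changes",
    PolicyDocument=json.dumps(deny_tag_changes),
)
```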

  3. This is a configuration point which can cost you a wee chunk of change if you make this error and don’t spot it quickly! When configuring your Auto-Scaling Group make sure the Grace Period is set sufficiently long to ensure your instances have time to start and complete all of their bootstrap scripts.

    If you don’t, the first time you start up your group it will boot an instance, start health checking it, decide the instance has failed, terminate that instance and boot a new one, start health checking it, decide the instance has failed, etc (ad infinitum).

    If your grace period is too low, this could mean spinning up as many as 60 or more instances in an hour, each with a minimum charge of an hour! Instead, work out your estimated Grace Period and consider adding an extra 20% wiggle room. Similarly, if your bootstrap script has a typo in it (as mine did in one test) which causes your health checks to fail, Auto-Scaling will keep terminating and instantiating new instances until you stop it. Make sure you have thoroughly tested your bootstrap script prior to using it in an Auto-Scaling group!

    Update: One last point to highlight here is some sound advice from James Kilby. Be aware, as your environment changes, that a grace period which was sufficient on day one may not be sufficient later on! Don't set and forget this stuff, or you may find you come in one day to a big bill and a load of lost revenue because your site needed to scale up and couldn't!
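
    Helpfully, the grace period is just a single attribute on the group, so it is cheap to revisit as your bootstrap times change. A minimal boto3 sketch, using a hypothetical group name and an estimated 300-second bootstrap plus roughly 20% headroom:

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Hypothetical group name; 300s estimated bootstrap plus ~20% wiggle room.
autoscaling.update_auto_scaling_group(
    AutoScalingGroupName="web-asg",
    HealthCheckGracePeriod=360,
)
```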

Find more posts in this series here:
Index of AWS Tips and Gotchas

Amazon AWS Tips and Gotchas – Part 4 – Direct Connect & Public / Private VIFs

Amazon AWS Tips and Gotchas – Part 2 – AWS EBS & RDS MS SQL

Continuing in this series of blog posts taking a bit of a “warts and all” view of a few Amazon AWS features, below are a handful more tips and gotchas when designing and implementing solutions on Amazon AWS, including EBS and MS SQL on RDS.

For the first post in this series with a bit of background on where it all originated from, see here:
http://tekhead.it/blog/2016/02/amazon-aws-tips-and-gotchas-part-1/

For more posts in this series, see here:
Index of AWS Tips and Gotchas

AWS Tips and Gotchas – Part 2 – EBS & RDS
  1. You cannot increase the size of EBS volumes without stopping the instance. If you are designing a scale-out / high-availability solution then this is not a big issue, as you should be able to take some downtime on any individual node, but that downtime is going to be fairly significant, and the larger the volume, the more downtime you will incur. The actual process looks like this (summarised below, with a rough boto3 sketch after the steps):
    • Stop the instance
    • Snapshot the volume
    • Create a new volume from the snapshot, with your new larger size
    • Detach the old volume
    • Attach the new volume and start the instance back up
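
    Here is that procedure as a hedged boto3 sketch, using hypothetical instance / volume IDs, AZ, and sizes; note you would still need to grow the filesystem inside the guest afterwards:

```python
import boto3

ec2 = boto3.client("ec2")

# Hypothetical IDs, purely for illustration.
INSTANCE_ID = "i-0123456789abcdef0"
OLD_VOLUME_ID = "vol-0123456789abcdef0"

# 1. Stop the instance.
ec2.stop_instances(InstanceIds=[INSTANCE_ID])
ec2.get_waiter("instance_stopped").wait(InstanceIds=[INSTANCE_ID])

# 2. Snapshot the volume.
snap = ec2.create_snapshot(VolumeId=OLD_VOLUME_ID)
ec2.get_waiter("snapshot_completed").wait(SnapshotIds=[snap["SnapshotId"]])

# 3. Create a new, larger volume from the snapshot (same AZ as the instance).
new_vol = ec2.create_volume(
    SnapshotId=snap["SnapshotId"],
    AvailabilityZone="us-east-1a",
    Size=200,  # new, larger size in GiB
)
ec2.get_waiter("volume_available").wait(VolumeIds=[new_vol["VolumeId"]])

# 4. Detach the old volume.
ec2.detach_volume(VolumeId=OLD_VOLUME_ID)

# 5. Attach the new volume and start the instance back up.
ec2.attach_volume(
    VolumeId=new_vol["VolumeId"], InstanceId=INSTANCE_ID, Device="/dev/xvda"
)
ec2.start_instances(InstanceIds=[INSTANCE_ID])
```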

    This is one of those features which is bread and butter for a vSphere or Hyper-V admin, and could be done online in seconds with the vast majority of guest operating systems.

    I think it really highlights the key difference between designing for AWS Cloud, and a traditional enterprise virtual infrastructure. In a solution where most of your hosts are ephemeral, this should not be a big issue. If you try to take a traditional enterprise approach, you may find yourself in hot water, having to take service downtime to make simple changes.

    I suggest that, where possible / appropriate, you avoid using EBS and use alternative options such as S3, which can scale on demand.

    UPDATE 13th Feb 2017: Amazon have just released Elastic Volumes, which allow you to scale up EBS volumes on demand! Yay! More info here:
    Amazon EBS Update – New Elastic Volumes Change Everything
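
    With Elastic Volumes, the whole dance above collapses into a single call. A minimal sketch with a hypothetical volume ID:

```python
import boto3

ec2 = boto3.client("ec2")

# Grow the volume in place; no stop / snapshot / re-attach required.
ec2.modify_volume(VolumeId="vol-0123456789abcdef0", Size=200)
# The filesystem still needs extending inside the guest afterwards,
# e.g. with resize2fs or xfs_growfs.
```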

  2. Similar to resizing EBS volumes, you cannot hot-resize a running instance or change its type in place. For an EBS-backed instance, changing type means stopping the instance, modifying the instance type, and starting it back up; for an instance storage (ephemeral) backed instance, you need to detach any EBS data volumes, terminate the instance, create a new one and re-attach your volumes.
    Obviously you cannot re-attach a root volume if you are using instance storage for it, so make sure you use EBS-backed root volumes if you want to maintain them for any scale-up elements of your solutions which cannot simply be re-created from a bootstrap script.
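
    For the EBS-backed case, the stop / modify / start cycle looks roughly like this in boto3, with a hypothetical instance ID and target type:

```python
import boto3

ec2 = boto3.client("ec2")
instance_id = "i-0123456789abcdef0"  # hypothetical EBS-backed instance

ec2.stop_instances(InstanceIds=[instance_id])
ec2.get_waiter("instance_stopped").wait(InstanceIds=[instance_id])

# Change the instance type while stopped, then start it back up.
ec2.modify_instance_attribute(
    InstanceId=instance_id,
    InstanceType={"Value": "m4.xlarge"},
)
ec2.start_instances(InstanceIds=[instance_id])
```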
  3. If your application depends on Microsoft SQL, you are going to be in for a fairly unpleasant surprise! It is not currently possible to resize MS SQL volumes on Amazon RDS once they have been deployed! At all. Full stop. Nada.

    The recommendation from AWS is to deploy your estimated future capacity requirement from day one! Not very cloudy at all…

    Your only growth option when you hit your initial capacity limit is to migrate all the data to a new RDS instance and take some application downtime to fail over. This can be minimised by using things like log shipping from the source instance to get the target as close to up to date as possible, but you will still need to shut down and swing your applications over, and frankly it is a risky headache which would be better avoided if possible, and certainly not something you want to be doing on a regular basis. It is probably best to design for your estimated growth, and add a percentage on top.

Find more posts in this series here:
Index of AWS Tips and Gotchas

Amazon AWS Tips and Gotchas – Part 3 – S3, Tags and ASG

Domain Migration to http://tekhead.it is Now Complete!


This is just an uber-quick reminder that, as per my previous post, I have now updated the domain for the blog from http://www.tekhead.org to http://tekhead.it.

The changes went live tonight (27/02/2016) around midnight, and all previous blog paths are now 301 redirected to the equivalent address on the new site. Hopefully I won’t lose too much Google juice with the new address!

So if you have any difficulties whatsoever accessing paths or content, I would be very grateful if you could let me know via Twitter!
