AWS Certified Developer Associate (CDA) Exam Experience & Tips

The information below covers my experience of the AWS Certified Developer Associate (CDA) exam from Amazon. Following this, I will post a list of my study materials, so keep checking back for updates or check out my Index of AWS Posts.

Before you continue reading, I should first say that this is my second AWS exam, having completed the AWS Certified Solutions Architect Associate exam earlier this year. As such, the materials I used to study for this exam are sparser, thanks to the level of knowledge I had already built up.

For a fuller picture of all the materials I’ve used over the last 12 months, I recommend checking out the Certified Solutions Architect Associate exam experience and study guides, along with the various tips, tricks and gotchas I have posted over the past few months. I also recently recorded a podcast with Scott Lowe on the subject of learning AWS. If you are new to AWS, I highly recommend you check it out!


Certified Developer Associate Exam Experience

My personal experience of the AWS Certified Developer Associate exam was that it was quite a bit easier than the Solutions Architect Associate exam. I don’t know whether that is because I have been doing (and writing about) quite a bit of AWS work in the months since I passed the CSA, or whether the exam is genuinely easier. Most likely a combination of the two, as many people seem to rank the three associate exams as Developer, Solutions Architect, then SysOps in increasing order of difficulty.

Either way, the exam itself was very reasonable if you have any experience working with AWS. The way AWS seem to structure their exams is with some general questions across the portfolio, plus a handful of specific technologies taking precedence in each. The Developer exam was no different; there is a distinct bias towards DynamoDB, S3, SQS and authentication. All the things AWS developers are likely to use when building distributed, highly scalable applications, of course!

It is worth noting that AWS do not expect you to be a developer to pass the exam; you don’t need to know how to code in any particular language. It would be useful to understand the basic format of JSON, but again this isn’t critical to passing. If you want to work hands-on with any of the AWS tooling in real life, however, it is pretty critical!

The exam itself is 80 minutes and 55 questions. Again, AWS (as is their way) do some odd things like not publishing a passing grade, but it’s generally safe to assume that if you score 70% or more in the Certified Developer Associate exam, you will pass. The Kryterion exam environment is frankly a little poor and dated, but I already wrote about that in the CSA guide here, so I won’t repeat myself! Suffice to say, read the other article for a detailed overview.

There’s not a huge amount of advice I can give regarding the exam itself, other than if you are stuck, go with your gut. Believe it or not, the most obvious answer is often the actual answer! Don’t second-guess yourself and say “No way, it couldn’t be that simple!”

The specific API syntax used by AWS is generally quite logical; however, there are a few weird things! For example, the read calls GetItem and BatchGetItem match each other’s syntax and are logical, but the write equivalents, PutItem and BatchWriteItem, do not! Knowing these little quirks can help you come exam time, so make sure you memorise some of the more common API calls.
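To make that asymmetry concrete, here is a minimal sketch using boto3 against a hypothetical table called MyTable (the table name, keys and attributes are placeholders, not anything from the exam). Note how the batch read simply takes a list of Keys, while the batch write wraps each item in a PutRequest and is named BatchWriteItem rather than BatchPutItem:

```python
import boto3

dynamodb = boto3.client("dynamodb")

# The single-item read and its batch equivalent share the same Key syntax.
dynamodb.get_item(
    TableName="MyTable",
    Key={"Id": {"S": "123"}},
)
dynamodb.batch_get_item(
    RequestItems={"MyTable": {"Keys": [{"Id": {"S": "123"}}]}}
)

# The writes are not symmetrical: PutItem takes an Item directly, but
# BatchWriteItem wraps each item in a PutRequest (and is called "WriteItem"
# because the same call can also carry DeleteRequests).
dynamodb.put_item(
    TableName="MyTable",
    Item={"Id": {"S": "123"}, "Name": {"S": "Alex"}},
)
dynamodb.batch_write_item(
    RequestItems={
        "MyTable": [
            {"PutRequest": {"Item": {"Id": {"S": "123"}, "Name": {"S": "Alex"}}}}
        ]
    }
)
```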

It is also very worthwhile practicing your DynamoDB maths, as AWS expect you to be able to do the read and write capacity unit calculations in your head. Memorising and practicing Ryan’s simple method really helped me to get my head around it.
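For reference, the maths in question is the capacity unit calculation: reads are charged in 4KB chunks (halved for eventually consistent reads), writes in 1KB chunks. The helper functions below are my own illustrative sketch of that arithmetic, not an official formula or tool:

```python
import math

def read_capacity_units(item_size_kb, reads_per_second, eventually_consistent=False):
    # 1 RCU = one strongly consistent read per second of an item up to 4 KB.
    units = math.ceil(item_size_kb / 4) * reads_per_second
    # Eventually consistent reads cost half as much.
    return math.ceil(units / 2) if eventually_consistent else units

def write_capacity_units(item_size_kb, writes_per_second):
    # 1 WCU = one write per second of an item up to 1 KB.
    return math.ceil(item_size_kb / 1) * writes_per_second

# Example: 6 KB items, 10 strongly consistent reads/sec and 10 writes/sec.
print(read_capacity_units(6, 10))   # ceil(6/4) = 2 chunks -> 20 RCUs
print(write_capacity_units(6, 10))  # ceil(6/1) = 6 chunks -> 60 WCUs
```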

Finally, if there is one thing I recommend you read, it’s the DynamoDB FAQ. This is a goldmine of information that will stand you in good stead for both the exam, and developing solutions on AWS!

Best of luck, and if you found this article useful, please leave a comment below! 🙂

Want to Learn More?

For part 2 of this article, the AWS Certified Developer Associate exam study guide and materials, see here:

Index of Tekhead.it Blog Posts on Amazon AWS


Tech Field Day 12 (TFD12) – Preview

For those people who haven’t heard of Tech Field Day, it’s an awesome event run by the inimitable Stephen Foskett. The event enables tech vendors and real engineers / architects / bloggers (aka delegates) to sit down and have a conversation about their latest products, along with technology and industry trends.

Ever been reading up on a vendor’s website about their technology and had some questions they didn’t answer? One of the roles of the TFD delegates is to ask the questions which help viewers understand the technology. If you tune in live, you can also post questions via Twitter to the delegates, who will happily ask them on your behalf!

As a delegate it’s an awesome experience as you get to spend several days visiting some of the biggest and newest companies in the industry, nerding out with like-minded individuals, and learning as much from the other delegates as you do from the vendors!

So with this in mind, I am very pleased to say that I will be joining the TFD crew for the third time in San Jose, for Tech Field Day 12, from the 15th-16th of November!

Tech Field Day 12 (TFD12) Vendors

As you can see from the list of vendors, there are some truly awesome sessions coming up! Having previously visited Intel and Cohesity, as well as written about StorageOS, it will be great to catch up with them and find out about their latest innovations. DellEMC are going through some massive changes at the moment, so their session should be fascinating. Finally, I haven’t yet had the pleasure of visiting Rubrik, DriveScale or Igneous, so those sessions should be very interesting indeed!

That said, if there was one vendor I am probably most looking forward to visiting at Tech Field Day 12, it’s Docker! Container adoption is totally changing the way that developers architect and deploy software, and I speak to customers regularly who are now beginning to implement them in anger. It will definitely be interesting to find out about their latest developments.

If you want to tune in live to the sessions, see the following link:
Tech Field Day 12

If for any reason you can’t make it live, have no fear! All of the videos are posted on YouTube and Vimeo within a day or so of the event.

Finally, if you can’t wait for November, pass the time by catching some of the fun and highlights from the last event I attended:

Storage Field Day 9 – Behind the Curtain


Amazon AWS Tips and Gotchas – Part 10 – EFS (Elastic File System)

Continuing this series of blog posts taking a bit of a “warts and all” view of a few Amazon AWS features, below are a handful more tips and gotchas for designing and implementing solutions on Amazon AWS. This week, we talk about one of the newest AWS services: EFS (aka Elastic File System).

For the first post in this series with a bit of background on where it all originated from, see here:
Amazon #AWS Tips and Gotchas – Part 1

For more posts in this series, see here:
Index of AWS Tips and Gotchas

20. Amazon AWS Tips and Gotchas – Part 10 – EFS (Elastic File System)

A big challenge when designing highly available web infrastructures has historically been how to provide a centralised store for static content without wasting resources.

A classic model for this is a pair of web / file servers using either rsync or Gluster to replicate the content between them. In the Windows world, this would be something like a WSFC (failover cluster) or perhaps something evil like a DFS replicated share. This means that not only are you wasting money on multiple virtual machines / instances just to serve file content, but you are also adding significant risk and complexity in the replication and failover between those machines.

Enter AWS EFS! At a simple level, EFS is basically an NFS (v4.1) share within the AWS cloud, replicated across all AZs in a region. No need to manage and replicate file shares between instances, or indeed to pay for EC2 instances just to serve them. Great!
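If you want to poke at it programmatically, a minimal boto3 sketch might look something like the following (the subnet and security group IDs are placeholders, and in practice you would then mount the resulting target over NFS from your instances):

```python
import boto3

efs = boto3.client("efs")

# Create the file system; the creation token just makes the call idempotent.
fs = efs.create_file_system(CreationToken="my-static-content")
fs_id = fs["FileSystemId"]

# Expose it in one (or more) subnets so EC2 instances can mount it over NFS.
# (You may need to wait for the file system to reach the 'available' state first.)
efs.create_mount_target(
    FileSystemId=fs_id,
    SubnetId="subnet-12345678",        # placeholder
    SecurityGroups=["sg-12345678"],    # placeholder - must allow inbound TCP 2049
)

# The details needed to mount come back from describe_mount_targets.
print(efs.describe_mount_targets(FileSystemId=fs_id))
```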

As this is still a relatively immature product, there are a few “features” to be aware of:

  1. There is no native EFS backup solution (yet!). I’m sure this will come very soon; with re:Invent coming up, it wouldn’t surprise me if something were announced then. In the meantime, your main options are either to use Data Pipeline to back up to another EFS store, or to mount EFS on an EC2 instance and back it up using your own tools or scripts. I would be concerned about backing up EFS to EFS in the same region, as this is putting all your eggs in one basket. Hopefully, AWS will provide other target options in the future.
  2. There is no native encryption of EFS data at rest as yet. If you need this right now, you could achieve it by pre-encrypting the data in your application before it is written to EFS (see the sketch after this list). Alternatively, just hold your breath, as AWS have already stated that:
    “Amazon EFS does not currently provide the option to encrypt data at rest, but we will offer this option soon.”
  3. If you have less than about 100GB, then due to the way the performance burst credits work, you may not get the performance you need. The more you provision, the more performance you get, so don’t short-change your app for the sake of a few dollars!

    “Amazon EFS uses a credit system to determine when file systems can burst. Each file system earns credits over time at a baseline rate that is determined by the size of the file system, and uses credits whenever it reads or writes data.”

    In early testing, very small file systems have been seen to suffer IO starvation and performance issues. I would recommend you start with 100GB as a minimum (subject to your workload requirements, of course!). At roughly $0.30 per GB-month, that is still pretty cheap at only about $30-33 a month; a lot less than even a pair of EC2 instances, never mind the complexity reduction benefits. KISS!

    Of course, the more caching you can do on that content, e.g. using CloudFront as a CDN, the lower the IO requirements on your EFS store.

    For more info on performance see here:
    Amazon EFS Performance


  4. And finally… being NFS based, this is obviously primarily aimed at Linux solutions. It would be nice to think that AWS will release an SMB version in the future… we can but hope!
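As a rough illustration of the pre-encryption workaround mentioned in point 2 above, here is a minimal sketch assuming the third-party cryptography package and an EFS share already mounted at /mnt/efs (both of which are my assumptions, not part of EFS itself):

```python
from cryptography.fernet import Fernet  # pip install cryptography

# In reality you would fetch this key from KMS or your own secrets store,
# not generate it inline on every run.
key = Fernet.generate_key()
fernet = Fernet(key)

plaintext = b"Sensitive static content destined for EFS"
ciphertext = fernet.encrypt(plaintext)

# EFS only ever sees opaque encrypted bytes; the path assumes the share
# is mounted at /mnt/efs (placeholder).
with open("/mnt/efs/content.enc", "wb") as f:
    f.write(ciphertext)

# Reading the content back is simply the reverse operation.
with open("/mnt/efs/content.enc", "rb") as f:
    restored = fernet.decrypt(f.read())
assert restored == plaintext
```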

Thanks to my learned colleague Tom Ellis for the sizing tip! As he says, “The size needs to be determined by the throughput needs, and not the storage capacity needs.”

Find more posts in this series here:
Index of AWS Tips and Gotchas


Amazon AWS Tips and Gotchas – Part 9 – Scale-Up Patching

Continuing in this series of blog posts taking a bit of a “warts and all” view of a few Amazon AWS features, below is another tip for designing and implementing solutions on Amazon AWS. In this case, Scale-Up Patching of Auto-Scaling Groups (ASGs) and a couple of wee bonuses about Dark Launch techniques.

For the first post in this series with a bit of background on where it all originated from, see here:
Amazon #AWS Tips and Gotchas – Part 1

For more posts in this series, see here:
Index of AWS Tips and Gotchas

19. AWS Tips and Gotchas – Part 9 – Scale-Up Patching in ASGs

Very quick tip on Auto Scaling Groups this week, courtesy of an awesome session on DevOps that I attended at the AWS User Group UK (London) last week, presented by Chris Turvil from The Trainline.

Assuming you just need to do a code release to an existing farm of servers running in an ASG, and you aren’t planning anything more complex such as a DB schema update, you can use a technique called “Scale-Up Patching”. I hadn’t heard the term before, but it’s incredibly simple and very effective! There are a couple of methods you might use, depending on how you deliver your code, but the technique is the same: make your new code or image live, double the minimum size of your ASG, then halve it again. Job done!

So how does this work?

If you have looked into the detail of ASGs, you will know that (assuming your instances are spread roughly evenly over multiple AZs) when an ASG scales down, the default termination policy kills off the instances with the oldest launch configuration first. For more detail on the exact rules, see here.

If you double the current size of the group, all of the new instances will be deployed with your new code version, leaving you with a farm of 50% vOld and 50% vNew. When you then tell the ASG to scale back to its original size, it will kill off all of the vOld instances, leaving your entire farm upgraded. If you find an issue and have to roll back, you simply rinse and repeat the same exercise! How brilliant is that?!

This process works exactly the same regardless of whether you deploy your code via an updated AMI each time, or post-boot using a user-data script which pulls your source from a bucket, repo, or similar. Either way, the result is the same and infinitely repeatable!
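As a rough sketch of what scale-up patching looks like via boto3 (the group name is a placeholder, and it assumes the launch configuration has already been updated to point at the new AMI or code; this is illustrative rather than a production-ready script):

```python
import boto3

autoscaling = boto3.client("autoscaling")
GROUP = "my-web-asg"  # placeholder name

# Look up the current sizing of the group.
group = autoscaling.describe_auto_scaling_groups(
    AutoScalingGroupNames=[GROUP]
)["AutoScalingGroups"][0]
original_min = group["MinSize"]
original_desired = group["DesiredCapacity"]

# Step 1: double the group so the new instances launch with the new code/AMI.
autoscaling.update_auto_scaling_group(
    AutoScalingGroupName=GROUP,
    MinSize=original_min * 2,
    DesiredCapacity=original_desired * 2,
    MaxSize=max(group["MaxSize"], original_desired * 2),
)

# ... wait here for the new instances to become healthy / InService ...

# Step 2: scale back down; the default termination policy removes the
# instances with the oldest launch configuration, i.e. the old version.
autoscaling.update_auto_scaling_group(
    AutoScalingGroupName=GROUP,
    MinSize=original_min,
    DesiredCapacity=original_desired,
)
```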

The one counter to this, which a colleague of mine raised, is that you are explicitly depending on a specific AWS behaviour always functioning in the same way and never changing. An alternative is to deploy in a blue-green setup with independent ELBs and instances, then fail over using Route53, either all in one go or using weighted routing for a canary release process. Funnily enough, AWS released a whitepaper on exactly that subject a couple of months ago:
Blue/Green Deployments on AWS Whitepaper

They also cover the scale-up patching method in detail from page 17 of the whitepaper.
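For comparison, a weighted Route53 canary shift might look roughly like this in boto3 (the hosted zone ID, record name and ELB DNS names are all placeholders; you would re-run it with adjusted weights to move traffic gradually from blue to green):

```python
import boto3

route53 = boto3.client("route53")

def set_weight(identifier, target_dns, weight):
    # UPSERT a weighted CNAME; records sharing a name/type but with different
    # SetIdentifiers split traffic in proportion to their weights.
    route53.change_resource_record_sets(
        HostedZoneId="Z1234567890",  # placeholder hosted zone
        ChangeBatch={
            "Changes": [{
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "www.example.com",
                    "Type": "CNAME",
                    "SetIdentifier": identifier,
                    "Weight": weight,
                    "TTL": 60,
                    "ResourceRecords": [{"Value": target_dns}],
                },
            }]
        },
    )

# Start the canary: 90% of lookups go to blue, 10% to green.
set_weight("blue", "blue-elb.example.com", 90)
set_weight("green", "green-elb.example.com", 10)
```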

Brucie Bonus One – Deployment Dictionary

Incidentally, you can deploy said code without it going live immediately, using methods known as “Dark Launch Techniques”. As the name suggests, this separates code deployment from feature launch. You pre-release your code into production, but simply don’t toggle it on for anyone (or everyone) at first. You can then either toggle it on for everyone or, even better, for smaller canary groups. Web-scale companies such as Netflix, Facebook and Google have been doing this for years!

This completely avoids the panic-inducing impact of deploying a large new code release whilst simultaneously having that code go live and ramp up utilisation.
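A dark launch can be as simple as a feature flag with a percentage rollout. The sketch below is purely my own illustrative example (the flag store, flag name and user IDs are made up), not any specific tool:

```python
import hashlib

# Feature flags, e.g. loaded from config or a service such as DynamoDB/Consul.
FLAGS = {"new_checkout_flow": {"enabled": True, "rollout_percent": 5}}

def feature_enabled(flag_name, user_id):
    """Deterministically enable a flag for a percentage of users (the canary group)."""
    flag = FLAGS.get(flag_name, {})
    if not flag.get("enabled"):
        return False
    # Hash the user ID so the same user always gets the same decision.
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100
    return bucket < flag["rollout_percent"]

# The new code path is deployed but stays dark for ~95% of users.
if feature_enabled("new_checkout_flow", user_id="user-42"):
    pass  # hypothetical new code path
else:
    pass  # existing behaviour
```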


Combining dark launch methods with scale-up patching or blue/green deployments should lead to a few less grey hairs in the long run, that’s for sure!

For more info, see the following overview:
What is a dark launch in terms of continuous delivery of software?

Brucie Bonus Two – Environment Manager

Lastly, a bit of interesting news which also came from The Trainline: they have open-sourced their internal deployment tool, which they call Environment Manager.

With an AngularJS front end, and a Node.js back end, it’s a home-grown continuous deployment tool which includes a self-service portal, REST APIs, and a number of operational governance features. The governance elements include a feature which prevents rogue developers deploying anything which hasn’t already been defined in the central service catalogue.

The Trainline Environment Manager Architecture

You can check out Environment Manager on GitHub:
https://trainline.github.io/environment-manager

Want More AWS Tips and Gotchas?

Find more posts in this series here:
Index of AWS Tips and Gotchas

Amazon AWS Tips and Gotchas – Part 10 – EFS (Elastic File System)
