Extending ECS Auto-scaling for under $2/month with Lambda

The Problem

Amazon Web Services (AWS) is pretty cool. You ought to know that by now. If you don’t, take a few hours and check out some tutorials and play around.

One of the many services AWS provides is the EC2 Container Service (ECS), where the scheduling and lifecycle management of running Docker containers is handled by the ECS control plane (probably magic cooked up in Seattle over coffee or in Dublin over a pint or seven).

You can read all about its launch here.

One missing feature from the ECS offering in comparison to other container schedulers was the concept of scheduling a service to be run on each host in a cluster, such as a logging or monitoring agent.
This feature allows clusters to grow or shrink and still have the correct services running on each node.

A published workaround was to have each node individually run an instance of the defined task on startup, which works pretty well.
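
In practice, that means a small script run from the instance’s user-data at boot: ask the local ECS agent’s introspection endpoint which cluster and container instance this host is, then call StartTask against it. Here’s a rough sketch of that idea in Python with boto3 – the task definition name is a placeholder, and error handling is omitted:

import json
import urllib.request
import boto3

# Ask the ECS agent running on this host which cluster/container instance we are
meta = json.loads(
    urllib.request.urlopen("http://localhost:51678/v1/metadata").read()
)

# Start one copy of the task on this specific container instance
ecs = boto3.client("ecs")
ecs.start_task(
    cluster=meta["Cluster"],
    taskDefinition="my-per-host-task",  # placeholder task definition name
    containerInstances=[meta["ContainerInstanceArn"]],
)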

The downside here is that if a task definition changes, ECS has no way of triggering an update to these already-running tasks – a normal service will stop and then start its tasks with the new definition, and use deployment logic to maintain some degree of uptime.
To pick up the update, one must terminate/replace the entire ECS Container Instance (the EC2 host) and, if you’re using Auto Scaling Groups, let a fresh node come up with the updated task.

Other Solutions

  • Docker Swarm calls this a global service, and will run one instance of the service on every node.
  • Mesos’ Marathon doesn’t support this yet; there’s a deep discussion on GitHub about how to implement it in their constraints syntax.
  • Kubernetes has a DaemonSet to run a pod on each node.
  • The recently-released ECS-focused Blox provides a daemon-scheduler to accomplish this, but brings along extra components to accomplish the scheduling.

Back to ECS

So imagine my excitement when the ECS team announced the release of their new Task Placement Strategies last week, offering a “One Task Per Host” strategy as part of the Service declaration.
This indeed is awesome and works as advertised, with no extra components, installs, schedulers, etc.

However! Currently each Service requires a “Desired Count” parameter specifying how many instances of the service you want to run in the cluster.

Given a cluster with 5 ECS Container Instance hosts, setting the Desired Count to 5 ensures that one runs on each host, provided there are resources available (CPU, RAM, an available port).

If the cluster grows to 6 (autoscaling, manually adding, etc.), there’s nothing in the Service definition that will increase the desired count to 6, so this solution is actually worse than our previous mode of using user-data to run the task at startup.

One approach is to arbitrarily raise the Desired Count to a very high number, such as 100 for this cluster, on the assumption that we are unlikely to grow the cluster to that size without realizing it.
The scheduler will periodically examine the cluster for placement, and handle any hosts missing the service.

The problem with this is that it isn’t deterministic: CloudWatch metrics will report the unplaced tasks as Pending, and I have alarms that notify me when tasks aren’t placed in clusters, since that can point to a resource allocation mismatch.

Enter The Players

To accomplish an automated service desired count, we must use some elements to “glue” a few of the systems together with our custom logic.

Here’s a sequence diagram of the conceptual flow between the components.

UML Sequence Flow

Every time there is a change in an ECS Cluster, CloudWatch Events will receive a payload.
Based on a rule we craft to select events classified as “Container Instance State Change”, CW Events will emit an event to the target of your choice, in our case, Lambda.

We could feasibly use a cron-like schedule to fire this every N minutes to inspect, evaluate, and remediate a semi-static set of services/clusters, but having a system that is reactive to change feels preferable to poll/test/repair.

A simple rule that captures all Container Instance changes:

{
  "source": [
    "aws.ecs"
  ],
  "detail-type": [
    "ECS Container Instance State Change"
  ]
}

You can restrict this to specific clusters by adding the cluster ARNs under a detail key, like so:

  "detail": {
    "clusterArn": [
      "arn:aws:ecs:us-east-1:123456789012:cluster/my-specific-cluster",
      "arn:aws:ecs:us-east-1:123456789012:cluster/another-cluster"
    ]
  }

If throttling or cost is a concern here, you may wish to filter to a set of known clusters, but this reduces how reactive the logic is to new clusters being brought online.

The Actual Logic

The Lambda function receives the event, performs some basic validation checks to ensure it has enough details to proceed, and then makes a single API call to the ECS endpoint to find our specified service in the cluster that fired the change event.

If no such service is found, we stop there and move on.

If the cluster does indeed have this service defined, then we perform another API call to describe the count of registered container instances, and compare that with the value we already have from the service definition call.

If there’s a mismatch, we perform a final third API call to adjust the service definition’s desired task count.

All in all, that’s a maximum of three API calls, usually completing in under 300ms.
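
For illustration, a stripped-down version of that handler in Python with boto3 might look like the following – the service name comes from a hypothetical SERVICE_NAME environment variable, and error handling is left out (the real code, linked in the conclusion, is more careful):

import os
import boto3

ecs = boto3.client("ecs")
# Hypothetical: the per-host service we want to keep sized to the cluster
SERVICE_NAME = os.environ.get("SERVICE_NAME", "per-host-agent")

def handler(event, context):
    # Basic validation: we only care about events that name a cluster
    cluster_arn = event.get("detail", {}).get("clusterArn")
    if not cluster_arn:
        return "ignored: no clusterArn in event"
    # Call 1: is our service defined in the cluster that changed?
    found = ecs.describe_services(cluster=cluster_arn, services=[SERVICE_NAME])["services"]
    if not found:
        return "ignored: service not defined in this cluster"
    desired = found[0]["desiredCount"]
    # Call 2: how many container instances are registered right now?
    cluster = ecs.describe_clusters(clusters=[cluster_arn])["clusters"][0]
    instances = cluster["registeredContainerInstancesCount"]
    # Call 3 (only when needed): align the desired count with the instance count
    if desired != instances:
        ecs.update_service(cluster=cluster_arn, service=SERVICE_NAME, desiredCount=instances)
        return "updated desiredCount to %d" % instances
    return "no change needed"

Wire the CloudWatch Events rule above to this function as its target, and the desired count follows the cluster as it grows and shrinks.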

In my environment, I want this logic to apply to every cluster in my account, since the function inspects each cluster to see whether it has the service defined before acting on it.
By my ballpark figures, with a set of 10 active clusters, the cost for running this logic should be under $2/month – yes, two dollars a month to ensure your clusters have the correct number of tasks for a given service.
Do your own estimation with the Lambda Pricing Calculator.

Conclusions

The code can be found on GitHub, and was developed with a test-everything philosophy, where I spent a large amount of time learning how to actually write the code and tests elegantly.
Writing out all of the tests and sequences allowed me to find multiple points of refactoring and increased efficiency from my first implementation, leading to a much cleaner solution.
Taking on a project like this is a great way to increase one’s own technical prowess, which then carries over into reasoning about other problems.

While I strongly believe that this feature should be part of the ECS platform and not require any client-side intervention, the ability to take the current offerings and extend them via mechanisms such as Events, Lambda and API calls further demonstrates the flexibility and extensibility of the AWS ecosystem.
The feature launched just over a week ago, and I’ve been able to put together an acceptable solution on my own, using the documentation, tooling, and infrastructure while minimizing costs and making my system more reactive to change.

I look forward to what else the ECS, Lambda and CloudWatch Events team cook up in the future!

Setting Up a Datadog-to-AWS Integration

When approaching a new service provider, it can sometimes be confusing to figure out how best to get set up to communicate with them – some processes involve multiple steps, multiple interfaces, and confusing terminology.

Amazon Web Services is an amazing cloud services provider, and in order to allow access to informational services inside a customer’s account, a couple of known mechanisms exist to delegate access:

  • Account Keys, where you generate a key and secret and share them. The other party stores these (usually in either clear text or using reversible encryption) and uses them as needed to make API calls
  • Role Delegation, where you create a Role and shared secret to provide to the external service provider, who then is allowed to use their own internal security credentials to request temporary access to your account’s resources via API calls

In the former model, the keys are exchanged once, and once out of your immediate domain, you have little idea what happens to them.
In the latter, a rule is put into place that requires ongoing authenticated access to request assumption of a known role with a shared secret.
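
Under the hood, the provider’s side of that exchange is a single STS AssumeRole call presenting the shared secret as an external ID. A minimal sketch of what that call looks like with boto3, using placeholder values throughout:

import boto3

sts = boto3.client("sts")
# Placeholders: the role you created and the shared secret you exchanged
response = sts.assume_role(
    RoleArn="arn:aws:iam::123456789012:role/external-provider-access",
    RoleSessionName="provider-metrics-collection",
    ExternalId="the-shared-secret",
)

# Temporary credentials, limited by the role's IAM policy and an expiry time
creds = response["Credentials"]
cloudwatch = boto3.client(
    "cloudwatch",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)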

Luckily, in both scenarios, a restrictive IAM Policy is in place that allows only the actions you’ve decided to allow ahead of time.

Setting up the desired access is made simpler by having good documentation on how to do this manually. In this modern era, we likely want to keep our infrastructure as code where possible, as well as have a mechanism to apply the rules and test later if they are still valid.

Here’s a quick example I cooked up using Terraform, a popular new tool for composing cloud infrastructure as code and executing it to create the desired state.
[gist https://gist.github.com/miketheman/72197ec28bd527137e196054b3ab6dec#file-datadog-role-delegation-tf /]

The output should look a lot like this:

[gist https://gist.github.com/miketheman/72197ec28bd527137e196054b3ab6dec#file-output-sh-session /]

The Account ID in the output is actually a full ARN, and you can copy your Account ID from it.
Terraform doesn’t have a mechanism to emit only the Account ID yet – so if you have some ideas, contribute!
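
In the meantime, the account ID is the fifth colon-separated field of any ARN, so pulling it out is a one-liner – for example (the role name here is a placeholder):

role_arn = "arn:aws:iam::123456789012:role/datadog-integration"  # copied from the Terraform output
account_id = role_arn.split(":")[4]
print(account_id)  # => 123456789012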

Use the Account ID, Role Name and External ID and paste those into the Datadog Integrations dialog, after selecting Role Delegation. This will immediately validate that the permissions are correct, and return an error otherwise.

Don’t forget to click “Install Integration” when you’re done (it’s at the very bottom of the screen).

Now metrics and events will be collected by Datadog from any allowed AWS services, and you can keep this setup instruction in any revision system of your choice.

P.S. I tried to set this up via CloudFormation (Sparkleformation, too!). I ended up writing it “freehand” and it took more than 3 times as long to get similar functionality.

You can see the CloudFormation Stack here, and decide which works for you.


Fixing unintended consequences of the past

In the age of technology, everyone races forward to get the win. Anything that can provide you the competitive edge is considered important.
This is especially true in the realm of web media, where optimizing page load times, providing secure transport, and adhering to standards can make a difference in how a site is handled by client browsers, ranked by search engines, and most importantly how it is seen by viewers.

To this end, there are many sites, services and companies that will provide methods to audit a site and point out what could be problematic – count broken links, produce reports of actionable corrections, and more.
Some are better than others, and occasionally, you’ll come across something you’ve never seen before.

Recently, I was pinged about pages on a site that is hosted on an Amazon Simple Storage Service (S3) website-enabled bucket.
Since S3 is an object store only, this means that the pages in this site are statically generated and there is no associated web server, backend database, or other components to serve the pages.

This model is becoming more common for sites that can be simplified to run without dynamically loading data from a database, can withstand heavy bursts of requests, and can run cheaply (there’s even a free tier, beyond which pricing still remains affordable).

The idea is that you create your content in one format, run a compiler process to generate all the rendered files containing the links and content, and then upload the compiled files to the S3 location to be requested by browsers. There are many guides on the web on how to do this – I’m not going to link to any now, search and ye shall find.

This particular site had been deployed since 2011 – and the mechanism to copy compiled files to S3 has been using the popular open source command line tool s3cmd – deployment basically looked like this (and still does!):

 s3cmd sync output/ s3://www.mysite.com

where output/ contains the compiled files, ready for deployment.

This has worked very well for over 4 years – until it came to my attention that when uploading to S3, the s3cmd tool was adding some metadata to each file as it uploaded it, as part of the design to support website hosting on S3.

For instance, when uploading a .css file to S3, s3cmd attempts to determine extra details about the file, and set the correct metadata for browsers to understand, such as Content-Type: text/css.
This is a critical function, as it would be tedious to determine each file’s content type and set it manually across many files.
You can read more about content media types on Wikipedia.

Since this project was set up a long time ago, the version of s3cmd used was still in alpha stage – it was used because it performed well enough, and nothing broke, so we were happy to keep running the same version since early 2013.

The problem reported to me was that many files on the site were returning an invalid Content-Encoding value, something that typically hasn’t been a problem, as the client’s browser will send an Accept-Encoding header when making a request, typically something along the lines of:

Browser: Hi there! Can I have this resource, and I'll accept a response encoded in the following formats: a, b, or c
Server: No problem! Here's the resource you're looking for, with a content encoded in XYZZY

Now, the XYZZY in this example was being set by the s3cmd upload process; this was determined to be a bug and fixed in late 2013, but since we never knew about the problem and the site loads just fine, we never addressed it.
There have been even more stability fixes and releases of s3cmd since – as recently as February 2015.

The particular invalid encodings being set were UTF-8 and ANSI_X3.4-1968. While these are valid encodings for files, they are invalid values for the Content-Encoding field.

Here’s an example of how to show the headers of a particular remote file:

$ curl -sI http://www.mysite.com/static/css/style.css | grep Content
Content-Encoding: ANSI_X3.4-1968
Content-Type: text/css
Content-Length: 7073

Many modern browsers will send something along the lines of ‘Accept-Encoding: gzip, deflate, sdch‘ in their request header, in hopes that the server can respond with one that matches, and then save on overall bytes sent over the wire, to speed up pages.

It’s the responsibility of the client (browser) to handle the response. I looked into the source code of Chromium (the basis for Google Chrome), and can see from here that in my example above, a Content-Encoding type of XYZZY will pretty much be ignored, which in this case is fine, since we’re sending an invalid type.

So if there’s no direct user impact, why should we care? Well, according to some popular ranking engines:

Using non-HTML content types for landing pages results in significantly reduced SEO ranking.

So all of this is fine, cool – update the s3cmd tool to a newer version, and upload the output files again? Well, it’s not that simple.

Since, during a sync operation, s3cmd determines which files might have changed and only uploads those, it never resets the metadata on unchanged objects – that would mean writing a new object, and the files themselves haven’t changed.

One solution might be to edit every file, add an extra space somewhere – maybe an extra blank line at the end – then compile and deploy the changed files. However, this might take too long.

Instead, I decided to solve the problem by iterating over every object in the bucket, checking to see if it had the incorrect Content-Encoding set, and creating a new copy of the file without the header set.

This was pretty straightforward once I understood the concept of object immutability – once written, an object can’t be changed; rather, what feels like a change from a user interface actually creates a new version of the object with the new settings/metadata.

I also didn’t want to have to download each file locally and then upload it back to S3 – that is a slow operation, and could result in extra network traffic and disk space consumption.

Instead, I used the AWS SDK for Ruby gem and came up with a short-and-sweet solution.
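
The approach boils down to walking the bucket, checking each object’s Content-Encoding, and copying the offenders onto themselves with replaced metadata. Here’s a rough sketch of that idea, shown with boto3 in Python for illustration (the bucket name is a placeholder):

import boto3

BAD_ENCODINGS = ("UTF-8", "ANSI_X3.4-1968")
bucket = boto3.resource("s3").Bucket("www.mysite.com")  # placeholder bucket name

for summary in bucket.objects.all():
    obj = summary.Object()
    if obj.content_encoding in BAD_ENCODINGS:
        # Copying the object onto itself with a REPLACE metadata directive
        # writes a new copy that simply omits the Content-Encoding header.
        obj.copy_from(
            CopySource={"Bucket": bucket.name, "Key": obj.key},
            ContentType=obj.content_type,
            Metadata=obj.metadata,
            MetadataDirective="REPLACE",
        )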

Sure enough, post-execution, we get the response without the offending header:

$ curl -sI http://www.mysite.com/static/css/style.css | grep Content
Content-Type: text/css
Content-Length: 7073

This swift diagnosis and resolution would not have been possible had the tooling in use not been open source: many times I was trying to figure out why something behaved the way it did, and while not familiar with the code, I could reason enough about how things work in general to apply that reasoning to how I should implement my resolution.

Support open source where possible, and happy hunting!

Read more in the standard, RFC 2616.