Reduce AWS Lambda Latencies with Keep-Alive in Python

It starts, as many stories do, with a question. On September 10th, AWS Serverless Hero Luc van Donkersgoed shared his observations on the relationship between reduced latency and increased request rate when using AWS Lambda. This is always an interesting conversation, and sure enough, other AWS Heroes like myself were curious about some of the outlier behaviors and what exactly goes into each request. AWS Data Hero Alex DeBrie and AWS Container Hero Vlad Ionescu both asked excellent questions about the setup and the behaviors, leading Luc to share what he was seeing with regard to DNS lookups that didn't make sense to him.

After asking a couple more questions of my own, I rolled up my sleeves and dug into the what, how, and why.

getting ready to read things and hit them with sticks

I dive into all parts of the stack in use to try to understand why Luc's code is seeing DNS lookups.
For example, if your function needs to call AWS S3 or a Twilio API, we usually provide the domain name, and the code or library performs a request to a Domain Name System (DNS) server to get the current IP address, then communicates using that IP address. This is a network call, and it can be expensive (in milliseconds) if it's performed more often than the DNS response's Time To Live (TTL) – kind of like an expiration date – requires. The DNS lookup adds more latency to your overall call, which is why many systems cache DNS responses until the TTL expires and only then make a new lookup. If you perform DNS lookups when they aren't needed, that's adding latency unnecessarily. Read the tweet thread for more!
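
To get a feel for that cost, here's a quick, purely illustrative way to time a lookup from Python – the hostname is just an example, and repeat calls may be answered from a resolver cache:

import socket
import time

start = time.perf_counter()
socket.getaddrinfo("dynamodb.us-east-1.amazonaws.com", 443)  # the DNS lookup
elapsed_ms = (time.perf_counter() - start) * 1000
print(f"lookup took {elapsed_ms:.1f} ms")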

I arrive at two possible solutions:

  1. If the Python code calls more than 10 AWS service endpoints, it will trigger a DNS lookup, as urllib3's PoolManager will only maintain 10 connections (set by botocore defaults) and will need to recycle connections if that limit is exceeded (see the sketch just after this list).
  2. Since we’re unlikely to be hitting the limit of 10, something else is at play.
    I found that the default behavior of boto3 is to not use Keep Alive, thus explaining why the occasional connection is reset, triggering a DNS lookup. (Read the tweet thread for the full discovery.)
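
For reference, the pool size mentioned in the first point is exposed on botocore's Config – a minimal sketch of raising it from the default of 10 (the value 25 is arbitrary):

import boto3
from botocore.config import Config

# botocore's default connection pool size is 10
client = boto3.client("dynamodb", config=Config(max_pool_connections=25))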

Using Keep-Alive is nothing new, and was covered quite well by AWS Serverless Hero Yan Cui back in 2019 for Node. It’s even in the official AWS Documentation, citing Yan’s article for the proposed update. Thanks Yan!


There's precious little literature that I could find on using Keep-Alive for Python Lambdas, leading to issues like Luc's and reports like this one, so I decided to dig a little further. Knowing now that Keep-Alive is off by default for users of the popular boto3 package for interacting with AWS services, I wanted to explore what that looks like in practice.

I decided to pattern an app after Yan's example – a function that receives an event body and persists it to DynamoDB. All in all, not too complex an operation – we perform a single DNS lookup for the DynamoDB service endpoint, and then use the resolved IP address to connect over HTTP and put an item into the DynamoDB table.
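
Here's roughly what such a handler looks like – a minimal sketch rather than the exact code from my repo; the table name and item shape are illustrative:

import json

import boto3

# Created once per execution environment, so the connection can be reused
# across warm invocations.
dynamodb = boto3.client("dynamodb")


def handler(event, context):
    # Persist the incoming event body as a single item (table/key are illustrative).
    dynamodb.put_item(
        TableName="keep-alive-demo",
        Item={
            "id": {"S": context.aws_request_id},
            "body": {"S": json.dumps(event)},
        },
    )
    return {"statusCode": 200}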

After re-writing the same function in Python, I was able to test the same kind of behavior that Yan did, running a call to the function once per second to isolate any concurrency concerns, replicating Luc's test. This has the benefit of reusing the same Lambda execution context (no cold starts), and we can see that the latencies range from 7 to 20 milliseconds for the same operation:

filtered log view showing only the latency for put_item calls to DynamoDB for 30 seconds

So far, so good – pretty much the same. The overall values are lower than in Yan's original experiment, which I attribute to the entire Lambda ecosystem improving, but we can see there's variance, and we often hit double-digit latencies, even though we know the DynamoDB operation itself is likely to take only 6-7 milliseconds.

left side shows spiky responses; right side shows most responses are fast, with some slower outliers

As Yan showed in his approach, adapted from Matt Levine's talk snippets, he was able to reconstruct the AWS Config by rebuilding the lowest-level HTTP agent the library relies on to make its calls, and thereby set the behavior for Keep-Alive. This has since been superseded by the AWS Node.js SDK adding an environment variable to enable the keep-alive behavior, which is awesome! But what about Python? 🐍

In the recent release of botocore 1.27.84 we can modify the AWS Config passed into the client constructor:

# before:
import boto3
client = boto3.client("dynamodb")

# after:
import boto3
from botocore.config import Config
client = boto3.client("dynamodb", config=Config(tcp_keepalive=True))

With the new configuration in place, if you try this on the AWS python3.9 execution runtime, you'll get this error:
[ERROR] TypeError: Got unexpected keyword argument 'tcp_keepalive'

While the AWS Python runtime includes versions of boto3 and botocore, they do not yet support the new tcp_keepalive parameter – the runtime currently ships:
– boto3 1.20.32
– botocore 1.23.32

So we have to solve another way.

The documentation tells us that we can configure this via a config file in ~/.aws/config, added in version 1.9.17 back in October 2018 – presumably when all the Keep Alive conversations were fresh in folks’ minds.
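
For reference, that file-based toggle would look something like the snippet below – the key name here is an assumption based on the Config parameter name, so check the linked documentation for the exact syntax:

# ~/.aws/config (key name assumed to match the Config parameter)
[default]
tcp_keepalive = true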

However, since the Lambda runtime environment disallows writing to that path, we can't write the config file easily. We might be able to create a custom Docker runtime and place a file at that path, but that's harder, and we lose some of the benefits of the AWS prebuilt runtimes, such as startup latency – which, for a latency-oriented article, seems like the wrong choice 😁.

Using the Serverless Framework CLI with serverless-python-requirements (what I'm currently using), or AWS SAM, you can package the updated versions of boto3 and botocore, and deploying the updated application lets us use the new setting in a Lambda environment. You may already be using one of these approaches for a more evolved application.
Hopefully 🤞 the Lambda Runtime will be updated to include these versions in the near future, so we don’t have to package these dependencies to get this specific feature.

With the updated packages, we can pass the custom Config with tcp_keepalive enabled (as shown above), and observe more consistent performance for the same style of test:

left: much smoother!! right: narrower distribution of values, max 8.50 ms

There's an open request for the config value to be available via an environment variable – check it out, give it a 👍 to register your interest, and subscribe via GitHub notifications.

Enjoy lower, more predictable latencies with Keep Alive!

Check out the example code here: https://github.com/miketheman/boto3-keep-alive


Postscript: If you’re interested in pinpointing calls for performance, I recommend checking out Datadog’s APM and associated ddtrace module to see the specifics of every call to AWS endpoints and associated latencies, as well as other parts of your application stack. There’s a slew of other vendors that can help surface these metrics.

Container-to-Container Communication

Question ❓

In a containerized world, is there a material difference between communicating over local network TCP vs local Unix domain sockets?

Given an application with more than a single container, where the containers need to talk to each other, is there an observable difference in latency/throughput when using one inter-component communication method over another, from an end user's perspective?


Background 🌆

There’s this excellent write-up on the comparison back in 2005, and many things have changed since then, especially around the optimizations in the kernel and networking stack, along with the container runtime that is usually abstracted away from the end user’s concerns. Redis benchmarks from a few years ago also point out significant improvements using Unix sockets when the server and benchmark are co-located.

There are other studies out there that have their own performance comparisons and produce images like these – and every example is going to have its own set of controls and caveats.

I wanted to use a common-ish scenario: a web service running on cloud infrastructure I don’t own.

Components 🧩

For the experiment, I chose this set of components:

  • nginx (web server) – terminate SSL, proxy requests to upstream web server
  • gunicorn (http server) – speaks HTTP and WSGI protocol, runs application
  • starlette (python application framework) – handle request/response

I considered using FastAPI for the application layer, but since I didn't need any of its extra features, I left it out – it's a great framework, though; check it out!

Since the gunicorn server runs the starlette framework and the custom application code, I'll refer to them later as a single component, "app". The tests compare the behavior between nginx and the "app" layer, using overall user-facing latency and throughput as the main results.

nginx 🌐

nginx is awesome: really powerful, highly configurable, and full of built-in features. I've been using it for years, and it's my go-to choice for a reliable web server.

For our purposes, we need an external port to listen for inbound requests, and a stanza to proxy the requests to the upstream application server.

You might ask: Why use nginx at all, if Gunicorn can terminate connections directly? Well, there's often a class of problems that nginx is better suited to handling than a fully-fledged Python runtime – examples include static file serving (robots.txt, favicon.ico, et al.), as well as caching, header or path rewriting, and more.

nginx is commonly used in front of all manner of applications.

Python Application 🐍

To support the testing of a real-world scenario, I’m creating a JSON response, as that’s how most web applications communicate today. This often incurs some serialization overhead in the application.

I took the example from starlette and added a couple of tweaks to emit the current timestamp and a random number. This prevents any caching in any of the layers from polluting the experiment.

Here’s what the main request/response now looks like:

import datetime
import random

from starlette.responses import JSONResponse


async def homepage(request):
    return JSONResponse(
        {
            "hello": "world",
            "utcnow": datetime.datetime.utcnow().isoformat(),
            "random": random.random(),
        }
    )

A response looks like this:

{
  "hello": "world",
  "utcnow": "2021-12-27T00:31:42.383861",
  "random": 0.5352573557347882
}

And while there are ways to improve JSON serialization speed, or tweak the Python runtime, I wanted to keep the experiment at the defaults, since the point isn't maximizing total throughput, but rather seeing the difference between the architectures.

Cloud Environment ☁️

For this experiment, I chose Amazon Elastic Container Service (ECS) with AWS Fargate compute. These choices provide a way to construct all the pieces needed in a repeatable fashion in the shortest amount of time, and abstract a lot of the infra concerns. To set everything up, I used AWS Copilot CLI, an open-source tool that does even more of the heavy lifting for me.

The Copilot Application type of Load Balanced Web Service will create an Application Load Balancer (ALB), which is the main external component outside my application stack, but an important one for actual scaling, SSL termination at the edge, and more. For the sake of this experiment, we assume (possibly incorrectly!) that ALBs will perform consistently for each test.

Architectures 🏛

Using containers, I wanted to test multiple architecture combinations to see which one proved the "best" when it came to user-facing performance.

Example 1: "tcp"

The communication between the nginx container and the app container takes place over the dedicated network created by the Docker runtime (or the Container Network Interface in Fargate). This means there's TCP overhead between nginx and the app – but is it significant? Let's find out!

Example 2: "sharedvolume"

Here we create a shared volume between the nginx container and the app container. Then we use a Unix domain socket to communicate between the containers using the shared volume.
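
As a rough sketch of the app side of this setup (the socket path and worker count are illustrative, and gunicorn also accepts the same value via its --bind flag), the gunicorn config might look like:

# gunicorn.conf.py – minimal sketch; assumes the shared volume is mounted
# at /var/run/app in both the nginx and app containers
bind = "unix:/var/run/app/gunicorn.sock"
workers = 2

The nginx side then proxies to that same socket path instead of an upstream host:port pair.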

This architecture maintains a separation of concerns between the two components, which is generally a good practice, so as to have a single essential process per container.

Example 3: "combined"

In this example, we combine both nginx and app in a single container, and use local Unix sockets within the container to communicate.

The main difference here is that we add a process supervisor to run both nginx and app runtimes – which some may consider an anti-pattern. I’m including it for the purpose of the experiment, mainly to uncover if there’s performance variation between a local volume and a shared volume.

This approach simulates what we’d expect in a single "server" scenario – where a traditional instance (hardware or virtual) runs multiple processes and all have some access to a local shared volume for inter-process communication (IPC).

To make this a fair comparison, I’ve also doubled the CPU and memory allocation.

Copilot ✈️

Time to get off the ground.

Copilot CLI assumes you already have an app prepared in a Dockerfile. The Quickstart has you clone a repo with a sample app – so instead I’ve created a Dockerfile for each of the architectures, along with a docker-compose.yml file for local orchestration of the components.

Then I’ll be able to launch and test each one in AWS with its own isolated set of resources – VPC, networking stack, and more.

I'm not going into all the details of how to install Copilot and launch the services; for that, read the Copilot CLI documentation (linked above) and the experiment code.

This test is using AWS Copilot CLI v1.13.0.

Test Protocol 🔬

There’s an ever-growing list of tools and approaches to benchmark web request/response performance.

For the sake of time, I’ll use a single one here, to focus on the comparison of the server-side architecture performance.

All client-side requests will be performed from an AWS CloudShell instance running in the same AWS Region as the running services (us-east-1) to isolate a lot of potential network chatter. It’s not a perfect isolation of potential variables, but it’ll have to do.

To baseline, I ran each test locally (see later).

Apache Bench

Apache Bench, or ab, is a common tool for testing web endpoints, and is not specific to Apache httpd servers. I’m using: Version 2.3 <$Revision: 1879490 $>

I chose single concurrency and ran 1,000 requests. I also pass the -l flag to ignore variable response lengths, as the app responds with a variable-length random number, and ab considers responses of differing lengths failures unless told otherwise.

ab -n 1000 -c 1 -l http://service-target....

Each test should take less than 5 seconds.

The important stats I’m comparing are:

  • Requests per second (mean) – higher is better
  • Time per request (mean) – lower is better
  • Duration at the 99th percentile – the time (in milliseconds) within which 99% of all requests completed – lower is better

To reduce variance, I also "warmed up" each container by first running the test with a larger number of requests.

Local Test

To establish a baseline, I ran the same benchmark test against the local services, using Docker Desktop 4.3.2 (72729) on macOS. These results aren't demonstrative of a real user experience, but they provide a sense of performance before launching the architectures in the cloud.

arch | reqs per sec | ms per req | 99th pctile (ms)
tcp (local) | 679.77 | 1.471 | 2
sharedvolume (local) | 715.62 | 1.397 | 2
combined (local) | 705.55 | 1.871 | 2

In the local benchmark, the clear loser is the tcp architecture, and the sharedvolume has a slight edge on combined – but not a huge win. No real difference in the 99th percentiles – requests are being served in under 2ms.

This shows that the shared resources for the combined architecture are near the performance of the sharedvolume – possibly due to Docker Desktop’s bridging and network abstraction. A better comparison might be tested on a native Linux machine.

Remote Test

Once I ran through the setup steps using Copilot CLI to create the environment and services, I performed the same ab test, and collected the results in this table:

arch | reqs per sec | ms per req | 99th pctile (ms)
tcp (aws) | 447.57 | 2.234 | 5
sharedvolume (aws) | 394.55 | 2.535 | 6
combined (aws) | 428.60 | 2.333 | 4

With the remote tests, it was a minor surprise that the combined service performed better than the sharedvolume service, since in the local test it performed worse.

The bigger surprise was to find that the tcp architecture wins slightly over the socket-based architectures.

This could be due to the way ECS Fargate uses the Firecracker microvm, and has tuned the network stack to perform faster than using a shared socket on a volume when communicating between two containers on the same host machine. The best part is – as a consumer of a utility, I don’t care, as long as it’s performing well!

ARM/Graviton Remote Test

The Copilot manifest defaults to the Intel x86 platform, so let's also test the performance on the linux/arm64 platform (Graviton2, probably).

For this to work, I had to rebuild the nginx sidecars manually, as Copilot doesn’t yet build&push sidecar images. I also had to update the manifest.yml to set the desired platform, and deploy the service with copilot svc deploy .... (The combined version needed some Dockerfile surgery too…)

Results:

arch | reqs per sec | ms per req | 99th pctile (ms)
tcp (aws/arm) | 475.03 | 2.105 | 3
sharedvolume (aws/arm) | 451.71 | 2.214 | 4
combined (aws/arm) | 433.94 | 2.304 | 4

We can see that all the stats are better on the Graviton architecture, lending some more credibility to studies done by other benchmark posts and papers.

Aside: The linux/arm64-based container images were tens of megabytes smaller, so if image size and network pull time are concerns, these will also be a bit faster to pull.

Other Testing Tools

If you’re interested in performing longer tests, or emulating different user types, check out some of these other benchmark tools I considered and didn’t use for this experiment:

  • Python – https://locust.io/ https://molotov.readthedocs.io/
  • JavaScript – https://k6.io/
  • Golang – https://github.com/rakyll/hey
  • C – https://github.com/wg/wrk

There are also plenty of vendors that build out extensive load testing platforms – I'm not covering any of them here. If you run a test with these, I'd definitely like to see your results!

Conclusions 💡

Using the Copilot CLI wasn't without some missteps – the team is hard at work improving the documentation, and is pretty responsive in both their GitHub Issues and Discussions and their Gitter chat room, which is always helpful when learning a new framework. Once I got the basics down, being able to establish a reproducible stack was valuable to the experimentation process: I could provision and tear down the stack easily, and apply changes relatively easily too.

Remember: these are micro-benchmarks, on not highly-tuned environments or real-world workloads. This test was designed to test a very specific type of workload, which may change as more concurrency is introduced, CPU or memory saturation is achieved, auto-scaling of application instances comes into play, and more.

Your mileage may vary.

When I started this experiment, I assumed the winner would be a socket-based communication architecture (sharedvol or combined), from existing literature, and it also made sense to me. The overhead of creating TCP packets between the processes would be eliminated, and thus performance would be better.

However, in these benchmarks, I found that using the TCP communication architecture performs best, possibly due to optimizations beyond our view in the underlying stack. This is precisely what I want from an infrastructure vendor – for them to figure out how to optimize performance without having to re-architect an application to perform better in a given deployment scenario.

The main conclusion I’ve drawn is: Using TCP to communicate between containers is best, as it affords the most flexibility, follows established patterns, and performs slightly better than the alternatives in a real(ish) world scenario. And if you can, use Graviton2 (ARM) CPU architecture.

Go forth, test your own scenarios, and let me know what you come up with. (Don’t forget to delete your resource when done!! 💸 )

AWS DeepComposer 🎹➡️☁️🎶

This year's Amazon Web Services re:Invent conference in Las Vegas, Nevada, was a veritable smorgasbord of announcements, product launches, previews, and a ton of information to try and digest at once.

One very exciting announcement was AWS DeepComposer – which continues to expand on AWS’ mission of “Putting machine learning in the hands of every developer”.
Here’s a slick intro video from the product announcement – come back after!

The service is still in Preview mode, and has an application/review process – so while I wait for the application to clear, I figured I’d poke around a bit and see what I got.

📦 Box Contents

The box. Not super impressive.
The box, open. More impressive.

Opening the box, I'm immediately reminded of a 1980s Casio keyboard – we had one, and I enjoyed it a lot. This one is larger, and has no batteries or speakers.

The keyboard itself.

It's a 32-key keyboard; while the key sizing isn't 100% the same as that baby grand piano you have tucked away somewhere in your vast mansion, it'll probably be good enough.

The interface is USB Type B. I recently recycled roughly 20 of these cables in an e-waste purge, thinking "I don't have anything that uses this connection!" Well, now I do. It's 2019 – I'd have thought at least Micro USB, if not USB-C, would have been the right choice?

Lucky for me, the box also contains a USB-A to USB-B cable, so at least that’s that.
Wait a minute… my 12-inch MacBook from 2016 that I’m using only has a single USB-C port.
Ruh-roh.
Apparently, I packed my USB-A to USB-C plug that I got with my Google Pixel 4 – let’s see if that will work! Even if it does, that means that I can’t use the DeepComposer and charge my laptop at the same time without an external port hub.
Considering that’s the only port (other than a 3.5mm audio jack) on my mac, I’m not too worried about it, especially since the battery is still pretty good.

There are other packing materials, and a little card with a nice tagline of "Press play on ML" and a URL to visit: https://aws.amazon.com/startcomposing (redirects to the product page link – maybe a future device-specific landing page? Hmmm…)

⚡️ Power it up

I know I don’t have the provisioned account access yet, so I won’t be able to run all the things the presenter did in the video, so I figured I might poke around the connectivity interface and see what I might be able to glean in the absence of a proper setup.

Before I plug in the device, let’s also look at the current state of the Input/Output (I/O) devices, filtered specifically to the Apple USB Host Controller:

$ ioreg -w0 -rc AppleUSBHostController
+-o XHC1@14000000  <class AppleUSBXHCISPTLP, id 0x1000001dd, registered, matched, active, busy 0 (5263 ms), retain 55>
  | {
  |   "IOClass" = "AppleUSBXHCISPTLP"
  |   "kUSBSleepPortCurrentLimit" = 1500
  |   "IOPowerManagement" = {"ChildrenPowerState"=1,"DevicePowerState"=0,"CurrentPowerState"=1,"CapabilityFlags"=4,"MaxPowerState"=3,"DriverPowerState"=0}
  |   "IOProviderClass" = "IOPCIDevice"
  |   "IOProbeScore" = 1000
  |   "UsbRTD3Supported" = Yes
  |   "locationID" = 335544320
  |   "name" = <"XHC1">
  |   "64bit" = Yes
  |   "kUSBWakePortCurrentLimit" = 1500
  |   "IOPCIPauseCompatible" = Yes
  |   "device-properties" = {"acpi-device"="IOACPIPlatformDevice is not serializable","acpi-path"="IOACPIPlane:/_SB/PCI0@0/XHC1@140000"}
  |   "IOPCIPrimaryMatch" = "0x9d2f8086"
  |   "IOMatchCategory" = "IODefaultMatchCategory"
  |   "CFBundleIdentifier" = "com.apple.driver.usb.AppleUSBXHCIPCI"
  |   "Revision" = <0003>
  |   "IOGeneralInterest" = "IOCommand is not serializable"
  |   "IOPCITunnelCompatible" = Yes
  |   "controller-statistics" = {"kControllerStatIOCount"=78,"kControllerStatPowerStateTime"={"kPowerStateOff"="142ms (0%)","kPowerStateSleep"="40191894ms (99%)","kPowerStateOn"="75024ms (0%)","kPowerStateSuspended"="1332ms (0%)"},"kControllerStatSpuriousInterruptCount"=0}
  |   "kUSBSleepSupported" = Yes
  | }
  |
  +-o HS01@14100000  <class AppleUSB20XHCIPort, id 0x100000245, registered, matched, active, busy 0 (4773 ms), retain 13>
  +-o HS03@14200000  <class AppleUSB20XHCIPort, id 0x100000246, registered, matched, active, busy 0 (0 ms), retain 10>
  +-o HS04@14300000  <class AppleUSB20XHCIPort, id 0x100000249, registered, matched, active, busy 0 (0 ms), retain 10>
  +-o HS09@14400000  <class AppleUSB20XHCIPort, id 0x10000024c, registered, matched, active, busy 0 (0 ms), retain 9>
  +-o SSP1@14500000  <class AppleUSB30XHCIPort, id 0x10000024d, registered, matched, active, busy 0 (0 ms), retain 14>
  +-o SSP3@14600000  <class AppleUSB30XHCIPort, id 0x10000024e, registered, matched, active, busy 0 (0 ms), retain 12>
  +-o SSP4@14700000  <class AppleUSB30XHCIPort, id 0x10000024f, registered, matched, active, busy 0 (0 ms), retain 12>

A shorter version of this can be seen in the built-in System Information app, under the USB section.

Now I’m ready – let’s see what happens!

Plugging in, the first positive indication is that I see a series of red and blue LEDs briefly light up behind the top row of buttons, a quick cycle. So we know that at the very least, the little adapter is providing some power to the USB device.

Let’s look at the output of the I/O device state now:

$ ioreg -w0 -rc AppleUSBHostController
+-o XHC1@14000000  <class AppleUSBXHCISPTLP, id 0x1000001dd, registered, matched, active, busy 0 (7030 ms), retain 60>
  | {
  |   "IOClass" = "AppleUSBXHCISPTLP"
  |   "kUSBSleepPortCurrentLimit" = 1500
  |   "IOPowerManagement" = {"ChildrenPowerState"=3,"DevicePowerState"=2,"CurrentPowerState"=3,"CapabilityFlags"=32768,"MaxPowerState"=3,"DriverPowerState"=0}
  |   "IOProviderClass" = "IOPCIDevice"
  |   "IOProbeScore" = 1000
  |   "UsbRTD3Supported" = Yes
  |   "locationID" = 335544320
  |   "name" = <"XHC1">
  |   "64bit" = Yes
  |   "kUSBWakePortCurrentLimit" = 1500
  |   "IOPCIPauseCompatible" = Yes
  |   "device-properties" = {"acpi-device"="IOACPIPlatformDevice is not serializable","acpi-path"="IOACPIPlane:/_SB/PCI0@0/XHC1@140000"}
  |   "IOPCIPrimaryMatch" = "0x9d2f8086"
  |   "IOMatchCategory" = "IODefaultMatchCategory"
  |   "CFBundleIdentifier" = "com.apple.driver.usb.AppleUSBXHCIPCI"
  |   "Revision" = <0003>
  |   "IOGeneralInterest" = "IOCommand is not serializable"
  |   "IOPCITunnelCompatible" = Yes
  |   "controller-statistics" = {"kControllerStatIOCount"=104,"kControllerStatPowerStateTime"={"kPowerStateOff"="142ms (0%)","kPowerStateSleep"="40554314ms (99%)","kPowerStateOn"="245721ms (0%)","kPowerStateSuspended"="1333ms (0%)"},"kControllerStatSpuriousInterruptCount"=0}
  |   "kUSBSleepSupported" = Yes
  | }
  |
  +-o HS01@14100000  <class AppleUSB20XHCIPort, id 0x100000245, registered, matched, active, busy 0 (6540 ms), retain 18>
  | +-o AKM322@14100000  <class IOUSBHostDevice, id 0x100004670, registered, matched, active, busy 0 (1766 ms), retain 23>
  |   +-o AppleUSBHostLegacyClient  <class AppleUSBHostLegacyClient, id 0x100004673, !registered, !matched, active, busy 0, retain 9>
  |   +-o AppleUSBHostCompositeDevice  <class AppleUSBHostCompositeDevice, id 0x10000467b, !registered, !matched, active, busy 0, retain 4>
  |   +-o IOUSBHostInterface@0  <class IOUSBHostInterface, id 0x10000467d, registered, matched, active, busy 0 (3 ms), retain 6>
  |   +-o IOUSBHostInterface@1  <class IOUSBHostInterface, id 0x10000467e, registered, matched, active, busy 0 (3 ms), retain 6>
  +-o HS03@14200000  <class AppleUSB20XHCIPort, id 0x100000246, registered, matched, active, busy 0 (0 ms), retain 10>
  +-o HS04@14300000  <class AppleUSB20XHCIPort, id 0x100000249, registered, matched, active, busy 0 (0 ms), retain 10>
  +-o HS09@14400000  <class AppleUSB20XHCIPort, id 0x10000024c, registered, matched, active, busy 0 (0 ms), retain 9>
  +-o SSP1@14500000  <class AppleUSB30XHCIPort, id 0x10000024d, registered, matched, active, busy 0 (0 ms), retain 14>
  +-o SSP3@14600000  <class AppleUSB30XHCIPort, id 0x10000024e, registered, matched, active, busy 0 (0 ms), retain 12>
  +-o SSP4@14700000  <class AppleUSB30XHCIPort, id 0x10000024f, registered, matched, active, busy 0 (0 ms), retain 12>

Again, this is pretty verbose, but if you look closely, you’ll see that the device at address HS01@14100000 now has a sub-device associated with it – AKM322@14100000.

Yay! We can see that the device is powered, and the system registers it.

What is this thing??

A quick search for the device prefix string "AKM322" brought me to a device similar in nature:
https://www.amazon.com/midiplus-32-Key-Keyboard-Controller-AKM322/dp/B016O5F2GQ
Here’s the listing for the DeepComposer device: https://www.amazon.com/AWS-DeepComposer-learning-enabled-keyboard-developers/dp/B07YGZ4V5B/

If you’re asking – “why the price difference?”, well the DeepComposer device comes with some cloud features too!

We want you to know:
To train your models and create new musical compositions, AWS DeepComposer is priced at $99; this includes the keyboard, plus a 3-month free trial of AWS DeepComposer services to train your models and create original musical compositions. Each month of the free trial includes enough to cover up to 4 training jobs and 40 inference jobs per month, during the free trial period.

So for the dollar value, you’re getting not only the device, but also some AWS Cloud Goodness!

Visiting what appears to be the manufacturer's page, we can see more details about the hardware, so that's cool. It's a MIDI device, translating analog signals (like pressing keys with different pressures and durations) into digital signals.
Cool stuff! There might be some secret AWS goodness in the DeepComposer model – we’ll have to wait and see.

Make some noise!!

Again, I don’t yet have access to the DeepComposer interface, so I found a macOS MIDI testing guide that I followed: https://support.apple.com/en-gb/HT201840

The test was successful, but I only got a single-note "ding" response, confirming that the device works and can communicate back to my computer. But I want to hear something!

Apple produces Logic Pro – but at a $199 price tag, I don’t really want to spend that just to mess around until I can really try out the DeepComposer service.
Apple also produces GarageBand – for free! Fire it up, and wait for the 2GB download to complete over hotel wifi. This is also where I unplug the keyboard, and plug in the power – since we’re going to be here for a while…

I’ll check back once I’ve got some more details to report. Hope you enjoyed this set of musings, and hopefully I’ll have more to show you soon!

Other Reading

There’s not too much out there just yet – as this is a preview service, just announced.
I posted a link to a video of the original announcement, and you can also read some of the announcement blog post details here:

https://aws.amazon.com/blogs/aws/aws-deepcomposer-compose-music-with-generative-machine-learning-models/

Extending ECS Auto-scaling for under $2/month with Lambda

The Problem

Amazon Web Services (AWS) is pretty cool. You ought to know that by now. If you don't, take a few hours and check out some tutorials and play around.

One of the many services AWS provides is the EC2 Container Service (ECS), where the scheduling and lifecycle management of running Docker containers is handled by the ECS control plane (probably magic cooked up in Seattle over coffee or in Dublin over a pint or seven).

You can read all about its launch here.

One missing feature from the ECS offering in comparison to other container schedulers was the concept of scheduling a service to be run on each host in a cluster, such as a logging or monitoring agent.
This feature allows clusters to grow or shrink and still have the correct services running on each node.

A published workaround was to have each node individually run an instance of the defined task on startup, which works pretty well.

The downside here is that if a task definition changes, ECS has no way of triggering an update to the running tasks – normal services will stop and then start tasks with the new definition, using your logic to maintain some degree of uptime.
To achieve the update, one must terminate/replace the entire ECS Container Instance (the EC2 host) and if you’re using AutoScalingGroups, get a fresh node with the updated task.

Other Solutions

  • Docker Swarm calls this a global service, and will run one instance of the service on every node.
  • Mesos’ Marathon doesn’t support this yet either, and is in deep discussion on GitHub on how to implement this in their constraints syntax.
  • Kubernetes has a DaemonSet to run a pod on each node.
  • The recently-released ECS-focused Blox provides a daemon-scheduler to accomplish this, but brings along extra components to accomplish the scheduling.

Back to ECS

So imagine my excitement when the ECS team announced the release of their new Task Placement Strategies last week, offering a “One Task Per Host” strategy as part of the Service declaration.
This indeed is awesome and works as advertised, with no extra components, installs, schedulers, etc.

However! Currently each Service requires a “Desired Count” parameter of how many instances of this service you want to run in the cluster.

Given a cluster with 5 ECS Container Instance hosts, setting the Desired Count to 5 ensures that one runs on each host, provided there are resources available (cpu, ram, available port).

If the cluster grows to 6 (autoscaling, manually adding, etc), there’s nothing in the Service definition that will increase the desired count to 6, so this solution is actually worse off than our previous mode of using user-data to run the task at startup.

One approach is to arbitrarily raise the Desired count to a very high number, such as 100 for this cluster, with the consideration that we are unlikely to grow the cluster to this size without realizing it.
The scheduler will periodically examine the cluster for placement, and handle any hosts missing the service.

The problem with this is that it’s not deterministic, and CloudWatch metrics will report these unplaced tasks as Pending, and I have alarms to notify me if tasks aren’t placed in clusters, as this can point to a resource allocation mismatch.

Enter The Players

To accomplish an automated service desired count, we must use some elements to “glue” a few of the systems together with our custom logic.

Here’s a sequence diagram of the conceptual flow between the components.

UML Sequence Flow

Every time there is a change in an ECS Cluster, CloudWatch Events will receive a payload.
Based on a rule we craft to select events classified as “Container Instance State Change”, CW Events will emit an event to the target of your choice, in our case, Lambda.

We could feasibly use a cron-like schedule to fire this every N minutes to inspect, evaluate, and remediate a semi-static set of services/cluster, but having a system that is reactive to change feels preferable to poll/test/repair.

A simple rule that captures all Container Instance changes:

{
  "source": [
    "aws.ecs"
  ],
  "detail-type": [
    "ECS Container Instance State Change"
  ]
}

You can restrict this to specific clusters by adding the cluster’s ARN to the keys like so:

  "detail": {
    "clusterArn": [
      "arn:aws:ecs:us-east-1:123456789012:cluster/my-specific-cluster",
      "arn:aws:ecs:us-east-1:123456789012:cluster/another-cluster"
    ]
  }

If being throttled or cost is a concern here, you may wish to filter to a set of known clusters, but this reduces the reactiveness of the logic to new clusters being brought online.

The Actual Logic

The Lambda function receives the event, performs some basic validation checks to ensure it has enough details to proceed, and then makes a single API call to the ECS endpoint to find our specified service in the cluster that fired the change event.

If no such service is found, we terminate now, and move on.

If the cluster does indeed have this service defined, then we perform another API call to describe the count of registered container instances, and compare that with the value we already have from the service definition call.

If there’s a mismatch, we perform a final third API call to adjust the service definition’s desired task count.

All in all, a maximum total of 3 possible API calls, usually in under 300ms.
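
Here's a minimal sketch of that flow – not the exact code from the repository; the service name is illustrative, and validation and error handling are trimmed down:

import boto3

ecs = boto3.client("ecs")
SERVICE_NAME = "logging-agent"  # illustrative – the service you want on every host


def handler(event, context):
    cluster_arn = event.get("detail", {}).get("clusterArn")
    if not cluster_arn:
        return  # not enough detail to act on

    # API call 1: does this cluster define the service we care about?
    services = ecs.describe_services(cluster=cluster_arn, services=[SERVICE_NAME])["services"]
    if not services or services[0]["status"] != "ACTIVE":
        return

    # API call 2: how many container instances are registered right now?
    cluster = ecs.describe_clusters(clusters=[cluster_arn])["clusters"][0]
    instance_count = cluster["registeredContainerInstancesCount"]

    # API call 3: adjust the desired count only if it has drifted.
    if services[0]["desiredCount"] != instance_count:
        ecs.update_service(
            cluster=cluster_arn, service=SERVICE_NAME, desiredCount=instance_count
        )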

In my environment, I want this logic to apply to every cluster in my account, since the function inspects each cluster to see whether it has the service defined before acting on it.
By my ballpark figures, with a set of 10 active clusters, the cost of running this logic should be under $2/month – yes, two dollars a month to ensure your clusters have the correct number of tasks for a given service.
Do your own estimation with the Lambda Pricing Calculator.

Conclusions

The code can be found on GitHub, and was developed with a test-everything philosophy, where I spent a large amount of time learning how to actually write the code and tests elegantly.
Writing out all of the tests and sequences allowed me to find multiple points of refactoring and increased efficiency from my first implementation, leading to a much cleaner solution.
Taking on a project like this is a great way to increase one’s own technical prowess, leading to the ability to reason about other problems.

While I strongly believe that this feature should be part of the ECS platform and not require any client-side intervention, the ability to take the current offerings and extend them via mechanisms such as Events, Lambda and API calls further demonstrates the flexibility and extensibility of the AWS ecosystem.
The feature launched just over a week ago, and I’ve been able to put together an acceptable solution on my own, using the documentation, tooling, and infrastructure while minimizing costs and making my system more reactive to change.

I look forward to what else the ECS, Lambda and CloudWatch Events team cook up in the future!

Setting Up a Datadog-to-AWS Integration

When approaching a new service provider, it can sometimes be confusing to work out how to get set up to best communicate with them – some processes involve multiple steps, multiple interfaces, confusing terminology, and more.

Amazon Web Services is an amazing cloud services provider, and in order to allow access to informational services inside a customer's account, a couple of known mechanisms exist to delegate access:

  • Account Keys, where you generate a key and secret and share them. The other party stores these (usually in either clear text or using reversible encryption) and uses them as needed to make API calls
  • Role Delegation, where you create a Role and shared secret to provide to the external service provider, who is then allowed to use their own internal security credentials to request temporary access to your account's resources via API calls

In the former model, the keys are exchanged once, and once out of your immediate domain, you have little idea what happens to them.
In the latter, a rule is put into place that requires ongoing authenticated access to request assumption of a known role with a shared secret.

Luckily, in both scenarios, a restrictive IAM Policy is in place that allows only the actions you’ve decided to allow ahead of time.

Setting up the desired access is made simpler by having good documentation on how to do this manually. In this modern era, we likely want to keep our infrastructure as code where possible, as well as have a mechanism to apply the rules and test later if they are still valid.

Here's a quick example I cooked up using Terraform, a popular tool for composing cloud infrastructure as code and executing it to create the desired state.

# Read more about variables and how to override them here:
# https://www.terraform.io/docs/configuration/variables.html
variable "aws_region" {
  type    = "string"
  default = "us-east-1"
}

variable "shared_secret" {
  type    = "string"
  default = "SOOPERSEKRET"
}

provider "aws" {
  region = "${var.aws_region}"
}

resource "aws_iam_policy" "dd_integration_policy" {
  name        = "DatadogAWSIntegrationPolicy"
  path        = "/"
  description = "DatadogAWSIntegrationPolicy"

  policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "autoscaling:Describe*",
        "cloudtrail:DescribeTrails",
        "cloudtrail:GetTrailStatus",
        "cloudwatch:Describe*",
        "cloudwatch:Get*",
        "cloudwatch:List*",
        "ec2:Describe*",
        "ec2:Get*",
        "ecs:Describe*",
        "ecs:List*",
        "elasticache:Describe*",
        "elasticache:List*",
        "elasticloadbalancing:Describe*",
        "elasticmapreduce:List*",
        "iam:Get*",
        "iam:List*",
        "kinesis:Get*",
        "kinesis:List*",
        "kinesis:Describe*",
        "logs:Get*",
        "logs:Describe*",
        "logs:TestMetricFilter",
        "rds:Describe*",
        "rds:List*",
        "route53:List*",
        "s3:GetBucketTagging",
        "ses:Get*",
        "ses:List*",
        "sns:List*",
        "sns:Publish",
        "sqs:GetQueueAttributes",
        "sqs:ListQueues",
        "sqs:ReceiveMessage"
      ],
      "Effect": "Allow",
      "Resource": "*"
    }
  ]
}
EOF
}

resource "aws_iam_role" "dd_integration_role" {
  name               = "DatadogAWSIntegrationRole"
  assume_role_policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": {
    "Effect": "Allow",
    "Principal": { "AWS": "arn:aws:iam::464622532012:root" },
    "Action": "sts:AssumeRole",
    "Condition": { "StringEquals": { "sts:ExternalId": "${var.shared_secret}" } }
  }
}
EOF
}

resource "aws_iam_policy_attachment" "allow_dd_role" {
  name       = "Allow Datadog PolicyAccess via Role"
  roles      = ["${aws_iam_role.dd_integration_role.name}"]
  policy_arn = "${aws_iam_policy.dd_integration_policy.arn}"
}

output "AWS Account ID" {
  value = "${aws_iam_role.dd_integration_role.arn}"
}

output "AWS Role Name" {
  value = "${aws_iam_role.dd_integration_role.name}"
}

output "AWS External ID" {
  value = "${var.shared_secret}"
}

The output should look a lot like this:

The Account ID is actually a full ARN, and you can copy your Account ID from there.
Terraform doesn’t have a mechanism to emit only the Account ID yet – so if you have some ideas, contribute!

Use the Account ID, Role Name and External ID and paste those into the Datadog Integrations dialog, after selecting Role Delegation. This will immediately validate that the permissions are correct, and return an error otherwise.

Don’t forget to click “Install Integration” when you’re done (it’s at the very bottom of the screen).

Now metrics and events will be collected by Datadog from any allowed AWS services, and you can keep this setup instruction in any revision system of your choice.

P.S. I tried to set this up via CloudFormation (SparkleFormation, too!). I ended up writing it "freehand", and it took more than 3 times as long to get similar functionality.

You can see the CloudFormation Stack here, and decide which works for you.



Fixing unintended consequences of the past

In the age of technology, everyone races forward to get the win. Anything that can provide you a competitive edge is considered important.
This is especially true in the realm of web media, where optimizing page load times, providing secure transport, and adhering to standards can make a difference in how a site is handled by client browsers, ranked by search engines, and most importantly, how it is seen by viewers.

To this end, there are many sites, services and companies that will provide methods to audit a site and point out what could be problematic – count broken links, produce reports of actionable corrections, and more.
Some are better than others, and occasionally, you’ll come across something you’ve never seen before.

Recently, I was pinged about pages on a site that is hosted on an Amazon Simple Storage Service (S3) website-enabled bucket.
Since S3 is an object store only, this means that the pages in this site are statically generated and there is no associated web server, backend database, or other components to serve the pages.

This model is becoming more common for sites that can be simplified to run without dynamically loading data from a database, can withstand heavy bursts of requests, and can run cheaply (there's even a free tier, beyond which pricing still remains affordable).

The idea is that you create your content in one format, run a compiler process to generate all the rendered files containing the links and content, and then upload the compiled files to the S3 location to be requested by browsers. There are many guides on the web on how to do this – I'm not going to link to any now; search and ye shall find.

This particular site had been deployed since 2011 – and the mechanism to copy compiled files to S3 has been using the popular open source command line tool s3cmd – deployment basically looked like this (and still does!):

 s3cmd sync output/ s3://www.mysite.com

where output/ contains the compiled files, ready for deployment.

This has worked very well for over 4 years – until it came to my attention that when uploading to S3, the s3cmd tool was adding some metadata to each file as it uploaded it, as part of the design to support website hosting on S3.

For instance, when uploading a .css file to S3, s3cmd attempts to determine extra details about the file, and set the correct metadata for browsers to understand, such as Content-Type: text/css.
This is a critical function, as it would be tedious to determine each file's content type and set it manually across many files.
You can read more about content media types on Wikipedia.

Since this project was set up a long time ago, the version of s3cmd used was still in an alpha stage – it was used because it performed well enough, and nothing broke, so we were happy to continue running the same version since early 2013.

The problem reported to me was that many files on the site were returning an invalid Content-Encoding value – something that typically hasn't been a problem, as the client's browser will send an Accept-Encoding header when making a request, typically something along the lines of:

Browser: Hi there! Can I have this resource, and I'll accept a response encoded in the following formats: a, b, or c
Server: No problem! Here's the resource you're looking for, with a content encoded in XYZZY

Now, the XYZZY in this example was being set by the s3cmd upload process, and it was determined to be a bug and fixed in late 2013, but since we never knew about the problem, and the site loads just fine, we never addressed it.
There have been even more stability fixes and releases of s3cmd since – as recently as February 2015.

The particular invalid encodings being set were UTF-8 and ANSI_X3.4-1968. While these are valid encodings for files, they are invalid values for the Content-Encoding field.

Here’s an example of how to show the headers of a particular remote file:

$ curl -sI http://www.mysite.com/static/css/style.css | grep Content
Content-Encoding: ANSI_X3.4-1968
Content-Type: text/css
Content-Length: 7073

Many modern browsers will send something along the lines of ‘Accept-Encoding: gzip, deflate, sdch‘ in their request header, in hopes that the server can respond with one that matches, and then save on overall bytes sent over the wire, to speed up pages.

It's the responsibility of the client (browser) to handle the response. I looked into the source code of Chromium (the basis for Google Chrome), and can see from here that in my example above, a Content-Encoding type of XYZZY will pretty much be ignored – which in this case is fine, since we're sending an invalid type.

So if there's no direct user impact, why should we care? Well, according to some popular ranking engines:

Using non-HTML content types for landing pages results in significantly reduced SEO ranking.

So all of this is fine, cool – update the s3cmd tool to a newer version, and upload the output files again? Well, it's not that simple.

During a sync operation, s3cmd determines which files have changed and only uploads those, so it never touches the metadata of unchanged objects – updating the metadata would effectively mean writing a new object, and the files themselves haven't changed.

One solution might be to edit every file, add an extra space somewhere – maybe an extra blank line at the end – then compile, deploy the changed files – however this might take too long.

Instead, I decided to solve the problem by iterating over every object in the bucket, checking whether it had the incorrect Content-Encoding set, and creating a new copy of the object without that header set.

This was pretty straightforward once I understood the concept of object immutability – once written, you can't change an object; rather, what feels like a change from a user interface actually creates a new version of the object with the new settings/metadata.

I also didn't want to have to download each file locally and then upload it back to S3 – that is a slow operation, and could result in extra network traffic and disk space consumption.

Instead, I used the AWS SDK for Ruby gem, and came up with a short-and-sweet solution:
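
The original snippet used the Ruby SDK and isn't reproduced here; as a sketch of the same idea using Python and boto3 (the bucket name and offending encodings mirror the examples above), the key trick is a server-side copy onto the same key with replaced metadata:

import boto3

s3 = boto3.client("s3")
BUCKET = "www.mysite.com"  # illustrative bucket name from the examples above
BAD_ENCODINGS = {"UTF-8", "ANSI_X3.4-1968"}

paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=BUCKET):
    for obj in page.get("Contents", []):
        head = s3.head_object(Bucket=BUCKET, Key=obj["Key"])
        if head.get("ContentEncoding") in BAD_ENCODINGS:
            # Copy the object onto itself with REPLACE, preserving the
            # Content-Type but omitting Content-Encoding entirely –
            # no download/upload round trip required.
            s3.copy_object(
                Bucket=BUCKET,
                Key=obj["Key"],
                CopySource={"Bucket": BUCKET, "Key": obj["Key"]},
                MetadataDirective="REPLACE",
                ContentType=head["ContentType"],
            )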

The code aims to be short and sweet, and sure enough, post-execution, we get the response without the offending header:

$ curl -sI http://www.mysite.com/static/css/style.css | grep Content
Content-Type: text/css
Content-Length: 7073

This swift diagnosis and resolution would not have been possible had the tooling in use not been open source. Many times I was trying to figure out why something behaved the way it did, and while I wasn't familiar with the code, I could read enough of it to reason about how things work in general and apply that reasoning to my own fix.

Support open source where possible, and happy hunting!

Read more in the relevant standard, RFC 2616.