Reduce AWS Lambda Latencies with Keep-Alive in Python

It starts, as many stories do, with a question. On September 10th, AWS Serverless Hero Luc van Donkersgoed shared his observations on the relationship between increased request rate and reduced latency when using AWS Lambda. This is always an interesting conversation, and sure enough, other AWS Heroes like myself were curious about some of the outlier behaviors and what exactly goes into each request. AWS Data Hero Alex DeBrie and AWS Container Hero Vlad Ionescu both asked excellent questions about the setup and the behaviors, leading Luc to share that he was seeing DNS lookups that didn’t make sense to him.

After asking a couple more questions of my own, I rolled up my sleeves and dug into the what, how, and why.

getting ready to read things and hit them with sticks

I dive into all parts of the stack in use to try to understand why Luc’s code is seeing DNS lookups.
For example, if your function needs to call AWS S3 or a Twilio API, we usually provide the domain name, and have the code or library perform a request to a Domain Name System (DNS) server to return the current IP address, and then communicate using the IP address. This is a network call and can be expensive (in milliseconds) if it’s performed more frequently than the DNS response’s Time To Live (TTL) – kind of like an expiration date. The DNS lookup adds some more latency to your overall call, which is why many systems will cache DNS responses until the TTL is expired, and then make a new call. If you perform DNS lookups when not needed, that’s adding latency unnecessarily. Read the tweet thread for more!

I arrive at two possible solutions:

  1. If the Python code calls more than 10 AWS service endpoints, it will trigger a DNS lookup, as urllib3's PoolManager will only maintain 10 connections (set by botocore defaults) and will need to recycle connections if that limit is exceeded.
  2. Since we’re unlikely to be hitting the limit of 10, something else is at play.
    I found that the default behavior of boto3 is to not use Keep-Alive, which explains why the occasional connection is reset, triggering a DNS lookup. (Read the tweet thread for the full discovery.)

Using Keep-Alive is nothing new, and was covered quite well by AWS Serverless Hero Yan Cui back in 2019 for Node. It’s even in the official AWS Documentation, citing Yan’s article for the proposed update. Thanks Yan!


There’s precious little literature on using Keep-Alive for Python Lambdas that I could find, leading to issues like Luc’s and reports like this one, so I decided to dig a little further. Knowing now that Keep-Alive is off by default for users of the popular boto3 package for interacting with AWS services, I wanted to explore what that looks like in practice.

I decided to pattern an app after Yan’s example – a function that receives an event body and persists it to DynamoDB. All in all, not too complex an operation – we perform a single DNS lookup for the DynamoDB service endpoint, and then use the response IP address to connect over HTTP to put an object into the DynamoDB table.

After rewriting the same function in Python, I was able to test the same kind of behavior Yan did: calling the function once per second to isolate any concurrency concerns, replicating Luc’s test. This has the benefit of reusing the same Lambda execution context (no cold starts), and we can see that latencies range from 7 to 20 milliseconds for the same operation:

filtered log view showing only the latency for put_item calls to DynamoDB for 30 seconds

So far, so good – pretty much the same. The overall values are lower than Yan’s original experiment, which I attribute to the entire Lambda ecosystem improving, but we can see there’s variance and we often enter double-digit latencies, when we know that the DynamoDB operation is likely to only take 6-7 milliseconds.

left side shows spiky responses; right side shows most responses are fast, with some slower outliers

As Yan showed in his approach (adapted from Matt Levine’s talk snippets), he was able to reconstruct the AWS SDK config by rebuilding the lowest-level HTTP agent the library relies on to make calls, and thereby set the Keep-Alive behavior. This has since been obsoleted by the AWS Node.js SDK adding an environment variable to enable keep-alive behavior, which is awesome! But what about Python? 🐍

In the recent release of botocore 1.27.84 we can modify the AWS Config passed into the client constructor:

# before:
import boto3
client = boto3.client("dynamodb")

# after:
import boto3
from botocore.config import Config
client = boto3.client("dynamodb", config=Config(tcp_keepalive=True))

With the new configuration in place, if you try this on the AWS python3.9 execution runtime, you’ll get this error:
[ERROR] TypeError: Got unexpected keyword argument 'tcp_keepalive'

While the AWS Python runtime includes versions of boto3 and botocore, they do not yet support the new tcp_keepalive parameter – the runtime currently ships:
– boto3 1.20.32
– botocore 1.23.32

So we have to solve another way.

The documentation tells us that we can configure this via a config file in ~/.aws/config, added in version 1.9.17 back in October 2018 – presumably when all the Keep Alive conversations were fresh in folks’ minds.
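For reference, the config-file form looks like this (per the botocore configuration docs, if I’m reading them right):

```ini
[default]
tcp_keepalive = true
```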

However, since the Lambda runtime environment disallows writing to that path, we can’t easily write the config file. We might be able to create a custom container image and place a file in that path, but that’s a bit harder, and we’d lose some of the benefits of the AWS prebuilt runtime, such as fast startup – which, in a latency-oriented article, seems like the wrong choice 😁.

Using the Serverless Framework CLI with serverless-python-requirements (what I’m currently using), or AWS SAM, you can package an updated version of boto3 and botocore, and deploying the updated application lets us leverage the new setting in a Lambda environment. You may already be using one of these approaches for a more evolved application.
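With serverless-python-requirements, that can be as simple as pinning newer versions in requirements.txt (boto3 1.24.84 is the release paired with botocore 1.27.84, if I have the version mapping right):

```
boto3>=1.24.84
botocore>=1.27.84
```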
Hopefully 🤞 the Lambda Runtime will be updated to include these versions in the near future, so we don’t have to package these dependencies to get this specific feature.

With the updated packages, we can pass the custom Config with tcp_keepalive enabled (as shown above), and observe more constant performance for the same style of test:

left: much smoother!! right: narrower distribution of values, max 8.50 ms

There’s an open request for the config value to be available via environment variable – check it out and give it a 👍 to add your desire and subscribe via GitHub notifications.

Enjoy lower, more predictable latencies with Keep Alive!

Check out the example code here: https://github.com/miketheman/boto3-keep-alive


Postscript: If you’re interested in pinpointing calls for performance, I recommend checking out Datadog’s APM and associated ddtrace module to see the specifics of every call to AWS endpoints and associated latencies, as well as other parts of your application stack. There’s a slew of other vendors that can help surface these metrics.

Container-to-Container Communication

Question ❓

In a containerized world, is there a material difference between communicating over local network TCP vs local Unix domain sockets?

Given an application with more than one container, where the containers need to talk to each other, is there an observable difference in latency or throughput when using one inter-component communication method over another, from an end user’s perspective?


Background 🌆

There’s this excellent write-up on the comparison back in 2005, and many things have changed since then, especially around the optimizations in the kernel and networking stack, along with the container runtime that is usually abstracted away from the end user’s concerns. Redis benchmarks from a few years ago also point out significant improvements using Unix sockets when the server and benchmark are co-located.

There are other studies out there that have their own performance comparisons and produce images like these – and every example is going to have its own set of controls and caveats.

I wanted to use a common-ish scenario: a web service running on cloud infrastructure I don’t own.

Components 🧩

For the experiment, I chose this set of components:

  • nginx (web server) – terminate SSL, proxy requests to upstream web server
  • gunicorn (http server) – speaks HTTP and WSGI protocol, runs application
  • starlette (python application framework) – handle request/response
components

I considered using FastAPI for the application layer, but since I didn’t need any of its extra features, I didn’t add it. It’s a great framework, though – check it out!

As the gunicorn server runs the starlette framework and the custom application code, I will refer to them as a single component called "app" from here on, since the tests compare the behavior between nginx and the "app" layer, using overall user-facing latency and throughput as the main result.

nginx 🌐

nginx is awesome: really powerful, highly configurable, and with many built-in features. I’ve been using it for years, and it’s my go-to choice for a reliable web server.

For our purposes, we need an external port to listen for inbound requests, and a stanza to proxy the requests to the upstream application server.
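A sketch of the relevant nginx stanza – the ports, hostnames, and socket paths here are my own placeholders, not the exact experiment config:

```nginx
server {
    listen 80;

    location / {
        # "tcp" architecture: proxy over the container network
        proxy_pass http://app:8000;

        # "sharedvolume" / "combined": proxy over a Unix domain socket instead
        # proxy_pass http://unix:/shared/app.sock;
    }
}
```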

You might ask: Why use nginx at all, if Gunicorn can terminate connections directly? Well, there’s often a class of problems that nginx is better suited to handling than a fully-fledged Python runtime – examples include static file serving (robots.txt, favicon.ico, et al.) as well as caching, header or path rewriting, and more.

nginx is commonly used in front of all manner of applications.

Python Application 🐍

To support the testing of a real-world scenario, I’m creating a JSON response, as that’s how most web applications communicate today. This often incurs some serialization overhead in the application.

I took the example from starlette and added a couple of tweaks to emit the current timestamp and a random number. This prevents any potential caching occurring in any of the layers and polluting the experiment.

Here’s what the main request/response now looks like:

import datetime
import random

from starlette.responses import JSONResponse


async def homepage(request):
    # The timestamp and random number prevent any layer from caching the response.
    return JSONResponse(
        {
            "hello": "world",
            "utcnow": datetime.datetime.utcnow().isoformat(),
            "random": random.random(),
        }
    )

A response looks like this:

{
  "hello": "world",
  "utcnow": "2021-12-27T00:31:42.383861",
  "random": 0.5352573557347882
}

And while there are ways to improve JSON serialization speed, or tweak the Python runtime, I wanted to keep the experiment with defaults, since the point isn’t about maximizing total throughput, rather seeing the difference between the architectures.

Cloud Environment ☁️

For this experiment, I chose Amazon Elastic Container Service (ECS) with AWS Fargate compute. These choices provide a way to construct all the pieces needed in a repeatable fashion in the shortest amount of time, and abstract a lot of the infra concerns. To set everything up, I used AWS Copilot CLI, an open-source tool that does even more of the heavy lifting for me.

The Copilot Application type of Load Balanced Web Service will create an Application Load Balancer (ALB), which is the main external component outside my application stack, but an important one for actual scaling, SSL termination at the edge, and more. For the sake of this experiment, we assume (possibly incorrectly!) that ALBs will perform consistently for each test.

Architectures 🏛

Using containers, I wanted to test multiple architecture combinations to see which one proved the "best" when it came to user-facing performance.

Example 1: "tcp"

The communication between the nginx container and the app container takes place over the dedicated network created by the Docker runtime (or Container Network Interface in Fargate). This means there’s TCP overhead between nginx and the app – but is it significant? Let’s find out!

Example 2: "sharedvolume"

Here we create a shared volume between the nginx container and the app container. Then we use a Unix domain socket to communicate between the containers using the shared volume.
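On the app side, the only difference between the architectures is gunicorn’s bind target – the port and socket path below are my placeholders:

```shell
# "tcp": listen on the container network
gunicorn --bind 0.0.0.0:8000 app:app

# "sharedvolume" / "combined": listen on a Unix socket on the shared volume
gunicorn --bind unix:/shared/app.sock app:app
```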

This architecture maintains a separation of concerns between the two components, which is generally a good practice, so as to have a single essential process per container.

Example 3: "combined"

In this example, we combine both nginx and app in a single container, and use local Unix sockets within the container to communicate.

The main difference here is that we add a process supervisor to run both nginx and app runtimes – which some may consider an anti-pattern. I’m including it for the purpose of the experiment, mainly to uncover if there’s performance variation between a local volume and a shared volume.

This approach simulates what we’d expect in a single "server" scenario – where a traditional instance (hardware or virtual) runs multiple processes and all have some access to a local shared volume for inter-process communication (IPC).

To make this a fair comparison, I’ve also doubled the CPU and memory allocation.

Copilot ✈️

Time to get off the ground.

Copilot CLI assumes you already have an app prepared in a Dockerfile. The Quickstart has you clone a repo with a sample app – so instead I’ve created a Dockerfile for each of the architectures, along with a docker-compose.yml file for local orchestration of the components.

Then I’ll be able to launch and test each one in AWS with its own isolated set of resources – VPC, networking stack, and more.

I’m not going into all the details of how to install Copilot and launch the services, for that, read the Copilot CLI documentation (linked above), and read the experiment code.

This test is using AWS Copilot CLI v1.13.0.

Test Protocol 🔬

There’s an ever-growing list of tools and approaches to benchmark web request/response performance.

For the sake of time, I’ll use a single one here, to focus on the comparison of the server-side architecture performance.

All client-side requests will be performed from an AWS CloudShell instance running in the same AWS Region as the running services (us-east-1) to isolate a lot of potential network chatter. It’s not a perfect isolation of potential variables, but it’ll have to do.

To baseline, I ran each test locally (see later).

Apache Bench

Apache Bench, or ab, is a common tool for testing web endpoints, and is not specific to Apache httpd servers. I’m using: Version 2.3 <$Revision: 1879490 $>

  I chose single concurrency and ran 1,000 requests. I also pass -l to ignore variable response lengths, since the app responds with a random number of varying length, and ab considers different-length responses failures unless told otherwise.

ab -n 1000 -c 1 -l http://service-target....

Each test should take less than 5 seconds.

The important stats I’m comparing are:

  • Requests per second (mean) – higher is better
  • Time per request (mean) – lower is better
  • Duration at 99th percentile. 99% of all requests completed within (milliseconds) – lower is better

To reduce variance, I also "warmed up" each container by first running the test with a larger number of requests.

Local Test

To establish a baseline, I ran the same benchmark test against the local services, using Docker Desktop 4.3.2 (72729) on macOS. These aren’t demonstrative of a real user experience, but they provide a sense of performance before launching the architectures in the cloud.

arch                  reqs per sec   ms per req   99th pctile (ms)
tcp (local)           679.77         1.471        2
sharedvolume (local)  715.62         1.397        2
combined (local)      705.55         1.871        2

In the local benchmark, the clear loser is the tcp architecture, and the sharedvolume has a slight edge on combined – but not a huge win. No real difference in the 99th percentiles – requests are being served in under 2ms.

This shows that the shared resources for the combined architecture are near the performance of the sharedvolume – possibly due to Docker Desktop’s bridging and network abstraction. A better comparison might be tested on a native Linux machine.

Remote Test

Once I ran through the setup steps using Copilot CLI to create the environment and services, I performed the same ab test, and collected the results in this table:

arch                 reqs per sec   ms per req   99th pctile (ms)
tcp (aws)            447.57         2.234        5
sharedvolume (aws)   394.55         2.535        6
combined (aws)       428.60         2.333        4

With the remote tests, it was a minor surprise that the combined service performed better than the sharedvolume service, given that it performed worse locally.

The bigger surprise was to find that the tcp architecture wins slightly over the socket-based architectures.

This could be due to the way ECS Fargate uses the Firecracker microvm, and has tuned the network stack to perform faster than using a shared socket on a volume when communicating between two containers on the same host machine. The best part is – as a consumer of a utility, I don’t care, as long as it’s performing well!

ARM/Graviton Remote Test

With the Copilot manifest defaulting to the Intel x86 platform, let’s also test performance on the linux/arm64 platform (Graviton2, probably).

For this to work, I had to rebuild the nginx sidecars manually, as Copilot doesn’t yet build and push sidecar images. I also had to update the manifest.yml to set the desired platform, and deploy the service with copilot svc deploy .... (The combined version needed some Dockerfile surgery, too.)

Results:

arch                     reqs per sec   ms per req   99th pctile (ms)
tcp (aws/arm)            475.03         2.105        3
sharedvolume (aws/arm)   451.71         2.214        4
combined (aws/arm)       433.94         2.304        4

We can see that all the stats are better on the Graviton architecture, lending some more credibility to studies done by other benchmark posts and papers.

Aside: The linux/arm64-based container images were tens of megabytes smaller, so if image size and network pull time are concerns, these will be a bit faster to pull.

Other Testing Tools

If you’re interested in performing longer tests, or emulating different user types, check out some of these other benchmark tools I considered and didn’t use for this experiment:

  • Python – https://locust.io/ https://molotov.readthedocs.io/
  • JavaScript – https://k6.io/
  • Golang – https://github.com/rakyll/hey
  • C – https://github.com/wg/wrk

There’s also plenty of vendors that build out extensive load testing platforms – I’m not covering any of them here. If you run a test with these, would definitely like to see your results!

Conclusions 💡

Using the Copilot CLI wasn’t without some missteps – the team is hard at work improving the documentation, and are pretty responsive in both their GitHub Issues and Discussions, as well as their Gitter chat room – always helpful when learning a new framework. Once I got the basics, being able to establish a reproducible stack is valuable to the experimentation process, as I was able to provision and tear down the stack easily, as well as update with changes relatively easily.

Remember: these are micro-benchmarks, on not highly-tuned environments or real-world workloads. This test was designed to test a very specific type of workload, which may change as more concurrency is introduced, CPU or memory saturation is achieved, auto-scaling of application instances comes into play, and more.

Your mileage may vary.

When I started this experiment, I assumed the winner would be a socket-based communication architecture (sharedvol or combined), from existing literature, and it also made sense to me. The overhead of creating TCP packets between the processes would be eliminated, and thus performance would be better.

However, in these benchmarks, I found that using the TCP communication architecture performs best, possibly due to optimizations beyond our view in the underlying stack. This is precisely what I want from an infrastructure vendor – for them to figure out how to optimize performance without having to re-architect an application to perform better in a given deployment scenario.

The main conclusion I’ve drawn is: Using TCP to communicate between containers is best, as it affords the most flexibility, follows established patterns, and performs slightly better than the alternatives in a real(ish) world scenario. And if you can, use Graviton2 (ARM) CPU architecture.

Go forth, test your own scenarios, and let me know what you come up with. (Don’t forget to delete your resource when done!! 💸 )

Counts are good, States are better

Datadog is great at pulling in large amounts of metrics, and provides a web-based platform to explore, find, and monitor a variety of systems.

One such system integration is PostgreSQL (aka ‘Postgres’, ‘PG’) – a popular Open Source object-relational database system, ranking #4 in its class (at the time of this writing), with over 15 years of active development and an impressive list of featured users.
It’s been on an upwards trend for the past couple of years, fueled in part by Heroku Postgres, and has spun up entire companies supporting running Postgres, as well as Amazon Web Services providing PG as one of their engines in their RDS offering.

It’s awesome at a lot of things that I won’t get into here, but it’s definitely my go-to choice for relational data.

One of the hardest parts of any system is determining whether the current state of the system is better or worse than before, and tracking down the whys, hows and wheres it got to a worse state.

That’s where Datadog comes in – the Datadog Agent has included PG support since 2011, and over the past 5 years, has progressively improved and updated the mechanisms by which metrics are collected. Read a summary here.

Let’s Focus

Postgres has a large number of metrics associated with it, and there’s much to learn from each.

The one metric that I’m focusing on today is the “connections” metric.

By periodically collecting the count of connections, we can examine the data points over time and draw lines to show the values.
This is built in to the current Agent code, named postgresql.connections in Datadog, by selecting the value of the numbackends column from the pg_stat_database view.

01-default-connections

Another two metrics exist, introduced into the code around 2014, that assist with alerting on the reported counts.
These are postgresql.max_connections and postgresql.percent_usage_connections.

(Note: Changing PG’s max_connections value requires a server restart and in a replication cluster has other implications.)

The latter, percent_usage_connections, is a calculated value returning ‘current / max’, which you could compute yourself in an alert definition if you wanted to account for other variables.
It is normally sufficient for these purposes as-is.
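The arithmetic is as simple as it sounds – a quick sanity check:

```python
current_connections = 15
max_connections = 100

# What the check reports as postgresql.percent_usage_connections:
percent_usage = current_connections / max_connections
print(percent_usage)  # 0.15
```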

02-pct_used-connections

A value of postgresql.percent_usage_connections:0.15 tells us that we’re using 15% of our maximum allowable connections. If this hits 1, then we will receive this kind of response from PG:

FATAL: too many connections for role...

And you likely have a Sad Day for a bit after that.

Setting an alert threshold at 0.85 – or a Change Alert to watch the percent change in the values over the previous time window – should prompt an operator to investigate the cause of the connections increase.
This can happen for a variety of reasons such as configuration errors, SQL queries with too-long timeouts, and a host of other possibilities, but at least we’ll know before that Sad Day hits.

Large Connection Counts

If you’ve launched your application, and nobody uses it, you’ll have very low connection counts, you’ll be fine. #dadjoke

If your application is scaling up, you are probably running more instances of said application, and if it uses the database (which is likely), the increase in connections to the database is typically linear with the count of running applications.

Some PG drivers offer connection pooling to the app layer, so as methods execute, instead of opening a fresh connection to the database (which is an expensive operation), the app maintains some amount of “persistent connections” to the database, and the methods can use one of the existing connections to communicate with PG.

This works for a while, especially if the driver can handle application concurrency, and if the overall count of application servers remains low.

The Postgres Wiki has an article on handling the number of database connections, in which the topic of a connection pooler comes up.
An excerpt:

If you look at any graph of PostgreSQL performance with number of connections on the x axis
and tps on the y access [sic] (with nothing else changing), you will see performance climb as
connections rise until you hit saturation, and then you have a “knee” after which performance
falls off.

The need for connection pooling is well established, and the decision to not have this part of core is spelled out in the article.

So we install a PG connection pooler, like PGBouncer (or pgpool, or something else), configure it to connect to PG, and point our apps at the pooler.

In doing so, we configure the pooler to establish some amount of connections to PG, so that when an application requests a connection, it can receive one speedily.

Interlude: Is Idle a Problem?

Over the past 4 years, I’ve heard the topic raised again and again:

If the max_connections is set in the thousands, and the majority of them are in idle state,
is that bad?

Let’s say that we have 10 poolers, and each establishes 100 connections to PG, for a max of 1000. These poolers serve some large number of application servers, but have the 1000 connections at-the-ready for any application request.

It is entirely possible that most of the time, a significant portion of these established connections are idle.

You can see a given connection’s state in the pg_stat_activity view, with a query like this:

SELECT datname, state, COUNT(state)
FROM pg_stat_activity
GROUP BY datname, state
HAVING COUNT(state) > 0;

A sample output from my local dev database that’s not doing much:

datname  | state  | count
---------+--------+-------
postgres | active |     1
postgres | idle   |     2
(2 rows)

We can see that there is a single active connection to the postgres database (that’s me!) and two idle connections from a recent application interaction.

If it’s idle, is it harming anyone?

A similar question was asked on the PG mailing list in 2015, to which Tom Lane responded on the topic of idle connections (see link for the full quote):

Those connections have to be examined when gathering snapshot information, since you don’t know that they’re idle until you look.
So the cost of taking a snapshot is proportional to the total number of connections, even when most are idle.
This sort of situation is known to aggravate contention for the ProcArrayLock, which is a performance bottleneck if you’ve got lots of CPUs.

So we now know why idling connections can impact performance, despite not doing anything, especially with modern DBs that we scale up to multi-CPU instances.

Back to the show!

Post-Pooling Idling

Now that we know that high connection counts are bad, and that we can cut the total count of connections with pooling strategies, we must ask ourselves: how many connections do we actually need to have established, without keeping so many idling connections that they impact performance?

We could log in, run the SELECT statement from before, and inspect the output, or we could add this to our Datadog monitoring, and trend it over time.

The Agent docs show how to write an Agent Check, and you could follow the current postgres.py to write another custom check, or you could use the nifty custom_metrics syntax in the default postgres.yaml to extend the check to perform more checks.

Here’s an example:

custom_metrics:
  - # Postgres Connection state
    descriptors:
      - [datname, database]
      - [state, state]
    metrics:
      COUNT(state): [postgresql.connection_state, GAUGE]
    query: >
      SELECT datname, state, %s FROM pg_stat_activity
      GROUP BY datname, state HAVING COUNT(state) > 0;
    relation: false

Wait, what was that?

Let me explain each key in this, in an order that made sense to me, instead of alphabetically.

  • relation: false informs the check to perform this once per collection, not against each of any specified tables (relations) that are part of this database entry in the configuration.
  • query: This is pretty similar to our manual SELECT, with one key differentiation – the %s informs the query to replace this with the contents of the metrics key.
  • metrics: For each entry in here, the query will be run, substituting the key into the query. The metric name and type are specified in the value.
  • descriptors: Each column returned has a name, and here’s how we convert the returned name to a tag on the metric.

Placing this config section in our postgres.yaml file and restarting the Agent gives us the ability to define a query like this in a graph:

sum:postgresql.connection_state{*} by {state}

03-conn_state-by-state

As can be seen in this graph, the majority of my connections are idling, so I might want to re-examine my configuration settings on application or pooler configuration.

Who done it?

Let’s take this one step further, and ask ourselves – now that we know the state of each connection, how might we determine which of our many applications connecting to PG is idling, and target our efforts?

As luck would have it, back in PostgreSQL 9.0 (numbered 8.5 during development), a change was added to allow clients to set an application_name value during connection, and this value is available in our pg_stat_activity view, as well as in logs.

This typically involves setting a configuration value at connection startup. In Django, this might be done with:

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        # ...
        'OPTIONS': {
            'application_name': 'myapp',
        },
        # ...
    },
}

No matter what client library you’re using, most have the facility to pass extra arguments along, some in the form of a database connection URI, so this might look like:

postgresql://other@localhost/otherdb?connect_timeout=10&application_name=myapp

Again, this all depends on your client library.

I can see clearly now

So now that we have the configuration in place, and have restarted all of our apps, a modification to our earlier Agent configuration code for postgres.yaml would look like:

custom_metrics:
  - # Postgres Connection state
    descriptors:
      - [datname, database]
      - [application_name, application_name]
      - [state, state]
    metrics:
      COUNT(state): [postgresql.connection_state, GAUGE]
    query: >
      SELECT datname, application_name, state, %s FROM pg_stat_activity
      GROUP BY datname, application_name, state HAVING COUNT(state) > 0;
    relation: false

With this extra dimension in place, we can craft queries like this:

sum:postgresql.connection_state{state:idle} by {application_name}

04-conn_state-idle-by-app_name

So now I can see that my worker-medium application has the most idling connections, so there’s some tuning to be done here – either I open too many connections for the application, or it’s not doing much.

I can confirm this by refining the query structure to narrow in on a single application_name:

sum:postgresql.connection_state{application_name:worker-medium} by {state}

05-conn_state-app_name-by-state

So now I’ve applied a methodology for surfacing connection states and increased visibility into what’s going on, before making any changes to resolve the issue.

Go forth, measure, and learn how your systems evolve!