Counts are good, States are better

Datadog is great at pulling in large amounts of metrics, and provides a web-based platform to explore, find, and monitor a variety of systems.

One such system integration is PostgreSQL (aka ‘Postgres’, ‘PG’) – a popular Open Source object-relational database system, ranking #4 in its class (at the time of this writing), with over 15 years of active development, and an impressive list of featured users.
It’s been on an upwards trend for the past couple of years, fueled in part by Heroku Postgres, and has spun up entire companies supporting running Postgres, as well as Amazon Web Services providing PG as one of their engines in their RDS offering.

It’s awesome at a lot of things that I won’t get into here, but it’s definitely my go-to choice for relational data.

One of the hardest parts of running any system is determining whether its current state is better or worse than before, and tracking down the why, how and where when it has gotten worse.

That’s where Datadog comes in – the Datadog Agent has included PG support since 2011, and over the past 5 years, has progressively improved and updated the mechanisms by which metrics are collected. Read a summary here.

Let’s Focus

Postgres has a large number of metrics associated with it, and there’s much to learn from each.

The one metric that I’m focusing on today is the “connections” metric.

By establishing a periodic collection of the count of connections, we can examine the data points over time and draw lines to show the values.
This is built into the current Agent code, named postgresql.connections in Datadog, and is collected by selecting the value of the numbackends column from the pg_stat_database view.
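Under the hood, the collection boils down to a query roughly like this (a sketch; the Agent’s actual query pulls additional columns):

SELECT datname, numbackends FROM pg_stat_database;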


Another two metrics exist, introduced into the code around 2014, that assist with alerting on the reported counts.
These are postgresql.max_connections and postgresql.percent_usage_connections.

(Note: Changing PG’s max_connections value requires a server restart and in a replication cluster has other implications.)

The latter, percent_usage_connections, is a calculated value, returning ‘current / max’, which you could compute yourself in an alert definition if you wanted to account for other variables.
It is normally sufficient for these purposes.
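If you ever want to compute that ratio yourself, a rough SQL equivalent (a sketch, not the Agent’s exact query) is:

SELECT SUM(numbackends)::float / current_setting('max_connections')::int
FROM pg_stat_database;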


A value of postgresql.percent_usage_connections:0.15 tells us that we’re using 15% of our maximum allowable connections. If this hits 1, then we will receive this kind of response from PG:

FATAL: too many connections for role...

And you likely have a Sad Day for a bit after that.

Setting an alert threshold at 0.85 – or a Change Alert to watch the percent change in the values over the previous time window – should prompt an operator to investigate the cause of the connections increase.
This can happen for a variety of reasons such as configuration errors, SQL queries with too-long timeouts, and a host of other possibilities, but at least we’ll know before that Sad Day hits.
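As a sketch, a metric monitor along these lines would cover the simple threshold case (the 0.85 threshold and 5-minute window are just examples):

avg(last_5m):avg:postgresql.percent_usage_connections{*} > 0.85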

Large Connection Counts

If you’ve launched your application and nobody uses it, you’ll have very low connection counts, and you’ll be fine. #dadjoke

If your application is scaling up, you are probably running more instances of said application, and if it uses the database (which is likely), the increase in connections to the database is typically linear with the count of running applications.

Some PG drivers offer connection pooling to the app layer, so as methods execute, instead of opening a fresh connection to the database (which is an expensive operation), the app maintains some amount of “persistent connections” to the database, and the methods can use one of the existing connections to communicate with PG.

This works for a while, especially if the driver can handle application concurrency, and if the overall count of application servers remains low.

The Postgres Wiki has an article on handling the number of database connections, in which the topic of a connection pooler comes up.
An excerpt:

If you look at any graph of PostgreSQL performance with number of connections on the x axis
and tps on the y access [sic] (with nothing else changing), you will see performance climb as
connections rise until you hit saturation, and then you have a “knee” after which performance
falls off.

The need for connection pooling is well established, and the decision not to include it in core is spelled out in the article.

So we install a PG connection pooler, like PgBouncer (or pgpool, or something else), configure it to connect to PG, and point our apps at the pooler.

In doing so, we configure the pooler to establish some amount of connections to PG, so that when an application requests a connection, it can receive one speedily.
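To illustrate, a minimal PgBouncer configuration might look something like this; the database name, pool sizes and pooling mode here are placeholders, not recommendations:

[databases]
mydb = host=127.0.0.1 port=5432 dbname=mydb

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
pool_mode = transaction
default_pool_size = 20
max_client_conn = 500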

Interlude: Is Idle a Problem?

Over the past 4 years, I’ve heard the topic raised again and again:

If the max_connections is set in the thousands, and the majority of them are in idle state,
is that bad?

Let’s say that we have 10 poolers, and each establishes 100 connections to PG, for a max of 1000. These poolers serve some large number of application servers, but have the 1000 connections at-the-ready for any application request.

It is entirely possible that most of the time, a significant portion of these established connections are idle.

You can see a given connection’s state in the pg_stat_activity view, with a query like this:

SELECT datname, state, COUNT(state)
FROM pg_stat_activity
GROUP BY datname, state
HAVING COUNT(state) > 0;

A sample output from my local dev database that’s not doing much:

datname  | state  | count
postgres | active |     1
postgres | idle   |     2
(2 rows)

We can see that there is a single active connection to the postgres database (that’s me!) and two idle connections from a recent application interaction.

If it’s idle, is it harming anyone?

A similar question was asked on the PG Mailing List in 2015, and Tom Lane responded on the topic of idle connections (see link for full quote):

Those connections have to be examined when gathering snapshot information, since you don’t know that they’re idle until you look.
So the cost of taking a snapshot is proportional to the total number of connections, even when most are idle.
This sort of situation is known to aggravate contention for the ProcArrayLock, which is a performance bottleneck if you’ve got lots of CPUs.

So we now know why idle connections can impact performance despite not doing anything, especially on the modern multi-CPU instances we scale our databases up to.

Back to the show!

Post-Pooling Idling

Now that we know that high connection counts are bad, and that pooling strategies let us cut the total count of connections, we must ask ourselves: how many connections do we actually need to keep established, without maintaining a high count of idle connections that impact performance?

We could log in, run the SELECT statement from before, and inspect the output, or we could add this to our Datadog monitoring, and trend it over time.

The Agent docs show how to write an Agent Check, and you could follow that to write another custom check, or you could use the nifty custom_metrics syntax in the default postgres.yaml to extend the check to collect more metrics.

Here’s an example:

  - # Postgres Connection state
    descriptors:
      - [datname, database]
      - [state, state]
    metrics:
      COUNT(state): [postgresql.connection_state, GAUGE]
    query: >
      SELECT datname, state, %s FROM pg_stat_activity
      GROUP BY datname, state HAVING COUNT(state) > 0;
    relation: false

Wait, what was that?

Let me explain each key in this, in an order that made sense to me, instead of alphabetically.

  • relation: false informs the check to perform this once per collection, not against each of any specified tables (relations) that are part of this database entry in the configuration.
  • query: This is pretty similar to our manual SELECT, with one key difference – the %s placeholder is replaced with the contents of the metrics key.
  • metrics: For each entry in here, the query will be run, substituting the key into the query. The metric name and type are specified in the value.
  • descriptors: Each column returned has a name, and here’s how we convert the returned name to a tag on the metric.

Placing this config section in our postgres.yaml file and restarting the Agent gives us the ability to define a query like this in a graph:

sum:postgresql.connection_state{*} by {state}


As can be seen in this graph, the majority of my connections are idling, so I might want to re-examine my application or pooler configuration.

Who done it?

Let’s take this one step further, and ask ourselves – now that we know the state of each connection, how might we determine which of our many applications connecting to PG is idling, and target our efforts?

As luck would have it, back in the PG 8.5 development cycle (released as 9.0), a change was added to allow clients to set an application_name value during the connection, and this value is available in our pg_stat_activity view, as well as in logs.

This typically involves setting a configuration value at connection startup. In Django, this might be done with:

# settings.py (relevant portion)
DATABASES = {
  'default': {
    'ENGINE': 'django.db.backends.postgresql',
    'OPTIONS': {
      'application_name': 'myapp',
    },
  },
}

No matter what client library you’re using, most have the facility to pass extra arguments along, some in the form of a database connection URI, so this might look like:
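For example, with a libpq-style URI (host, credentials and database name are placeholders):

postgres://myuser:mypass@db.example.com:5432/mydb?application_name=myapp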


Again, this all depends on your client library.

I can see clearly now

So now that we have the configuration in place, and have restarted all of our apps, a modification to our earlier Agent configuration code for postgres.yaml would look like:

  - # Postgres Connection state
    descriptors:
      - [datname, database]
      - [application_name, application_name]
      - [state, state]
    metrics:
      COUNT(state): [postgresql.connection_state, GAUGE]
    query: >
      SELECT datname, application_name, state, %s FROM pg_stat_activity
      GROUP BY datname, application_name, state HAVING COUNT(state) > 0;
    relation: false

With this extra dimension in place, we can craft queries like this:

sum:postgresql.connection_state{state:idle} by {application_name}


Now I can see that my worker-medium application has the most idle connections, so there’s some tuning to be done here – either I’m opening too many connections for the application, or it’s not doing much.

I can confirm this by refining the query structure to narrow in on a single application_name:

sum:postgresql.connection_state{application_name:worker-medium} by {state}


So now I’ve applied a methodology of surfacing connection states and increased my visibility into what’s going on, before making any changes to resolve the issue.

Go forth, measure, and learn how your systems evolve!

There’s a New Player in Town, named Habitat

You may have heard some buzz around the launch of Chef’s new open source project Habitat (still in beta), designed to change a bit of how we think about building and delivering software applications in the modern age.

There’s a lot of press, a video announcement, and even a Food Fight Show where we got to chat with some of the brains behind the framework, and get into some of the nitty-gritty details.

In the vibrant Slack channel where a lot of the fast-paced discussion happens with a bunch of the core habitat developers, a community member had brought up a pain point, as many do.
They were trying to build a Python application, and had to resort to fiddling pretty hard with either the PYTHONPATH variable or with sys.path post-dependency install.
One even used Virtualenv inside the isolated environment.

I had worked on making an LLVM compiler package, and since LLVM is notoriously slow to compile on my laptop, I used the waiting time to get a Python web application working.

My setup is OSX 10.11.5, with Docker (native) 1.12.0-rc2 (almost out of beta!).

I decided to use the Flask web framework to carry out a Hello World, as it would prove out a few pieces:

  • Using Python to install dependencies using pip
  • Adding “local” code into a package
  • Importing the Python package in the app code
  • Executing the custom binary that the Flask package installs

Key element: it needed to be as simple as possible, but no simpler.

On my main machine, I wrote my application.
It listens on port 5000, and responds with a simple phrase.
Yay, I wrote a website.

Then I set about to packaging it into a deliverable where, in habitat’s nomenclature, it becomes a self-contained package, which can then be run via the habitat supervisor.

This all starts with getting the habitat executable, conveniently named hab.
Thanks to a recent addition to the Homebrew Casks family, installing habitat was as simple as:

$ brew cask install hab

habitat version 0.7.0 is in use during the authoring of this article.

I sat down and wrote a plan file that describes how to put the pieces together.

There’s a bunch of phases in the build cycle that are fully customizable, or “stub-able” if you don’t want them to take the default action.
Some details were garnered from here, despite my package not being a binary.
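To give a flavor of the shape of a plan, here is a heavily trimmed sketch; it is not the actual file from the repo, and the version and dependencies are illustrative:

# plan.sh
pkg_origin=miketheman
pkg_name=python-hello
pkg_version=0.1.0
pkg_deps=(core/python)

# callbacks like do_build() and do_install() can be overridden or stubbed
# out when the defaults (which assume a configure/make-style build) don't apply
do_build() {
  return 0
}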

Once I got my package built, it was a matter of figuring out how to run it. One of the default modes is to export the entire thing as a Docker image, so I set about running that, to get a feel for the iterative development cycle of making the application work as configured within the habitat universe.

(This step usually isn’t the best one for regular application development, but it is good for figuring out what needs to be configured and how.)

# In first OSX shell
$ hab studio enter
[1][default:/src:0]# build
   python-hello: Build time: 0m36s
[2][default:/src:0]# hab pkg export docker miketheman/python-hello
Successfully built 2d2740a182fb

# In another OSX shell:
$ docker run -it -p 5000:5000 -p 9631:9631 miketheman/python-hello
hab-sup(MN): Starting miketheman/python-hello
hab-sup(GS): Supervisor cb719c1e-0cac-432a-8d86-afb676c3cf7f
hab-sup(GS): Census python-hello.default: 19b7533a-66ba-4c6f-b6b7-c011abd7dbe1
hab-sup(GS): Starting inbound gossip listener
hab-sup(GS): Starting outbound gossip distributor
hab-sup(GS): Starting gossip failure detector
hab-sup(CN): Starting census health adjuster
python-hello(SV): Starting
python-hello(O):  * Running on (Press CTRL+C to quit)

# In a third shell, or use a browser:
$ curl http://localhost:5000
Hello, World!

The code for this example can be found in this GitHub repo.
See the plan file and the hooks/ directory for Habitat-related code.
The src/ directory is the actual Python app.

At this point, I declared success.

There’s a large amount of other pieces to the puzzle that I hadn’t explored yet, but getting this part running was the first one.
Items like interacting with the supervisor, director, healthchecks, topologies – these have some basic docs, but there’s not a bevy of examples or use cases yet to lean upon for inspiration.

During this process I uncovered a couple of bugs, submitted some feedback, and the team is very receptive so far.
There’s still a bunch of rough edges to be polished down, many around the documentation, use cases and how the pieces fit together, and what benefit it all drives.

There appears to be some hooks for using Chef Delivery as well – I haven’t seen those yet, as I don’t use Delivery.
I will likely try looking at making a larger strawman deployment to test these pieces another time.

I am looking forward to seeing how this space evolves, and what place habitat will take in the ever-growing, ever-evolving software development life-cycle, as well as how the community approaches these concepts and terminology.

How Do You Let The Cat Out of the Bag?

On what has now come to be known as Star Wars Day, I thought it prudent to write about A New Hope, Mike-style.

A few weeks ago, I took a step in life that is a bit different from everything I’ve ever done before.
I know I’m likely to get questions about it, so I figured I would attempt to preemptively answer here.

I’ve left Datadog, a company I hold close and dear to my heart.

I started working as a consultant for Datadog in 2011 with some co-workers from an earlier position, and joined full-time in 2013. For the past 3 years, I’ve pretty much eaten, dreamt, and lived Datadog. It’s been an amazing ride.

Having the fortune to work with some of the smartest minds in the business, I was able to help build what I believe to be the best product in the marketplace of application and systems monitoring.

I still believe in the mission of building the best damn monitoring platform in the world, and have complete faith that the Datadog crew are up to the task.

Q: Were you let go?
A: No, I left of my own free will and accord.

Q: Why would you leave such a great place to work?
A: Well, 3 years (5 if you count the preliminary work) is a reasonable amount of time in today’s fast-paced market.
Over the course of my tenure, I learned a great many things, positively affected the lives of many, and grew in a direction that doesn’t exactly map to the current company’s vision for me.
There is likely a heavy dose of burnout in the mix as well.
Instead of letting it grow and fester until some sour outcome, I found it best to part ways as friends, knowing that we will definitely meet again, in some other capacity.
Taking a break to do some travel and focus on some non-work life goals for a short time felt like the right thing.

Q: Did some other company lure you away?
A: While I am lucky to receive a large amount of unsolicited recruiter email, I have not been hired by anyone else, rather choosing to take some time off to reflect on the past 20 years of my career, and figure out what it is that I want to try next.
I’m also trying a 30-day fitness challenge, something that has been consistently de-prioritized, in an attempt to get a handle on my fitness before jumping headfirst into the next life challenge, so recruiters – you will totally gain brownie points by not contacting me before June 4th.

Q: Are you considering leaving New York City?
A: A most emphatic No. I’ve lived in a couple of places in California, Austin TX, many locations in Israel, and now NYC. I really like the feel of this city.

Q: What about any Open Source you worked on?
A: Before I started at Datadog, and during my employment, I was lucky enough to have times when I was able to work on Open Source software, and will continue to do so as it interests me. It has never paid the bills, rather providing an interesting set of puzzles and challenges to solve.
If there’s a project that interests you and you’d like to help contribute to, please let me know!

Q: What’s your favorite flavor of ice cream?
A: That’s a hard one. I really like chocolate, and am pretty partial to Ben & Jerry’s Phish Food – it’s pretty awesome.

Q: What about a question that you didn’t answer here?
A: I’m pretty much available over all social channels – Facebook, Twitter, LinkedIn – and good ol’ email and phone.
I respond selectively, so please understand if you don’t hear back for a bit, or at all.
If it’s a really good question that might fit here, I might update the post.

TL; DR: I’m excited about taking a break from work for a bit, and enjoying some lazy summer days. Let’s drink a glass of wine sometime, and May the Fourth Be With You!

Reduce logging volume

Quick self-reminder on reducing logging volume when monitoring an http endpoint with the Datadog Agent HTTP Check.

For nginx, add something like:

    location / {
        if ($http_user_agent ~* "Datadog Agent/.*") {
            access_log off;
        }
        # ... the rest of your location configuration
    }

to your site’s location statement.

This should cut down on your logging volume, at the expense of not having a log statement for every time the check runs (once every 20 seconds).

Fixing unintended consequences of the past

In the age of technology, everyone races forward to get the win. Anything that can provide you a competitive edge is considered important.
This is especially true in the realm of web media, where optimizing for page load times, providing secure transport, and adhering to standards can make a difference in how a site is handled by client browsers, ranked by search engines, and most importantly, how it is seen by viewers.

To this end, there are many sites, services and companies that will provide methods to audit a site and point out what could be problematic – count broken links, produce reports of actionable corrections, and more.
Some are better than others, and occasionally, you’ll come across something you’ve never seen before.

Recently, I was pinged about pages on a site that is hosted on an Amazon Simple Storage Service (S3) website-enabled bucket.
Since S3 is an object store only, this means that the pages in this site are statically generated and there is no associated web server, backend database, or other components to serve the pages.

This model is becoming more common for sites that can be simplified to run with no dynamic loading of data from a database, withstand heavy bursts of requests, as well as run cheaply (there’s even a free tier, beyond which pricing still remains affordable).

The idea is that you create your content in one format, run a compiler process to generate all the rendered files containing the links and content, and then upload the compiled files to the S3 location to be requested by browsers. There are many guides on the web on how to do this – I’m not going to link to any now, search and ye shall find.

This particular site had been deployed since 2011 – and the mechanism to copy compiled files to S3 has been using the popular open source command line tool s3cmd – deployment basically looked like this (and still does!):

 s3cmd sync output/ s3://

where output/ contains the compiled files, ready for deployment.

This has worked very well for over 4 years – until it came to my attention that when uploading to S3, the s3cmd tool was adding some metadata to each file as it uploaded it, as part of the design to support website hosting on S3.

For instance, when uploading a .css file to S3, s3cmd attempts to determine extra details about the file, and set the correct metadata for browsers to understand, such as Content-Type: text/css.
This is a critical function, as it would be tedious to determine each file’s content type and set it manually across many files.
You can read more about content media types on Wikipedia.

Since this project was set up a long time ago, the version of s3cmd used was still in an alpha stage – it was used because it performed well enough, and nothing broke, so we were happy to continue running with the same version since early 2013.

The problem reported to me was that many files on the site were returning an invalid Content-Encoding value, something that typically has not been a problem, as the client’s browser will send an Accept-Encoding header when making a request, typically something along the lines of:

Browser: Hi there! Can I have this resource, and I'll accept a response encoded in the following formats: a, b, or c
Server: No problem! Here's the resource you're looking for, with a content encoded in XYZZY

Now, the XYZZY in this example was being set by the s3cmd upload process; it was determined to be a bug and fixed in late 2013, but since we never knew about the problem, and the site loads just fine, we never addressed it.
There have been even more stability fixes and releases of s3cmd since – as recently as February 2015.

The particular invalid encodings being set were UTF-8 and ANSI_X3.4-1968. While these are valid encodings for files, they are invalid values for the Content-Encoding field.

Here’s an example of how to show the headers of a particular remote file:

$ curl -sI | grep Content
Content-Encoding: ANSI_X3.4-1968
Content-Type: text/css
Content-Length: 7073

Many modern browsers will send something along the lines of ‘Accept-Encoding: gzip, deflate, sdch‘ in their request header, in hopes that the server can respond with one that matches, and then save on overall bytes sent over the wire, to speed up pages.

It’s the responsibility of the client (browser) to handle the response. I looked into the source code of Chromium (the basis for Google Chrome), and can see from here that in my example above, a Content-Encoding type of XYZZY will pretty much be ignored, which in this case is fine, since we’re sending an invalid type.

So if there’s no direct user impact, why should we care? Well, according to some popular ranking engines:

Using non-HTML content types for landing pages results in significantly reduced SEO ranking.

So all of this is fine, cool – update the s3cmd tool to a newer version, and upload the output files again? Well, it’s not that simple.

Since s3cmd determines during a sync operation which files might have changed, and only uploads the changed ones, it doesn’t reset the object metadata on unchanged files – that would essentially mean writing a new object, and the file itself hasn’t been changed.

One solution might be to edit every file, add an extra space somewhere – maybe an extra blank line at the end – then compile, deploy the changed files – however this might take too long.

Instead, I decided to solve the problem by iterating over every object in the bucket, checking to see if it had the incorrect Content-Encoding set, and creating a new copy of the file without that header set.

This was pretty straightforward, once I understood the concept of object immutability – once written, you can’t change it, rather what feels like a change from a user interface actually creates a new version of the object with the new settings/metadata.

I also didn’t want to have to download each file locally and then upload it back to S3 – that is a slow operation, and could result in extra network traffic and disk space consumption.

Instead, I used the AWS SDK for Ruby gem, and came up with a short-and-sweet solution:
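A sketch of that approach with the aws-sdk v2 gem might look like the following; the bucket name and region are placeholders, and this is not the exact code from the gist:

require 'aws-sdk'

BAD_ENCODINGS = %w(UTF-8 ANSI_X3.4-1968)

s3 = Aws::S3::Resource.new(region: 'us-east-1')
bucket = s3.bucket('my-site-bucket')

bucket.objects.each do |summary|
  object = bucket.object(summary.key)
  next unless BAD_ENCODINGS.include?(object.content_encoding)

  # Copying an object onto itself with a REPLACE metadata directive rewrites
  # the metadata in place (dropping Content-Encoding) without a download.
  object.copy_from(
    copy_source:        "#{bucket.name}/#{object.key}",
    content_type:       object.content_type,
    metadata_directive: 'REPLACE'
  )
end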

The code aims to be short and sweet, and sure enough, post-execution, we get the response without the offending header:

$ curl -sI | grep Content
Content-Type: text/css
Content-Length: 7073

This swift diagnosis and resolution would not have been possible had the tooling in use not been open source. Many times I was trying to figure out why something behaved the way it did, and while not being familiar with the code, I could reason enough about how things work in general to apply that reasoning to how I should implement my resolution.

Support open source where possible, and happy hunting!

Read more on the standards in RFC 2616.

Tracking application performance on Heroku with Datadog

I thought about using a clickbait title – “You’ll never believe how this guy captures metrics!” – but decided that 99% of these are not worth the time invested in coming up with the catch title.

So instead, I’ll simply talk about what I wanted to, and you be the judge of my title.

Application Performance Monitoring, or APM, is a crazily complex landscape, with an enormous amount of tooling, terminology, and providers looking to get some piece of the action.
There are many vendors, and all have their advantages, as well as disadvantages.

The vendor that I am pretty happy with (and I now work there) is Datadog.

One solution that has caught on quite well for surgical application monitoring is the use of the statsd protocol to send metrics from inside your application to a listener which can then store these metrics for querying later on. This is achieved by placing strategic “emitter” callouts in your code so that they can report metrics during runtime.
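For instance, with the dogstatsd-ruby client, emitter callouts look something like this (the metric names are hypothetical):

require 'statsd'  # the dogstatsd-ruby gem

statsd = Statsd.new('localhost', 8125)

# emitted from wherever the interesting work happens
statsd.increment('myapp.checkout.attempts')
statsd.gauge('myapp.cart.size', 3)
statsd.time('myapp.checkout.duration') do
  sleep 0.1  # stand-in for the real work
end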

Flickr and then Etsy started these projects, and they have been refined, ported to most languages, and are seeing adoption in companies where a focus on measuring is an important goal.
A blog post on Datadog’s implementation and extension of Statsd was written last year and goes into deeper detail.

One common question has always been “How do I collect metrics from an application running on Heroku with Datadog?”.

And I think we finally have one answer.

The Heroku Dyno container is pretty simple – you wanna run a process? Describe it in a Procfile.
You wanna scale? You tell Heroku to launch more Dynos with the process name, as specified in the Procfile.

However, the actual Dyno is a fairly limited environment by design – the root filesystem is read-only, the only writable area is in the application’s root directory, and disappears when terminated. There’s no sysvinit, upstart or systemd for people to bicker about. Use a Procfile, which is also really simple.

So a challenge to overcome became: “how to install a Datadog Agent package that runs a dogstatsd listener as a second process, inside an environment that is pretty locked down?”

First, we have to install the package. Heroku has a concept of “buildpacks” that can be used to run compilation steps before adding your application code and launching it. The use of multiple buildpacks is also available, to chain steps together to achieve the desired outcome.

I read the heroku-buildpack-apt and found a bunch of good ideas, and came up with a Datadog-Agent-specific installer buildpack that drops off the package, as well as the needed environment for the runtime.

Now how do I run the listener process alongside my application?

Enter foreman. Foreman, not to be confused with “theforeman”, has long been a great way for application developers writing Heroku-targeted applications to run them locally in a similar manner to how they will be run on the remote platform.

Foreman reads the Procfile, and runs the processes based on the directives contained inside.

This feature is the one that we leverage to run multiple processes on a single Dyno.

By using foreman inside the Dyno, we are able to tell foreman to run more than one process type at a time, with another Procfile that specifies the startup process for the actual application as well as the dogstatsd listener.

When deploying any code revision, Heroku will read the base Procfile, and run a foreman process inside the Dyno, which will in turn start up the app & dogstatsd.
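As a sketch of the shape of that setup (the process names and commands are illustrative, not the buildpack’s exact contents):

# Procfile, read by Heroku: one entry that hands control to foreman
web: bundle exec foreman start -f Procfile.dyno -p $PORT

# Procfile.dyno, read by foreman inside the Dyno
web: bundle exec puma -p $PORT
statsd: <the dogstatsd startup command installed by the buildpack>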

And while foreman is a Ruby gem, your project may be in Python (use honcho), Go (use forego or goreman), and I’m sure there are others out there. I haven’t found or tested all of them; tell me if they work out for you.

I did, however, take the time to write up a README with the procedure to follow to use this, as well as a commit-by-commit example application.

Here’s the buildpack code:

Here’s the example application:

Here’s an image of the stats collected by the example application in Datadog, with increasing web load:
Heroku App Load

Here’s a random dog:

Hope this helps you find deeper insight into how you monitor your applications!

Update (2014-12-15)

A quick addition on this topic.

A couple of days after this was published, I had a short Twitter exchange with Bo Jeanes, after which he submitted a Pull Request to the buildpack, (as well as an update to the example app).
This simplifies the end user’s deployment of the Agent package: the user no longer has to spend any time on Procfile-in-Procfile solutions, and it removes the need for foreman and the like inside the container – instead, the dogstatsd process is started via the profile.d mechanism, which runs on Dyno startup.

This makes the solution even more elegant, so thanks a ton, Bo!

A Quick Drop Into Data Structures For A Minute

So here’s the story, from A to Z…

Well, I’m not going to all the way to Z, but let me lay some details on you.

At Datadog, we provide a nice interface for configuring the Datadog Agent – it’s usually pretty simple to drop some YAML configuration into a file at a specific location, restart the Agent main process, and voilà, you’ve got monitoring.

This gets more complicated when you want to generate a valid YAML file from another system, typically something like Configuration Management, where you want the notion of “Things I know about this particular system” to trigger “monitor this system with the things I know about it”.

In the popular open source config management system Chef, it is a common practice to create a template of the file you wish to place on a given system, and then extract particular variables to pass to a template ‘resource’, and use those as dynamic values that can make the template reusable across systems and projects, as the template itself can be populated by inputs not included in the initial template design.

Another concept in Chef is the ability to set node ‘attributes’ to control the behavior of recipes, templates and any amount of resources. This has pros and cons, neither of which I will attempt to cover here, but suffice it to say that the pattern is well-established that if you want to share your resources with others, having a mechanism of “tweaking the knobs” of your resources with attributes is a common way of doing it.

In the datadog cookbook for Chef, we provide an interface just like this. An end user can build up a list of structured data entries made up of hash objects (or maps or dicts, depending on your favorite language), and then pass that into a node object, and expect that these details will be rendered into a configuration file template (and restart the service, etc).

This allows the end user to take the code, not modify it at all, and provide inputs to it to receive the desired state.
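For illustration only, the pattern looks something like this in a role or wrapper cookbook; the attribute keys and layout here are hypothetical stand-ins, not the cookbook’s exact interface:

# attributes/default.rb (hypothetical keys)
default['my_company']['datadog']['postgres_instances'] = [
  {
    'host'     => 'localhost',
    'port'     => 5432,
    'username' => 'datadog',
    'tags'     => ['env:production']
  }
]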

Jumping further into Chef’s handling of node attributes now.

== Attribute
Attribute implements a nested key-value (Hash) and flat collection
(Array) data structure supporting multiple levels of precedence, such
that a given key may have multiple values internally, but will only
return the highest precedence value when reading.

Attributes are subclassed from the Mash object type – which has some cool features, like deep-merging lower data structures – and then attributes are compiled together to make collections of these node attribute objects, which are then “frozen” into another class type named Chef::Node::ImmutableArray or Chef::Node::ImmutableHash to prevent further mucking around with them.

All this is cool so far, and is really useful in most cases.

In my case, I want to allow the user to provide the data needed, and then have the data written out, or serialized, into a configuration file, which can then be read by the Agent process.

The simple way you might think to do this is to tell the YAML module of Ruby’s standard library (which is actually an alias to the Psych module) to emit the structured YAML and be done with it.

In an Erubis (ERB) template, this would look like this:

<%= YAML.dump(array_of_mash_data) %>

However, I’d like to inject a header to the array before rendering it, so I’ll do that first:

<%= YAML.dump({ 'instances' => array_of_mash_data }) %>

What this does is render a file like so:

---
instances:
- !ruby/hash:Mash
  host: localhost
  port: 9999
  extra_key: extra_val
  conf:
  - !ruby/hash:Mash
    include: !ruby/hash:Mash
      domain: org.apache.cassandra.db
      attribute:
      - BloomFilterDiskSpaceUsed
      - Capacity
      foo: bar
  - !ruby/hash:Mash
    domain: evil_domain

As you can see, there’s these pesky lines that include a special YAML-oriented tag that start with exclamation points – !ruby/hash:Mash – these are there to describe the data structure to any YAML loader, saying “hey, the thing you’re about to load is an instance of XYZ, not an array, hash, string or integer”.

Unfortunately, when parsing this file from the Python side of things to load it in the Agent, we get some unhappiness:

$ sudo service datadog-agent configcheck
your.yaml contains errors:
    could not determine a constructor for the tag '!ruby/hash:Mash'
  in "<byte string>", line 7, column 5

So it’s pretty apparent that I can do one of two things:

  • teach Python how to interpret a Ruby + Mash constructor
  • figure out how to remove these from being rendered

The latter seemed most likely, since I didn’t really want to teach Python anything new, especially since this is really a Hash (or a dict, in pythonese).

So I experimented with taking items from the Mash, and running them through a built-in method to_hash – which seemed likely to work.

Not really.

<%= YAML.dump({ 'instances' => array_of_mash_data.map { |item| item.to_hash } }) %>

That code only steps into the first layer of the data structure and converts the segment starting with host: localhost into a Hash, but the sub-keys remain Mash objects. Grr.

Digging around, I found other reported problems where people have extended Chef objects with some interesting methods trying to solve the same problem.

This means that I’d have to add library code to my project, then modify the template renderer to make the helper code available, then tell the template to render it using these subclassed methods, and then have to worry about it.


Instead, I tried another tactic, which seems to have worked out pretty well.

Instead of trying to walk an arbitrarily sized data structure and attempting to catch every leaf of the tree, I turned to another mechanism to “strip” out the Ruby-specific data structure details while keeping the same structure, and used the ol’ faithful – JSON.

By using built-ins to convert the Mash to a JSON string, then having the JSON library parse it back into a data structure, and then serializing it to YAML, we remove all of the extras from the picture, leaving us with a slightly modified ERB method:

<%= JSON.parse(({ 'instances' => @instances }).to_json).to_yaml %>
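Outside of ERB, the same round-trip can be demonstrated in a few lines of plain Ruby, here using Hashie::Mash as a stand-in for Chef’s Mash class:

require 'json'
require 'yaml'
require 'hashie'

mash = Hashie::Mash.new('host' => 'localhost', 'port' => 9999)

puts YAML.dump('instances' => [mash])
# emits a "!ruby/hash:Hashie::Mash" tag that non-Ruby YAML loaders reject

puts JSON.parse({ 'instances' => [mash] }.to_json).to_yaml
# plain hashes and arrays only, safe for the Python-based Agent to load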

I then took to benchmarking both methods to see if there would be any significant impact on performance for doing this. Details are over here. Short story: not much impact.

So I’m pretty happy with the way this turned out, and even if I’m moving objects back and forth between serialization formats, the end result is something the next program (Datadog Agent) can consume.

Hope you enjoyed!

On the passage of time and learning

It’s been just over two years since I first wrote a little tool to help me visualize the relationships between objects in a particular system.

I had been working as a consultant for a couple of companies, and I found that all exhibited similar problems of using a powerful system, creating ad-hoc relationships where needed, and not fully following the inheritance and impact of these relationships when they change.

So, coming in and trying to first understand what was there, and then trying to untangle things to be clearer (and hopefully better), I tried to sit down and draw out in a physical space – probably a whiteboard – all of the objects, their relationships, and a “who talks to whom” diagram.

Sidebar: diagrams and visualizations are awesome. A picture is many times worth a thousand words, which is why using pictures and visual representations of hard-to-perceive patterns is key to helping others understand what you may already know.

I quickly realized a few things that were problematic with this manual approach:

  1. There were too many objects and relationships to express effectively and clearly on a whiteboard.
  2. Every time something changed in the objects or their relationships, I had to modify the diagram or start over.
  3. This is probably not the last time I’m going to have this problem, and I’m getting really good at drawing boxes with arrows.

With these things in mind, I sat down and tried to reverse-engineer my own thought process. I knew what kind of visual end result I wanted, so I started by using an open source library that helps place things in relationship to other things, and then renders that as an image.

Once I was able to manually generate the image based on the input I provided, then the focus was to use dynamic input, which was the big win, as then I could point this at any input, and get a picture rendered.

Next was packaging and testing, which became harder and harder – but I kept going and eventually was happy with the results.

There have been over 750 downloads of that first version, and I’ve tweaked a few things here and there over time, but haven’t really done much to change the actual code to incorporate any further features.

“It works, I’m done.”

Looking back at the code I wrote (all told – less than 100 lines!), I realize that if I wanted to change behavior today, it’s much harder to do, as the code itself doesn’t lend itself to be changed.

I hadn’t written any testing around the code itself, only functional tests around “if I press START, do I reach END correctly?” approaches – sometimes termed “Outside-In” testing, as the test will assert that from the outside, everything looks groovy.

These tests are slower, not as comprehensive, as trying to have a test system look at a rendered image and compare it with a known “good” one isn’t trivial either. Some libraries exist, but what if I change the assumptions of what a “good” image is? Update the comparison image? Too much work, says the lazy person in me.

So the code exists, and works, and continues to function, over time.

I took a look at it recently, and realized that it’s all one big function (also known as a ‘method’). And some measurement tools out there state that the method is simply too complex.

How can it be too complex? This method is less than 70 lines of actual code, it can’t be that complex, can it?

In the time since I’ve written this code, I’ve learned a lot, heard a lot, failed a lot, and written a lot more code, and thanks to untold amounts of other people, I’ve been getting a bit better at it.

Here’s where I’ll drop in a reference to Sandi Metz, author of POODR and more, and a talk she gave earlier this year, which I didn’t see in person, only on YouTube. It’s called “All the Little Things”, in which she takes you on a journey of looking at code, refactoring and testing, and basically how to change things to make further changes easy.

It’s a load of information, and a lot of it may not make sense if you’ve never encountered these problems and ideas. But having these ideas (and other design principles and patterns) in your toolbox to enable forward progress in your own understanding of how you approach solving problems is really helpful, not only in solving the problem today, but in helping you solve the problems you don’t even know about yet.

Now I look at that code, and say to myself, “Wow, I don’t really want to change anything in there until I have some better testing around parts of it”. This makes it harder to add anything new, since I don’t know what existing functionality I may break when adding new things.

So if you wrote some code, and let it sit for a while, and look at it a year or two later, you may find yourself shaking your head, with the “who even wrote this mess?” knowing full well that your past self did it.

Be kind to your future self, and try to make decisions today that will help your future self understand what choices you made and why you made them. It’s likely that your future self will have learned more by then and may make other decisions, but will appreciate the efforts of present self in the future.
It’s a weird kind of time-travel, and in the present, you’re trying to better your own future. (cue time paradox arguments)

Thanks for reading!

A decade of writing this stuff? Seriously.

In tinkering with my blogging platform and playing with different technologies, I’ve just realized that I’ve been writing online for well over a decade now.

It started a long time ago, when I was writing personal stuff on a public Open Diary back in 1995, under an alias which, for the life of me, I can’t recall. The site is currently unavailable, and I was curious to see if they still indexed old entries, and see if I could dig anything up from back then.

It was a place that I tossed out whatever I had in mind, a place to jot down the ideas running through my head, a place for a creative outlet with the safety of knowing nothing would ever come back to me, since I lived behind the veil of anonymity (since back then, PRISM was just a dream…), and I was able to express whatever I wanted, in a safe-like place.

After writing there for a couple of years, I was witness to the 1997 Ben Yehuda Street Bombing – I was at a cafe off the street with some friends, when it happened, and went to offer whatever help I could, having had some First Aid training. After spending some 2 hours dealing with things that I’ve pushed far to the back of my mind, I was gathered by a friend, carted to his house, and sat in shock for a few hours, before making my way home.

The next day, I wrote about it on OD, and referenced my friend by first name only.

A couple of days later, a comment came on my post, asking if my friend was ‘So-and-so from Jerusalem’, and if so, that they knew him, and agreed that he was a great help. We began discussing our mutual friend, and eventually met in person.

This was the first revelation I had – you’re never truly anonymous.

We became pals, hung out a few times, and continued to stay up to date with each other for a while.
I did notice that after a while my writing dwindled; now I knew that there was someone out there who knew who I am – not that I was saying anything outrageous, but the feeling of freedom dropped.

During my time in the Air Force, I wrote extremely rarely, since getting online was near impossible from base, so after discharge in 2000, I pretty much had stopped writing altogether.

In 2003, my friend Josh Brown invited me to the closed community (at the time) of LiveJournal, where it quickly grew into the local social networking site, where we could post, comment, and basically keep up with each other’s lives.
Online quizzes were ‘the thing’ and posting your results as an embed to your post was The Thing to do.

After spending 4 years on LJ, they began providing additional customizations, added features for paid-only users, and I didn’t want to spend any money on that, rather I wanted to host my own site.

So I did, for a while. In 2006, I built my own WordPress 2.0 site (history!), hosted it on my home server (terrible bandwidth) and began on the journey of customized web application administration. Dealing with databases, application code updates, frameworks, plugins, you name it.

I think I actually enjoyed tinkering with the framework more than actually writing.

Anyhow, I’ve written sporadically over time, about a wide variety of things, both on this site, and elsewhere.
The invention of Facebook, twitter, and pretty much any social network content outlet has replaced a lot of the heavier topic writing that went on here.

But it does indeed fill me with some sense of happiness that I’ve been doing this for a long time, and have preserved whatever I could from 2003 until now, and continue to try and put out some ideas now and then.

My hope is that anyone can take the time to express their creativity in whatever fashion they feel possible, and share what they want to with the rest of us.

The Importance of Dependency Testing

Recently I revisited an Open Source project I started over a year ago.

This tool is built to hook into a much larger framework (Chef), and leverages a bunch of code many other people have written, and produce a specific result that I was looking for.

This subject is less about the tool itself, and more about the process and procedure involved in testing dependencies.

This project is written in Ruby, and as many have identified in articles and tweets, some project maintainers don’t adhere to a versioning policy, making it hard to ensure working software across multiple versions of dependencies.
A lot rides on the maintainer’s adherence to a versioning standard – one very popular one is Semantic Versioning, or SemVer for short.

This introduces a few other questions, like how frequently should a writer release new versions of code, how frequently should users upgrade to leverage new fixes, features, etc.

In any case, my tool was restricted to running with the framework’s version 10.x, considering that between major versions functionality may change, and there is no guarantee that my tool would continue working.

A new major version of Chef was released earlier this year and most of my existing projects are still on Chef 10.x, as this is still being updated with stability fixes and security patches, and the ‘jump’ to 11 is not on the schedule right now, so my tool continues functioning just fine.

Time passes, and I have a project running Chef 11 that I want to use my tool with.

Whoops. There’s a constraint built in to the tool’s syntax of dependencies that will report that “you have Chef 11, this wants Chef 10.x and not higher, have a nice day”.

So I change the constraint, install locally, and see that it works. Yay!

Now I want to commit the change that I made to the version constraint logic, but I want to continue testing the tool against the 10.x versions, as I should continue to support the active versions for as long as they are alive and in use, right?

A practice I was using for the tests that I had written was: given a static list of Chef versions, use the static entry as the Chef version for installation/test.

This required me to update the static list each time a new version of Chef was released, and potentially was testing against versions that didn’t need testing – rather I wanted to test against the latest of the mainline release.

I updated my constraint, ran the test suite that I’ve written, and whoops, it failed the tests.

Functionality-wise, it worked correctly on both versions, so the problem must be in my test suite, right?

I found a cool project called Appraisal, that’s been around for a while, and
used by a bunch of other projects, and you can read more about it here.
It allows one to specify multiple version constraints and test against each of them with the same test suite.
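With Appraisal, the different Chef lines get declared in an Appraisals file, roughly like this (the exact version constraints are illustrative):

# Appraisals
appraise 'chef-10' do
  gem 'chef', '~> 10.0'
end

appraise 'chef-11' do
  gem 'chef', '~> 11.0'
end

Running appraisal install then generates a Gemfile per appraisal, and appraisal rake test (or whatever your test command is) runs the same suite against each set of dependencies.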

Sure enough, passes on version 10, not version 11. Same code, same tests. #wat

So now it’s time to do some digging. I read through some of the Chef ChangeLog, and decide there’s too much to wade through there, rather let’s take a look at the code my tool is using.

The failure was triggering here, and was showing a default value.
This meant that Chef was no longer loading the configuration file that I provide here correctly.

So I took a look at the current version of the configuration loader, and visually compared it with the 10.x version.
Sure enough, there’s one small change that’s affecting me: Old vs New

working_directory? What’s this? Oh, it’s over here, just a few lines prior.

Reading the full commit, and the original ticket, it seems like this is indeed a good idea, but why are my tests failing?

After further digging around in the aruba test suite extension I’m using, I realize that the environment variable PWD remains set to the actual working directory of my shell, not the test suite’s subprocesses.
Thus every time it runs, the chef_config_dir is looking in my current directory, not the directory the tests are running in.

After poking around aruba’s source code, and adding some debugging statements during test runs, I figured out that I need the test suite to change its PWD environment variable based on the test’s execution, which led to this commit.

Why is this different? Well, before, Ruby’s Dir.pwd statement would be invoked from inside the running test, loading the config from a location relative to Dir.pwd, where I was placing the test config file.
Now the test was trying to load the config from the process’ environment variable PWD instead, and failing to find the config.

Tests pass, and now I can have Travis CI continue to test my code with multiple dependencies when it changes and catch things before they go badly.

All in all, this is an odd behavior to expect in a normal situation, as my tool is meant to be run interactively by a user, not via a test suite that mocks up all sorts of other environments.

So I spent about 2-3 hours digging around to essentially change one line that makes things work better and cleaner than before.

Worth it? Completely, as these changes will allow me to continue to ensure that my tool remains working with upstream releases of the framework, and maintain compatibility with supported versions of the framework.

TL;DR: Don’t skimp on testing your project against multiple versions of external dependencies, especially when your target users are going to be using more than one possible version.

P.S. Shout out to my girlfriend that generously lets me spend time hacking on these kind of things 😀