Setting Up a Datadog-to-AWS Integration

When approaching a new service provider, it can be confusing to figure out how best to get set up to communicate with them – some processes involve multiple steps, multiple interfaces, and confusing terminology.

Amazon Web Services is an amazing cloud services provider, and in order to allow access to informational services inside a customer’s account, a couple of well-known mechanisms exist for delegating access:

  • Account Keys, where you generate a key and secret and share them. The other party stores these (usually in either clear text or using reversible encryption) and uses them as needed to make API calls
  • Role Delegation, where you create a Role and a shared secret to provide to the external service provider, who is then allowed to use their own internal security credentials to request temporary access to your account’s resources via API calls

In the former model, the keys are exchanged once, and once out of your immediate domain, you have little idea what happens to them.
In the latter, a trust rule is put into place that requires the external party to make ongoing, authenticated requests to assume a known role, presenting a shared secret.

Luckily, in both scenarios, a restrictive IAM Policy is in place that allows only the actions you’ve decided to allow ahead of time.

Setting up the desired access is made simpler by good documentation on how to do it manually. In this modern era, though, we likely want to keep our infrastructure as code where possible, and to have a mechanism to apply the rules and later test whether they are still valid.

Here’s a quick example I cooked up using Terraform, a popular new tool for composing cloud infrastructure as code and executing it to create the desired state.

# Read more about variables and how to override them here:
# https://www.terraform.io/docs/configuration/variables.html
variable "aws_region" {
  type    = "string"
  default = "us-east-1"
}

variable "shared_secret" {
  type    = "string"
  default = "SOOPERSEKRET"
}

provider "aws" {
  region = "${var.aws_region}"
}

resource "aws_iam_policy" "dd_integration_policy" {
  name        = "DatadogAWSIntegrationPolicy"
  path        = "/"
  description = "DatadogAWSIntegrationPolicy"

  policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "autoscaling:Describe*",
        "cloudtrail:DescribeTrails",
        "cloudtrail:GetTrailStatus",
        "cloudwatch:Describe*",
        "cloudwatch:Get*",
        "cloudwatch:List*",
        "ec2:Describe*",
        "ec2:Get*",
        "ecs:Describe*",
        "ecs:List*",
        "elasticache:Describe*",
        "elasticache:List*",
        "elasticloadbalancing:Describe*",
        "elasticmapreduce:List*",
        "iam:Get*",
        "iam:List*",
        "kinesis:Get*",
        "kinesis:List*",
        "kinesis:Describe*",
        "logs:Get*",
        "logs:Describe*",
        "logs:TestMetricFilter",
        "rds:Describe*",
        "rds:List*",
        "route53:List*",
        "s3:GetBucketTagging",
        "ses:Get*",
        "ses:List*",
        "sns:List*",
        "sns:Publish",
        "sqs:GetQueueAttributes",
        "sqs:ListQueues",
        "sqs:ReceiveMessage"
      ],
      "Effect": "Allow",
      "Resource": "*"
    }
  ]
}
EOF
}

resource "aws_iam_role" "dd_integration_role" {
  name               = "DatadogAWSIntegrationRole"
  assume_role_policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": {
    "Effect": "Allow",
    "Principal": { "AWS": "arn:aws:iam::464622532012:root" },
    "Action": "sts:AssumeRole",
    "Condition": { "StringEquals": { "sts:ExternalId": "${var.shared_secret}" } }
  }
}
EOF
}

resource "aws_iam_policy_attachment" "allow_dd_role" {
  name       = "Allow Datadog PolicyAccess via Role"
  roles      = ["${aws_iam_role.dd_integration_role.name}"]
  policy_arn = "${aws_iam_policy.dd_integration_policy.arn}"
}

output "AWS Account ID" {
  value = "${aws_iam_role.dd_integration_role.arn}"
}

output "AWS Role Name" {
  value = "${aws_iam_role.dd_integration_role.name}"
}

output "AWS External ID" {
  value = "${var.shared_secret}"
}

The output should look a lot like this:
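(The values below are illustrative – your ARN will contain your own AWS account number.)

Outputs:

  AWS Account ID  = arn:aws:iam::123456789012:role/DatadogAWSIntegrationRole
  AWS External ID = SOOPERSEKRET
  AWS Role Name   = DatadogAWSIntegrationRole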

The Account ID is actually a full ARN, and you can copy your Account ID from there.
Terraform doesn’t have a mechanism to emit only the Account ID yet – so if you have some ideas, contribute!

Use the Account ID, Role Name and External ID and paste those into the Datadog Integrations dialog, after selecting Role Delegation. This will immediately validate that the permissions are correct, and return an error otherwise.

Don’t forget to click “Install Integration” when you’re done (it’s at the very bottom of the screen).

Now metrics and events will be collected by Datadog from any allowed AWS services, and you can keep this setup instruction in any revision system of your choice.

P.S. I tried to set this up via CloudFormation (SparkleFormation, too!). I ended up writing it “freehand”, and it took more than 3 times as long to get similar functionality.

You can see the CloudFormation Stack here, and decide which works for you.



How Do You Let The Cat Out of the Bag?

On what has now come to be known as Star Wars Day, I thought it prudent to write about A New Hope, Mike-style.

A few weeks ago, I took a step in life that is a bit different from everything I’ve ever done before.
I know I’m likely to get questions about it, so I figured I would attempt to preemptively answer here.

I’ve left Datadog, a company I hold close and dear to my heart.

I started working as a consultant for Datadog in 2011 with some co-workers from an earlier position, and joined full-time in 2013. For the past 3 years, I’ve pretty much eaten, dreamt, and lived Datadog. It’s been an amazing ride.

Having the fortune to work with some of the smartest minds in the business, I was able to help build what I believe to be the best product in the marketplace of application and systems monitoring.

I still believe in the mission of building the best damn monitoring platform in the world, and have complete faith that the Datadog crew are up to the task.

Q: Were you let go?
A: No, I left of my own free will and accord.

Q: Why would you leave such a great place to work?
A: Well, 3 years (5 if you count the preliminary work) is a reasonable amount of time in today’s fast-paced market.
Over the course of my tenure, I learned a great many things, positively affected the lives of many, and grew in a direction that doesn’t exactly map to the current company’s vision for me.
There is likely a heavy dose of burnout in the mix as well.
Instead of letting it grow and fester until some sour outcome, I found it best to part ways as friends, knowing that we will definitely meet again, in some other capacity.
Taking a break to do some travel and focus on some non-work life goals for a short time felt like the right thing.

Q: Did some other company lure you away?
A: While I am lucky to receive a large amount of unsolicited recruiter email, I have not been hired by anyone else, rather choosing to take some time off to reflect on the past 20 years of my career, and figure out what it is that I want to try next.
I’m also trying a 30-day fitness challenge – something that has been consistently de-prioritized – in an attempt to get a handle on my fitness before jumping headfirst into the next life challenge. So, recruiters: you will totally gain brownie points by not contacting me before June 4th.

Q: Are you considering leaving New York City?
A: A most emphatic No. I’ve lived in a couple of places in California, Austin TX, many locations in Israel, and now NYC. I really like the feel of this city.

Q: What about any Open Source you worked on?
A: Before I started at Datadog, and during my employment, I was lucky enough to have times when I was able to work on Open Source software, and will continue to do so as it interests me. It has never paid the bills, rather providing an interesting set of puzzles and challenges to solve.
If there’s a project that interests you and you’d like to help contribute to, please let me know!

Q: What’s your favorite flavor of ice cream?
A: That’s a hard one. I really like chocolate, and am pretty partial to Ben & Jerry’s Phish Food – it’s pretty awesome.

Q: What about a question that you didn’t answer here?
A: I’m pretty much available over all social channels – Facebook, Twitter, LinkedIn – and good ol’ email and phone.
I respond selectively, so please understand if you don’t hear back for a bit, or at all.
If it’s a really good question that might fit here, I might update the post.

TL; DR: I’m excited about taking a break from work for a bit, and enjoying some lazy summer days. Let’s drink a glass of wine sometime, and May the Fourth Be With You!

Reduce logging volume

Quick self-reminder on reducing logging volume when monitoring an http endpoint with the Datadog Agent HTTP Check.

For nginx, add something like:

    location / {
        if ($http_user_agent ~* "Datadog Agent/.*") {
            access_log off;
        }
        ....

to your site’s location statement.

This should cut down on your logging volume, at the expense of not having a log statement for every time the check runs (once every 20 seconds).

Fixing unintended consequences of the past

In the age of technology, everyone races forward to get the win. Anything that can provide a competitive edge is considered important.
This is especially true in the realm of web media, where optimizing for page load times, providing secure transport, and adhering to standards can make a difference in how a site is handled by client browsers, ranked by search engines, and – most importantly – how it is seen by viewers.

To this end, there are many sites, services and companies that will provide methods to audit a site and point out what could be problematic – count broken links, produce reports of actionable corrections, and more.
Some are better than others, and occasionally, you’ll come across something you’ve never seen before.

Recently, I was pinged about pages on a site that is hosted on an Amazon Simple Storage Service (S3) website-enabled bucket.
Since S3 is an object store only, this means that the pages in this site are statically generated and there is no associated web server, backend database, or other components to serve the pages.

This model is becoming more common for sites that can be simplified to run with no dynamic loading of data from a database, withstand heavy bursts of requests, as well as run cheaply (there’s even a free tier, beyond which pricing still remains affordable).

The idea is that you create your content in one format, run a compiler process to generate all the rendered files containing the links and content, and then upload the compiled files to the S3 location to be requested by browsers. There are many guides on the web on how to do this – I’m not going to link to any now, search and ye shall find.

This particular site had been deployed since 2011 – and the mechanism for copying compiled files to S3 has been the popular open source command line tool s3cmd – deployment basically looked like this (and still does!):

 s3cmd sync output/ s3://www.mysite.com

where output/ contains the compiled files, ready for deployment.

This has worked very well for over 4 years – until it came to my attention that when uploading to S3, the s3cmd tool was adding some metadata to each file as it uploaded it, as part of the design to support website hosting on S3.

For instance, when uploading a .css file to S3, s3cmd attempts to determine extra details about the file, and set the correct metadata for browsers to understand, such as Content-Type: text/css.
This is a critical function, as it would be tedious to determine each file’s content type and set it manually across many files.
You can read more about content media types on Wikipedia.

Since this project was set up a long time ago, the version of s3cmd used was still in alpha – it performed well enough and nothing broke, so we were happy to keep running the same version since early 2013.

The problem reported to me was that many files on the site were returning an invalid Content-Encoding value – something that has typically not been a problem, as the client’s browser will send an Accept-Encoding header when making a request, typically something along the lines of:

Browser: Hi there! Can I have this resource, and I'll accept a response encoded in the following formats: a, b, or c
Server: No problem! Here's the resource you're looking for, with a content encoded in XYZZY

Now, the XYZZY in this example was being set by the s3cmd upload process. This was determined to be a bug and fixed in late 2013, but since we never knew about the problem, and the site loaded just fine, we never addressed it.
There have been even more stability fixes and releases of s3cmd since – as recently as February 2015.

The particular invalid encodings being set were UTF-8 and ANSI_X3.4-1968. While these are valid encodings for files, they are invalid values for the Content-Encoding field.

Here’s an example of how to show the headers of a particular remote file:

$ curl -sI http://www.mysite.com/static/css/style.css | grep Content
Content-Encoding: ANSI_X3.4-1968
Content-Type: text/css
Content-Length: 7073

Many modern browsers will send something along the lines of ‘Accept-Encoding: gzip, deflate, sdch‘ in their request header, in hopes that the server can respond with one that matches, and then save on overall bytes sent over the wire, to speed up pages.

It’s the responsibility of the client (browser) to handle the response. I looked into the source code of Chromium (the basis for Google Chrome), and can see from here that in my example above, a Content-Encoding of XYZZY will pretty much be ignored – which in this case is fine, since we’re sending an invalid value.

So if there’s no direct user impact, why should we care? Well, according to some popular ranking engines:

Using non-HTML content types for landing pages results in significantly reduced SEO ranking.

So all of this is fine, cool – update s3cmd tool to a newer version, and upload the output files again? Well, it’s not that simple.

Since, during a sync operation, s3cmd determines which files have changed and only uploads those, it doesn’t reset the metadata on unchanged objects – doing so would mean writing a new object, and the file itself hasn’t changed.

One solution might be to edit every file – add an extra space somewhere, maybe an extra blank line at the end – then compile and deploy the changed files. However, this might take too long.

Instead, I decided to solve the problem by iterating over every object in the bucket, checking whether it had the incorrect Content-Encoding set, and creating a new copy of the file without the header set.

This was pretty straightforward, once I understood the concept of object immutability – once written, you can’t change it, rather what feels like a change from a user interface actually creates a new version of the object with the new settings/metadata.

I also didn’t want to have to download each file locally and then upload it back to S3 – that is a slow operation, and could result in extra network traffic and disk space consumption.

Instead, I used the AWS SDK for Ruby gem, and came up with a short-and-sweet solution:
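A minimal sketch of that approach, using the aws-sdk (v2-era) client API – the bucket name and region are placeholders – looks roughly like this:

require 'aws-sdk'  # aws-sdk v2-era API

BUCKET        = 'www.mysite.com'            # placeholder bucket name
BAD_ENCODINGS = %w(UTF-8 ANSI_X3.4-1968)

s3 = Aws::S3::Client.new(region: 'us-east-1')

s3.list_objects(bucket: BUCKET).each do |page|
  page.contents.each do |entry|
    head = s3.head_object(bucket: BUCKET, key: entry.key)
    next unless BAD_ENCODINGS.include?(head.content_encoding)

    # Copy the object onto itself, replacing its metadata and omitting
    # Content-Encoding entirely – S3 objects are immutable, so this is
    # effectively an "edit in place" without a download/upload round trip.
    s3.copy_object(
      bucket:             BUCKET,
      key:                entry.key,
      copy_source:        "#{BUCKET}/#{entry.key}",
      content_type:       head.content_type,
      metadata_directive: 'REPLACE'
    )
    puts "Fixed #{entry.key}"
  end
end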

The code aims to be short and sweet, and sure enough, post-execution, we get the response without the offending header:

$ curl -sI http://www.mysite.com/static/css/style.css | grep Content
Content-Type: text/css
Content-Length: 7073

This swift diagnosis and resolution would not have been possible had the tooling in use not been open source. Many times I was trying to figure out why something behaved the way it did, and while not being familiar with the code, I could reason enough about how things work in general to apply that reasoning to my own fix.

Support open source where possible, and happy hunting!

Read more in the standard, RFC 2616.

Tracking application performance on Heroku with Datadog

I thought about using a clickbait title – “You’ll never believe how this guy captures metrics!” – but decided that 99% of these are not worth the time invested in coming up with the catch title.

So instead, I’ll simply talk about what I wanted to, and you be the judge of my title.

Application Performance Monitoring, or APM, is a crazily complex landscape, with an enormous amount of tooling, terminology, and providers looking to get some piece of the action.
There are many vendors, and all have their advantages, as well as disadvantages.

The vendor that I am pretty happy with (and I now work there) is Datadog.

One solution that has caught on quite well for surgical application monitoring is the use of the statsd protocol to send metrics from inside your application to a listener which can then store these metrics for querying later on. This is achieved by placing strategic “emitter” callouts in your code so that they can report metrics during runtime.
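In Ruby, for instance, those emitter callouts are one-liners with the dogstatsd-ruby gem – a quick sketch, where the metric names and the render_homepage method are made up for illustration:

require 'datadog/statsd'

statsd = Datadog::Statsd.new('127.0.0.1', 8125)

# count something interesting every time it happens
statsd.increment('myapp.signups', tags: ['plan:free'])

# time a block of work and report how long it took
statsd.time('myapp.homepage.render') do
  render_homepage   # hypothetical application method
end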

Flickr, and then Etsy, started these projects, and they have been refined, ported to most languages, and are seeing adoption in companies where a focus on measuring is an important goal.
A blog post on Datadog’s implementation and extension of Statsd was written last year and goes into deeper detail.

One common question has always been “How do I collect metrics from an application running on Heroku with Datadog?”.

And I think we finally have one answer.

The Heroku Dyno container is pretty simple – you wanna run a process? Describe it in a Procfile.
You wanna scale? You tell Heroku to launch more Dynos with the process name, as specified in the Procfile.

However, the actual Dyno is a fairly limited environment by design – the root filesystem is read-only, the only writable area is in the application’s root directory, and disappears when terminated. There’s no sysvinit, upstart or systemd for people to bicker about. Use a Procfile, which is also really simple.

So a challenge to overcome became: “how to install a Datadog Agent package that runs a dogstatsd listener as a second process, inside an environment that is pretty locked down?”

First, we have to install the package. Heroku has a concept of “buildpacks” (https://devcenter.heroku.com/articles/buildpacks) that can be used to run compilation steps before adding your application code and launching it. The use of multiple buildpacks is also available, to chain steps together to achieve the desired outcome.

I read the heroku-buildpack-apt and found a bunch of good ideas, and came up with a Datadog-Agent-specific installer buildpack that drops off the package, as well as the needed environment for the runtime.

Now how do I run the listener process alongside my application?

Enter foreman. Foreman, not to be confused with “theforeman“, has long been a great way for application developers writing Heroku-targeted applications to run them locally in a similar manner that they will be run on the remote platform.

Foreman reads the Procfile, and runs the processes based on the directives contained inside.

This feature is the one that we leverage to run multiple processes on a single Dyno.

By using foreman inside the Dyno, we are able to tell foreman to run more than one process type at a time, with another Procfile that specifies the startup process for the actual application as well as the dogstatsd listener.

When deploying any code revision, Heroku will read the base Procfile and run a foreman process inside the Dyno, which will in turn start up the app & dogstatsd.
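As a sketch – the process names are illustrative, and the exact dogstatsd start command comes from the buildpack’s README – the base Procfile hands control to foreman:

    web: foreman start -f Procfile.web -p $PORT

…and Procfile.web, read by foreman inside the Dyno, lists both processes:

    web: bundle exec rackup -p $PORT
    statsd: <dogstatsd start command from the buildpack README>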

And while foreman is a Ruby gem, your project may be in Python (use honcho), Go (use forego or goreman) and I’m sure there are others out there. I haven’t found or tested all of them, tell me if they work out for you.

I did, however, take the time to write up a README with the procedure to follow, as well as a commit-by-commit example application.

Here’s the buildpack code: http://miketheman.github.io/heroku-buildpack-datadog/

Here’s the example application: https://github.com/miketheman/buildpack-example-ruby

Here’s an image of the stats collected by the example application in Datadog, with increasing web load:
Heroku App Load

Here’s a random dog:

Hope this helps you find deeper insight into how you monitor your applications!

Update (2014-12-15)

A quick addition on this topic.

A couple of days after this was published, I had a short Twitter exchange with Bo Jeanes, after which he submitted a Pull Request to the buildpack, (as well as an update to the example app).
This simplifies the end user’s deployment of the Agent package: the user no longer has to spend any time on Procfile-in-Procfile solutions, and it removes the need for foreman and the like inside the container – instead, the dogstatsd process is started via the profile.d mechanism, which runs on Dyno startup.

This makes the solution even more elegant, so thanks a ton, Bo!

A Quick Drop Into Data Structures For A Minute

So here’s the story, from A to Z…

Well, I’m not going to all the way to Z, but let me lay some details on you.

At Datadog, we provide a nice interface for configuring the Datadog Agent – it’s usually pretty simple to drop some YAML configuration into a file at a specific location, restart the Agent main process, and voilà, you’ve got monitoring.

This gets more complicated when you want to generate a valid YAML file from another system, typically something like Configuration Management, where the notion of “things I know about this particular system” should trigger “monitor this system with the things I know about it”.

In the popular open source config management system Chef, it is a common practice to create a template of the file you wish to place on a given system, and then extract particular variables to pass to a template ‘resource’, and use those as dynamic values that can make the template reusable across systems and projects, as the template itself can be populated by inputs not included in the initial template design.

Another concept in Chef is the ability to set node ‘attributes’ to control the behavior of recipes, templates and any amount of resources. This has pros and cons, neither of which I will attempt to cover here, but suffice it to say that the pattern is well-established that if you want to share your resources with others, having a mechanism of “tweaking the knobs” of your resources with attributes is a common way of doing it.

In the datadog cookbook for Chef, we provide an interface just like this. An end user can build up a list of structured data entries made up of hash objects (or maps or dicts, depending on your favorite language), and then pass that into a node object, and expect that these details will be rendered into a configuration file template (and restart the service, etc).

This allows the end user to take the code, not modify it at all, and provide inputs to it to receive the desired state.
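As an illustrative sketch of what that input can look like – the exact attribute keys depend on the check’s recipe in the datadog cookbook, and the values mirror the YAML example further down – a role might carry:

name "cassandra-node"
default_attributes(
  "datadog" => {
    "cassandra" => {
      "instances" => [
        {
          "host" => "localhost",
          "port" => 9999,
          "conf" => [
            { "include" => { "domain" => "org.apache.cassandra.db" } }
          ]
        }
      ]
    }
  }
)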

Jumping further into Chef’s handling of node attributes now.

== Attribute
Attribute implements a nested key-value (Hash) and flat collection
(Array) data structure supporting multiple levels of precedence, such
that a given key may have multiple values internally, but will only
return the highest precedence value when reading.

Attributes are a subclass of the Mash object type – which has some cool features, like deep-merging lower data structures – and then attributes are compiled together to make collections of these node attribute objects, which are then “frozen” into another class type named Chef::Node::ImmutableArray or Chef::Node::ImmutableHash to prevent further mucking around with them.

All this is cool so far, and is really useful in most cases.

In my case, I want to allow the user to provide the data needed, and then have the data written out, or serialized, into a configuration file, which can then be read by the Agent process.

The simple way you might think to do this is to tell the YAML module of Ruby’s standard library (which is actually an alias to the Psych module) to emit the structured YAML and be done with it.

In an Erubis (ERB) template, this would look like this:

<%= YAML.dump(array_of_mash_data) %>

However, I’d like to inject a header to the array before rendering it, so I’ll do that first:

<%= YAML.dump({ 'instances' => array_of_mash_data }) %>

What this does is render a file like so:

---
instances:
- !ruby/hash:Mash
  host: localhost
  port: 9999
  extra_key: extra_val
  conf:
  - !ruby/hash:Mash
    include: !ruby/hash:Mash
      domain: org.apache.cassandra.db
      attributes:
      - BloomFilterDiskSpaceUsed
      - Capacity
      foo: bar
    exclude:
    - !ruby/hash:Mash
      domain: evil_domain

As you can see, there’s these pesky lines that include a special YAML-oriented tag that start with exclamation points – !ruby/hash:Mash – these are there to describe the data structure to any YAML loader, saying “hey, the thing you’re about to load is an instance of XYZ, not an array, hash, string or integer”.

Unfortunately, when parsing this file from the Python side of things to load it in the Agent, we get some unhappiness:

$ sudo service datadog-agent configcheck
your.yaml contains errors:
    could not determine a constructor for the tag '!ruby/hash:Mash'
  in "<byte string>", line 7, column 5

So it’s pretty apparent that I can do one of two things:

  • teach Python how to interpret a Ruby + Mash constructor
  • figure out how to remove these from being rendered

The latter seemed most likely, since I didn’t really want to teach Python anything new, especially since this is really a Hash (or a dict, in pythonese).

So I experimented with taking items from the Mash, and running them through a built-in method to_hash – which seemed likely to work.

Not really.

<%= YAML.dump({ 'instances' => @instances.map { |item| item.to_hash }}) %>

That code only steps into the first layer of the data structure and converts the segment starting with host: localhost into a Hash, but the sub-keys remain Mash objects. Grr.

Digging around, I found other reported problems where people have extended Chef objects with some interesting methods trying to solve the same problem.

This means that I’d have to add library code to my project, then modify the template renderer to make the helper code available, then tell the template to render using these subclassed methods, and then have to maintain all of it.

ARGH.

Instead, I tried another tactic, which seems to have worked out pretty well.

Instead of trying to walk any size of a data structure and attempt to catch every leaf of the tree, I turned instead to another mechanism to “strip” out the Ruby-specific data structure details, and keep the same structure, so I used the ol’ faithful – JSON.

By using built-ins to convert the Mash to a JSON string, then have the JSON library parse it back into a datastructure, and then serialize it to YAML, we remove all of the extras from the picture, leaving us with a slightly modified ERB method:

<%= JSON.parse(({ 'instances' => @instances }).to_json).to_yaml %>
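With the Mash tags stripped out by the JSON round-trip, the same example data should render as plain YAML that the Python loader is happy with – along the lines of:

---
instances:
- host: localhost
  port: 9999
  extra_key: extra_val
  conf:
  - include:
      domain: org.apache.cassandra.db
      attributes:
      - BloomFilterDiskSpaceUsed
      - Capacity
      foo: bar
    exclude:
    - domain: evil_domain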

I then took to benchmarking both methods to see if there would be any significant impact on performance for doing this. Details are over here. Short story: not much impact.

So I’m pretty happy with the way this turned out, and even if I’m moving objects back and forth between serialization formats, the end result is something the next program (Datadog Agent) can consume.

Hope you enjoyed!

On the passage of time and learning

It’s been just over two years since I first wrote a little tool to help me visualize the relationships between objects in a particular system.

I had been working as a consultant for a couple of companies, and I found that all exhibited similar problems of using a powerful system, creating ad-hoc relationships where needed, and not fully following the inheritance and impact of these relationships when they change.

So, coming in and trying to first understand what was there, and then trying to untangle things to be clearer (and hopefully better), I sat down to draw out in a physical space – probably a whiteboard – all of the objects, their relationships, and a “who talks to whom” diagram.

Sidebar: diagrams and visualizations are awesome. A picture is many times worth a thousand words, which is why using pictures and visual representations of hard-to-perceive patterns is key to helping others understand what you may already know.

I quickly realized a few things that were problematic with this manual approach:

  1. There were too many objects and relationships to express effectively and clearly on a whiteboard.
  2. Every time something changed in the objects or their relationships, I had to modify the diagram or start over.
  3. This is probably not the last time I’m going to have this problem, and I’m getting really good at drawing boxes with arrows.

With these things in mind, I sat down and tried to reverse-engineer my own thought process. I knew what kind of visual end result I wanted, so I started by using an open source library that helps place things in relationship to other things, and then renders that as an image.

Once I was able to manually generate the image based on the input I provided, then the focus was to use dynamic input, which was the big win, as then I could point this at any input, and get a picture rendered.

Next was packaging and testing, which became harder and harder – but I kept going and eventually was happy with the results.

There have been over 750 downloads of that first version, and I’ve tweaked a few things here and there over time, but haven’t really done much to change the actual code to incorporate any further features.

“It works, I’m done.”

Looking back at the code I wrote (all told – less than 100 lines!), I realize that if I wanted to change behavior today, it’s much harder to do, as the code itself doesn’t lend itself to be changed.

I hadn’t written any testing around the code itself, only functional tests around “if I press START, do I reach END correctly?” approaches – sometimes termed “Outside-In” testing, as the test will assert that from the outside, everything looks groovy.

These tests are slower, not as comprehensive, as trying to have a test system look at a rendered image and compare it with a known “good” one isn’t trivial either. Some libraries exist, but what if I change the assumptions of what a “good” image is? Update the comparison image? Too much work, says the lazy person in me.

So the code exists, and works, and continues to function, over time.

I took a look at it recently, and realized that it’s all one big function (also known as a ‘method’). And some measurement tools out there state that the method is simply too complex.

How can it be too complex? This method is less than 70 lines of actual code, it can’t be that complex, can it?

In the time since I’ve written this code, I’ve learned a lot, heard a lot, failed a lot, and written a lot more code, and thanks to untold amounts of other people, I’ve been getting a bit better at it.

Here’s where I’ll drop in a reference to Sandi Metz, author of POODR and more, and a talk she gave earlier this year, which I didn’t see in person, only on YouTube. It’s called “All the Little Things”, in which she takes you on a journey of looking at code, refactoring and testing, and basically how to change things to make further changes easy.

It’s a load of information, and a lot of it may not make sense if you’ve never encountered these problems and ideas. But having these ideas (and other design principles and patterns) in your toolbox enables forward progress in your own understanding of how you approach solving problems – helpful not only for solving the problem in front of you today, but for solving the problems you don’t even know about yet.

Now I look at that code, and say to myself, “Wow, I don’t really want to change anything in there until I have some better testing around parts of it”. This makes it harder to add anything new, since I don’t know what existing functionality I may break when adding new things.

So if you wrote some code, and let it sit for a while, and look at it a year or two later, you may find yourself shaking your head, asking “who even wrote this mess?” while knowing full well that your past self did it.

Be kind to your future self, and try to make decisions today that will help your future self understand what choices you made and why you made them. It’s likely that your future self will have learned more by then and may make other decisions, but will appreciate the efforts of present self in the future.
It’s a weird kind of time-travel, and in the present, you’re trying to better your own future. (cue time paradox arguments)

Thanks for reading!

Recruiting via LinkedIn – Don’t Do This!

I regularly get emails from recruiters all over the planet, telling me about their awesome new technology, latest and greatest ideas, and why I should work for them.

Most get ignored.

One came in this week that annoyed me, since it was from someone at a company that had sent me the exact same email six months ago.

I felt I had to respond:

Hi <recruiter name>,

I think I heard of <YourCompany> last year sometime from a friend.

I also received this same stock email from you on 8/22/11, and you had addressed it to “Pascal” – further evidence of a copy-and-paste.

It would behoove you to keep records of whom you contact, as well as reviewing the message you paste before clicking “Send”.

A stock recruiter email is not a very likely way to attract good recruits, especially if you’re listing a ton of things that are not particularly relevant or interesting in the realm of technology.

Asking me to send a resume, while being able to view my full LinkedIn profile also seems superfluous – here’s the information, you have supposedly read it, and that is what attracted you to my profile in the first place, rather than “someone who turned up in a keyword search”.

I wish you, and your company all the best, and hope that these recruiting tactics work for you.

All the best,
-M

I am very curious what kind of response, if any, I shall get.

Ask your systems: “What’s going on?”

This is a sysadmin/devops-style post.
Disclaimers are that I work with these tools and people, and like what they do.

In some amount of our professional lives, we are tasked with bringing order to chaos, keep systems running and have the businesses we work for continue functioning.

In our modern days of large-scale computing, web technology growth explosions, multiple datacenter deployments, cloud providers and other virtualization technologies, the manpower needed to handle the vast amount of technologies, services and systems seems to have a pretty high overhead cost associated with it. “You’ve got X amount of servers? Let’s hire Y amount of sysadmins!”

A lot of tech startups start out with some of the developers performing a lot of the systems tasks, and since this isn’t always their core expertise, decisions are made, scripts are written, and “it works”.  When the team/systems grow large enough to need their own handler, in walks a system admin-style person, who may keel over due to the state of affairs.

Yes, there are many tech companies where this is not the case, and I commend them of keeping their systems lean, mean and clean.

A lot of companies have figured out that in order to make the X:Y ratio work well, automation is required.  Here’s an article that covers some numbers from earlier this year.  I find that the statement of a ratio of 50 servers to 1 sysadmin pretty low on my view of how things can be, especially given the tools that we have available to us.

One of the popular systems configuration tools I’ve been using heavily is Chef, from Opscode. They provide a hosted solution, as well as an open-source version of their software, for anyone to use.  Getting up and running with some basics is really fast, and there’s a ton of information available, as well as a really responsive community (from mailing lists, bug tracker site and IRC channel).  Once you’re working with Chef, you may wonder how you ever got anything done before you had it.  It’s really treating a large part of your infrastructure as code – something readable, executable, and repeatable.

But this isn’t about getting started with Chef. It’s about “what’s next”.

In any decent starting-out tech company, the number of servers used will typically range from 2-3 all the way to 200 – or even more.  If you’ve gone all the way to 200 without something like Chef or Puppet, I commend your efforts, and feel somewhat sorry for you.  Once you’re automating your systems creation, deployment and change, you typically want some feedback on what’s going on: did what I asked this system to do succeed, or did it fail?

Enter Datadog.

Datadog attempts to bring many sources of information together, to help whomever it is that is supposed to be looking at the systems to make more sense of the situation, from collecting metrics from systems, events from services and other sources, to allowing a timeline and newsfeed that is very human-friendly.

Having all the data at your disposal makes it easier to find patterns and correlations between events, systems and behaviors – helping to minimize the “what just happened?” question.

The Chef model for managing systems is a centralized server (either the open source version in your environment or the hosted service from Opscode), which tells a server what it is meant to “be”.  Not what it is meant to “do now”, but the final state it should be in.  They call this model “idempotent” – meaning that no matter how many times you execute the same code on the same server, the behavior should end up the same every time.  But it doesn’t follow up very much on the results of the actions.

An analogy could be that every morning, before your kid leaves the house, your [wife|mother|husband|guardian|pet dragon] tells them “You should wear a coat today.” and then goes on their merry way, not checking whether they wore a coat or not. The next morning, they will get the same comment, and so on and so forth.

So how do we figure out what happened? Did the kid wear a coat or not? I suppose I could check by asking the kid and get the answer, but what if there are 200 of us? Do I have time to ask every kid whether or not they ended up wearing a coat? I’m going to be spending a lot of time dealing with this simple problem, I can tell you now.

Chef has built-in functionality to report on what Chef did – after it has received its instructions from the centralized server. It’s called “Exception and Report Handlers” – and this is how I tie these two technologies together.

I adapted some code started by Adam Jacob @Opscode, and extended it further into a complete RubyGem with modifications for content, functionality and some rigorous testing.

Once the gem was ready, now I have to distribute it to my servers, and then have it execute every time Chef runs on that server. So, based on the chef_handler cookbook, I added a new recipe to the datadog cookbook – dd-handler.

What this does is add the necessary components to a Chef execution; when placed at the beginning of a “run”, it will capture all the events and report back on the important ones to the Datadog newsfeed.  It will also push some metrics, like how long the Chef execution took, how many resources were updated, etc.

The process for getting this done was really quite simple, once you boil down all the reading, hows and whys – especially if you use git to version control your chef-repo.  The `knife cookbook site install` command is a great method for keeping your git repo “safe” for future releases, preserving your changes to the cookbook and allowing new code to be merged in automatically. Read more here.

THE MOST IMPORTANT STUFF:

Here’s pretty much the process I used (under chef/knife version 0.10.x):

$ cd chef-repo
$ knife cookbook site install datadog
$ vi cookbooks/datadog/attributes/default.rb

At this point, I head over to Datadog, hit the “Setup” page, and grab my organization’s API Key, as well as create a new Application Key named “chef-handler” and copy the Hash that is created.

I place these two values into the `attributes/default.rb` file, save and close.
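In the attributes file, those two values end up looking something like this – the key names follow the cookbook’s documented attributes, and the values here are placeholders:

# cookbooks/datadog/attributes/default.rb
default['datadog']['api_key']         = 'YOUR_DD_API_KEY'
default['datadog']['application_key'] = 'YOUR_CHEF_HANDLER_APP_KEY'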

$ knife cookbook upload datadog

This places the cookbook on my Chef server, and is now ready to be referenced by a node or role. I use roles, as it’s much more manageable across multiple nodes.

I update the `common-node` role we have to include “recipe[datadog::dd-handler]” as one of the first recipes to execute in the run list.

The common-node role applies to all of our systems, and since they all run chef, I want them all to report on their progress.
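In role form, that ordering looks something like this – the other run list entries are stand-ins for whatever the role already carries:

# roles/common-node.rb
name "common-node"
description "Base role applied to every system"
run_list(
  "recipe[datadog::dd-handler]",
  "recipe[base]",        # stand-in
  "recipe[app_server]"   # stand-in
)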

And then let it run.

END MOST IMPORTANT STUFF

Since our chef-client runs on a 30 minute interval, and not all execute at the same time, this makes for some interesting graphs at the more recent time slices – not all the data comes in at the same time.  That’s something to get used to.

Here’s an image of a system’s dashboard with only the Chef metrics:

Single Instance dashboard
It displays a 24-hour period, and shows that this particular instance had low variance in its execution time, and that not much was being updated during this time (a good thing, since it is consistent).

On a test machine I tossed together, I created a failure, and here’s how it gets reported back to the newsfeed:

 

Testing a failure
As you can see, the stacktrace attempts to provide me with the information I need to diagnose and repair the issue. Once I fixed it and Apache could start, the success was logged in the “Low Priority” section of the feed (since successes are expected, and failures are aberrant behavior):

Test passes

All this is well and wonderful, but what about a bunch of systems? Well, I grabbed a couple snaps off the production environment for you!

These are aggregates I created with the graphing language (had never really read it before today!)

Production aggregate metrics

Being able to see the execution patterns, I noticed a bump closer to the left side of the “Resource Updated” graph – I investigated, and someone had deployed a new rsyslog package – so there was a temporary increase while deploying the resources, and now there are slightly more resources to manage overall.

The purple bump seen in the “Execution Time” graph led me to investigate, and I found a timeout in that system’s call to an “apt-get update” request – probably the remote repo was unavailable for a minute. Having the data available to make that correlation made investigating this problem really fast, easy, and simple – and more importantly, since it has been succeeding ever since, there’s no cause for alarm.

So now I have these two technologies – Chef to tell the kids (the servers) to wear coats, and Datadog to tell the parents (me) if the kids wore the coats or not, and why.

Really, just wear a coat. It’s cold out there.

———–

Tested on:

  • CentOS 5.7 (x64), Ruby 1.9.2 v180, Chef 0.10.4
  • Ubuntu 10.04 (x64), Ruby 1.8.7 v352, Chef 0.9.18

Fast and Furious Monitoring

In the past few weeks, I’ve been working with a company that is using ScoutApp‘s hosted monitoring service, which provides a nice interface to quickly get up and running with a lot of basic information about a system.

This SaaS solution, while a paid service, allows a team to put their monitoring metrics into place with the fastest turnaround, while scaling financially at a rate of ~$10/server/month.

Getting up and running is as simple as signing up for their risk-free 30-day trial, logging in to their interface, and following some simple instructions on installing their RubyGem plugin, aptly named scout, like so:

gem install scout

Obviously, needs Ruby installed, which is pretty common in web development these days.

Executing the scout executable will then prompt you for a GUID, provided from the web interface when “Adding a new system”, which tests connectivity to the ScoutApp service, and “checks in”.

Once the new system is added, the scout gem needs to be executed once a minute to check in with the server end, so this is typically achieved by placing an entry in the crontab, and again, the instructions are provided in the most convenient location on the command line, with variations for your system.
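The crontab entry ends up being a one-liner along these lines – the path and the key are placeholders, and the web interface shows the exact command for your account:

* * * * * /usr/bin/scout YOUR-SCOUT-KEY-HERE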

Once installed in crontab, it’s pretty much “fire-and-forget” – which is probably the best feature available in any system.

Heading back to the web interface, you’ll see the system details, and the real advantage of the ScoutApp system – the plugins.

Each system starts with a bunch of the basics – server load, memory profiling, disk space. Great! 90% of problems manifest in variations in these metrics, so getting them on the board from the get-go is great.

The Plugin Directory has a bunch of the very commonly used applications from the FLOSS stacks popular in web development, so you can readily add a plugin of choice to the applicable server – adding a monitor to check your MySQL instance for slow queries is as simple as choosing the plugin, and the plugin itself tells you what you need to do to make it work – like changing a config file.

Once those pieces are in place, monitoring just keeps working. Plugins typically have some default triggers and alerts, based on “what makes sense” for that plugin.

There are currently 49 public plugins, which cover a wide range of services, applications, and monitoring methodologies – like checking a JMX counter or watching a log file for a condition you specify.

Extending functionality is pretty easy, as I found out firsthand. Beyond having a succinct plugin development guide, the support team is very helpful, and all of the plugins are available as open source on GitHub.

Plugins are written in Ruby – also a popular language in the tech arena these days.

Since one of the many services in our software stack is Apache Zookeeper, and there was no plugin for this service, I set out to write my own, to accomplish:

  1. Get the state of a Zookeeper instance monitored (service up, some counters/metrics)
  2. Learn some Ruby
  3. Give back

I wrote the basics of a plugin, and testing it locally against a Zookeeper instance with Scout proved to be a very fast turnaround – I had results within a day, and then spent time thinking more about how I was doing it, refactoring, testing, and refactoring again.
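Scout plugins follow a small contract: subclass Scout::Plugin, implement build_report, and report a hash of metrics. A stripped-down sketch (not the published plugin – the class name, metric names, and hard-coded host/port are illustrative) looks roughly like this:

class ZookeeperHealth < Scout::Plugin
  def build_report
    # "ruok" is one of Zookeeper's four-letter-word commands; a healthy
    # node answers "imok"
    response = four_letter_word('ruok')
    report(:up => (response == 'imok' ? 1 : 0))
  rescue Errno::ECONNREFUSED, Errno::ETIMEDOUT
    report(:up => 0)
  end

  private

  def four_letter_word(cmd)
    require 'socket'
    TCPSocket.open('localhost', 2181) do |sock|
      sock.write(cmd)
      sock.read
    end
  end
end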

I forked the ScoutApp GitHub repo, added my code, and issued a Pull Request, so they would take my code and incorporate it back into their Plugin Directory.

Lo and behold! It’s included, and anyone running both ScoutApp and using Zookeeper can simply add the plugin and get instant monitoring.

Here’s a screen capture of my plugin running, collecting details, and keeping us safe:

ScoutApp: Zookeeper

I encourage you to check it out, especially if you don’t have a monitoring solution, are starting a new project and have a few servers, or are looking for something else.