How Do You Let The Cat Out of the Bag?

On what has now come to be known as Star Wars Day, I thought it prudent to write about A New Hope, Mike-style.

A few weeks ago, I took a step in life that is a bit different from everything I’ve ever done before.
I know I’m likely to get questions about it, so I figured I would attempt to preemptively answer here.

I’ve left Datadog, a company I hold close and dear to my heart.

I started working as a consultant for Datadog in 2011 with some co-workers from an earlier position, and joined full-time in 2013. For the past 3 years, I’ve pretty much eaten, dreamt, and lived Datadog. It’s been an amazing ride.

Having the fortune to work with some of the smartest minds in the business, I was able to help build what I believe to be the best application and systems monitoring product on the market.

I still believe in the mission of building the best damn monitoring platform in the world, and have complete faith that the Datadog crew are up to the task.

Q: Were you let go?
A: No, I left of my own free will and accord.

Q: Why would you leave such a great place to work?
A: Well, 3 years (5 if you count the preliminary work) is a reasonable amount of time in today’s fast-paced market.
Over the course of my tenure, I learned a great many things, positively affected the lives of many, and grew in a direction that doesn’t exactly map to the company’s current vision for me.
There is likely a heavy dose of burnout in the mix as well.
Instead of letting it grow and fester until some sour outcome, I found it best to part ways as friends, knowing that we will definitely meet again, in some other capacity.
Taking a break to travel and focus on some non-work life goals for a short time felt like the right thing to do.

Q: Did some other company lure you away?
A: While I am lucky to receive a large amount of unsolicited recruiter email, I have not been hired by anyone else. Instead, I’m choosing to take some time off to reflect on the past 20 years of my career and figure out what it is that I want to try next.
I’m also trying a 30-day fitness challenge – something that has been consistently de-prioritized – in an attempt to get a handle on my fitness before jumping headfirst into the next life challenge. So, recruiters: you will totally gain brownie points by not contacting me before June 4th.

Q: Are you considering leaving New York City?
A: A most emphatic No. I’ve lived in a couple of places in California, in Austin, TX, in many locations in Israel, and now in NYC. I really like the feel of this city.

Q: What about any Open Source you worked on?
A: Before I started at Datadog, and during my employment, I was lucky enough to have times when I was able to work on Open Source software, and I will continue to do so as it interests me. It has never paid the bills; rather, it provides an interesting set of puzzles and challenges to solve.
If there’s a project that interests you and you’d like to help contribute to, please let me know!

Q: What’s your favorite flavor of ice cream?
A: That’s a hard one. I really like chocolate, and am pretty partial to Ben & Jerry’s Phish Food – it’s pretty awesome.

Q: What about a question that you didn’t answer here?
A: I’m pretty much available over all social channels – Facebook, Twitter, LinkedIn – and good ol’ email and phone.
I respond selectively, so please understand if you don’t hear back for a bit, or at all.
If it’s a really good question that might fit here, I might update the post.

TL; DR: I’m excited about taking a break from work for a bit, and enjoying some lazy summer days. Let’s drink a glass of wine sometime, and May the Fourth Be With You!

Recruiting via LinkedIn – Don’t Do This!

I regularly get emails from recruiters all over the planet, telling me about their awesome new technology, latest and greatest ideas, and why I should work for them.

Most get ignored.

One came in this week that annoyed me, since it was from someone at a company that had sent me the exact same email six months ago.

I felt I had to respond:

Hi <recruiter name>,

I think I heard of <YourCompany> last year sometime from a friend.

I also received this same stock email from you on 8/22/11, and you had addressed it to “Pascal” – further evidence of a copy-and-paste.

It would behoove you to keep records of whom you contact, and to review the message you paste before clicking “Send”.

A stock recruiter email is not a very likely way to attract good recruits, especially if you’re listing a ton of things that are not particularly relevant or interesting in the realm of technology.

Asking me to send a resume while being able to view my full LinkedIn profile also seems superfluous – the information is right there, and you have supposedly read it; that is what attracted you to my profile in the first place, rather than my being “someone who turned up in a keyword search”.

I wish you, and your company all the best, and hope that these recruiting tactics work for you.

All the best,
-M

I am very curious what kind of response, if any, I shall get.

Ask your systems: “What’s going on?”

This is a sysadmin/devops-style post.
Disclaimers are that I work with these tools and people, and like what they do.

In some part of our professional lives, we are tasked with bringing order to chaos, keeping systems running, and helping the businesses we work for continue functioning.

In our modern days of large-scale computing, explosive web technology growth, multiple datacenter deployments, cloud providers and other virtualization technologies, the manpower needed to handle the vast array of technologies, services and systems seems to carry a pretty high overhead cost. “You’ve got X amount of servers? Let’s hire Y amount of sysadmins!”

A lot of tech startups start out with some of the developers performing many of the systems tasks, and since this isn’t always their core expertise, decisions are made, scripts are written, and “it works”.  When the team/systems grow large enough to need their own handler, in walks a sysadmin-style person, who may keel over due to the state of affairs.

Yes, there are many tech companies where this is not the case, and I commend them for keeping their systems lean, mean and clean.

A lot of companies have figured out that in order to make the X:Y ratio work well, automation is required.  Here’s an article that covers some numbers from earlier this year.  I find the stated ratio of 50 servers to 1 sysadmin pretty low compared to my view of how things can be, especially given the tools available to us.

One of the popular systems configuration tools I’ve been using heavily is Chef, from Opscode. They provide a hosted solution, as well as an open-source version of their software, for anyone to use.  Getting up and running with some basics is really fast, and there’s a ton of information available, as well as a really responsive community (from mailing lists, bug tracker site and IRC channel).  Once you’re working with Chef, you may wonder how you ever got anything done before you had it.  It’s really treating a large part of your infrastructure as code – something readable, executable, and repeatable.

But this isn’t about getting started with Chef. It’s about “what’s next”.

In any decent starting-out tech company, the number of servers used will typically range from 2-3 all the way to 200 – or even more.  If you’ve gone all the way to 200 without something like Chef or Puppet, I commend your efforts, and feel somewhat sorry for you.  Once you’re automating your systems’ creation, deployment and change, you typically want some feedback on what’s going on: did what I asked this system to do succeed, or did it fail?

Enter Datadog.

Datadog attempts to bring many sources of information together to help whoever is looking at the systems make more sense of the situation – from collecting metrics from systems and events from services and other sources, to providing a timeline and newsfeed that is very human-friendly.

Having all the data at your disposal makes it easier to find patterns and correlations between events, systems and behaviors – helping to minimize the “what just happened?” question.

The Chef model for managing systems is a centralized server (either the open-source server in your environment or the hosted service at Opscode), which tells a server what it is meant to “be”.  Not what it is meant to “do now”, but the final state it should be in.  They call this model “idempotent” – meaning that no matter how many times you execute the same code on the same server, the behavior should end up the same every time.  But it doesn’t follow up very much on the results of the actions.
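To make the idea concrete, here’s a minimal sketch of idempotent convergence in plain Ruby – this is an illustration of the concept, not Chef’s actual resource code. The paths and the `converge` helper are made up for the example:

```ruby
require "fileutils"

# Describe the desired end state; only act when the current state differs.
# Running this any number of times converges to the same result.
def converge(path, content)
  changed = false
  unless File.exist?(path) && File.read(path) == content
    FileUtils.mkdir_p(File.dirname(path))
    File.write(path, content)
    changed = true
  end
  changed
end

first  = converge("/tmp/coat_check/motd", "Wear a coat today.\n") # creates the file
second = converge("/tmp/coat_check/motd", "Wear a coat today.\n") # already correct, no-op
```

The first call changes the system; the second finds the desired state already in place and does nothing – which is exactly why Chef can safely run every 30 minutes.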

An analogy could be that every morning, before your kid leaves the house, your [wife|mother|husband|guardian|pet dragon] tells them “You should wear a coat today.” and then goes on their merry way, not checking whether they wore a coat or not. The next morning, they will get the same comment, and so on and so forth.

So how do we figure out what happened? Did the kid wear a coat or not? I suppose I could check by asking the kid and getting the answer, but what if there are 200 of us? Do I have time to ask every kid whether or not they ended up wearing a coat? I’m going to be spending a lot of time dealing with this simple problem, I can tell you now.

Chef has built-in functionality to report on what Chef did – after it has received its instructions from the centralized server. It’s called “Exception and Report Handlers” – and this is how I tie these two technologies together.

I adapted some code started by Adam Jacob @Opscode, and extended it further into a complete RubyGem with modifications for content, functionality and some rigorous testing.
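The handler pattern itself is simple: after every run, Chef hands the handler a run-status object, and the handler forwards whatever it cares about. Here’s a plain-Ruby sketch of that shape – `SketchHandler` and its event/metric payloads are hypothetical stand-ins for what the real gem sends to Datadog, and the `RunStatus` struct mimics only the fields used here:

```ruby
# Stand-in for Chef's run status object (success flag, timing, updated
# resources, and any exception raised during the run).
RunStatus = Struct.new(:success, :elapsed_time, :updated_resources, :exception) do
  def success?
    success
  end
end

class SketchHandler
  attr_reader :events, :metrics

  def initialize
    @events  = []
    @metrics = {}
  end

  # Chef invokes a handler's report method at the end of every run.
  def report(run_status)
    @metrics["chef.run_time"]          = run_status.elapsed_time
    @metrics["chef.updated_resources"] = run_status.updated_resources.length
    if run_status.success?
      @events << { priority: "low", title: "Chef run succeeded" }
    else
      @events << { priority: "normal", title: "Chef run failed",
                   text: run_status.exception.to_s }
    end
  end
end
```

Successes land as low-priority events and failures as normal-priority ones, matching the newsfeed behavior described below.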

Once the gem was ready, I had to distribute it to my servers, and then have it execute every time Chef runs on each server. So, based on the chef_handler cookbook, I added a new recipe to the datadog cookbook – dd-handler.

What this does is add the necessary components to a Chef execution; when placed at the beginning of a “run”, it will capture all the events and report back on the important ones to the Datadog newsfeed.  It will also push some metrics, like how long the Chef execution took, how many resources were updated, etc.

The process for getting this done was really quite simple, once you boil down all the reading, how’s and why’s – especially if you use git to version control your chef-repo.  The `knife cookbook site install` command is a great way to keep your git repo “safe” for future releases, preserving your changes to the cookbook and allowing new code to be merged in automatically. Read more here.

THE MOST IMPORTANT STUFF:

Here’s pretty much the process I used (under chef/knife version 0.10.x):

$ cd chef-repo
$ knife cookbook site install datadog
$ vi cookbooks/datadog/attributes/default.rb

At this point, I head over to Datadog, hit the “Setup” page, and grab my organization’s API Key, as well as create a new Application Key named “chef-handler” and copy the hash that is created.

I place these two values into the `attributes/default.rb` file, save and close.
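For reference, the edit amounts to something like this – the attribute names are my recollection of the cookbook’s layout at the time, and the values are placeholders for your own keys:

```ruby
# cookbooks/datadog/attributes/default.rb
# Placeholder values -- substitute the API Key from Datadog's "Setup" page
# and the Application Key hash created for "chef-handler".
default['datadog']['api_key']         = 'YOUR_API_KEY_HERE'
default['datadog']['application_key'] = 'YOUR_CHEF_HANDLER_APP_KEY_HERE'
```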

$ knife cookbook upload datadog

This places the cookbook on my Chef server, and is now ready to be referenced by a node or role. I use roles, as it’s much more manageable across multiple nodes.

I update the `common-node` role we have to include “recipe[datadog::dd-handler]” as one of the first recipes to execute in the run list.

The common-node role applies to all of our systems, and since they all run Chef, I want them all to report on their progress.
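In the Ruby role DSL, that change looks roughly like this – `some_base_cookbook` is a placeholder for whatever else the role already runs:

```ruby
# roles/common-node.rb -- sketch of the role change
name "common-node"
description "Base role applied to every node"

# dd-handler goes first so the handler is registered before the rest of
# the run executes and can capture everything that follows.
run_list(
  "recipe[datadog::dd-handler]",
  "recipe[some_base_cookbook]"  # placeholder for the existing run list
)
```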

And then let it run.

END MOST IMPORTANT STUFF

Since our chef-client runs on a 30-minute interval, and not all nodes execute at the same time, this makes for some interesting graphs at the more recent time slices – not all the data comes in at the same time.  That’s something to get used to.

Here’s an image of a system’s dashboard with only the Chef metrics:

Single Instance dashboard
It displays a 24-hour period, and shows that this particular instance had a low variance in its execution time, as well as not much is being updated during this time (a good thing, since it is consistent).

On a test machine I tossed together, I created a failure, and here’s how it gets reported back to the newsfeed:

 

Testing a failure
As you can see, the stacktrace attempts to provide me with the information I need to diagnose and repair the issue. Once I fixed it and Apache could start, the event was logged in the “Low Priority” section of the feed (since successes are expected, and failures are aberrant behavior):

Test passes

All this is well and wonderful, but what about a bunch of systems? Well, I grabbed a couple snaps off the production environment for you!

These are aggregates I created with the graphing language (which I had never really read before today!):

Production aggregate metrics

Being able to see the execution patterns, I noticed a bump closer to the left side of the “Resource Updated” graph. I investigated, and someone had deployed a new rsyslog package – so there was a temporary increase in deployed resources, and now there are slightly more resources to manage overall.

The purple bump seen in the “Execution Time” graph led me to investigate, and I found a timeout in that system’s call to an “apt-get update” request – probably the remote repo was unavailable for a minute. Having the data available to make that correlation made investigating this problem really fast, easy, and simple. More importantly, since it has been succeeding ever since, there is no cause for alarm.

So now I have these two technologies – Chef to tell the kids (the servers) to wear coats, and Datadog to tell the parents (me) if the kids wore the coats or not, and why.

Really, just wear a coat. It’s cold out there.

———–

Tested on:

  • CentOS 5.7 (x64), Ruby 1.9.2 v180, Chef 0.10.4
  • Ubuntu 10.04 (x64), Ruby 1.8.7 v352, Chef 0.9.18
Used:

Time goes by, so slowly

I seem to be letting larger amounts of time slip by between posts, and that kind of makes me sad.

Between having the ability to Tweet, update my Facebook status, and Google Buzz, I feel that sometimes I just don’t want to write, and that is a Bad Thing.

Writing is a great way to dump some of the thoughts, feelings and ideas from inside this mess of a brain to written word, and in the past has allowed me to review these at a later date to see what the heck I was thinking and talking about.

Now I am not committing to writing regularly, or even on any set schedule, but just doing it now and then seems to help out.

In the recent past, I’ve been tinkering with all kinds of technologies – from Tcl to Python and PowerShell, from WordPress PHP and CSS to Google App Engine, and even more in the hardware and software realms.

One of the things I am teaching myself is how to understand enough of the lowest possible level to get the core ideas, and then be able to make the jump into the high-level arena, where having the big picture is crucial.

Some of that lies within data visualization, some of it relies on knowing the inner workings of a system, another is how to get data in and out of a management interface, and trying to figure out what is the question you want answered.

I think figuring out these kinds of things is the challenge I like most.