Posts from March 2014

Last Thursday at about 6:40 a.m. we received an email threatening a Distributed Denial of Service (DDoS) attack on our systems unless we paid a ransom of 2.5 bitcoins. Within minutes an attack was launched and our site became inaccessible. Our OnCall team immediately went to work to mitigate the attack.

In a DDoS attack, a large number of compromised computers (collectively called a botnet) repeatedly sends many simultaneous web requests to a website, which overwhelms the site’s ability to process regular traffic.

To a regular customer, the site appears unresponsive or, at best, really, really slow. Though such an attack is annoying and potentially costly to the customer, it’s important to note that in no way was any customer data compromised. The attack was directed at the system that handles incoming web requests – our load balancer grid. Our databases and other internal systems were unaffected. In fact, during the attack scheduled messages continued to be sent.

After verifying the attack was legitimate, HootSuite’s OnCall team focused on getting our load balancer (LB) grid back online. We first attempted to identify and isolate the malicious traffic so that we could block it before it reached the LBs. However, our LBs were unable to perform diagnostic activity because they were maxed out handling incoming traffic. Next, we turned our attention to scaling up our capacity to handle the incoming traffic. Fortunately, we have a solid configuration management system in place powered by Ansible, which allows us to spin up new LBs and add them to our grid quickly. That, combined with the elasticity provided by AWS, enabled us to ramp up the capacity of our LB grid to handle the traffic from the attack and process regular traffic at the same time – bringing the site back online for our users at approximately 9:40 a.m. We then continued to triage the malicious traffic in an attempt to block it; however, about an hour after we brought the site back up, the attack stopped.

We recognize that HootSuite is an essential tool for our customers, and as such we invest a lot of effort in making sure it is available 24/7. We’re taking the following steps to reduce downtimes and improve resiliency to DDoS attacks in the future:

  1. Hardening the outer layer of our architecture so malicious traffic gets dropped instead of bogging down servers
  2. Improving the effectiveness of our monitoring systems so we can pinpoint problems quickly
  3. Improving our auto-scaling infrastructure to make it even faster and easier to add capacity to handle attacks
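To illustrate the first point, hardening the outer layer typically means rejecting or rate-limiting abusive clients before they ever consume backend resources. A minimal sketch using nginx’s request-rate limiting — the zone name, limits, and upstream name here are hypothetical, not our production configuration:

```nginx
# Hypothetical edge-layer rate limit: cap each client IP at 10 req/s,
# with a small burst allowance, before traffic reaches the backends.
# limit_req_zone must live in the http{} context of nginx.conf.
limit_req_zone $binary_remote_addr zone=perip:10m rate=10r/s;

server {
    listen 80;
    location / {
        limit_req zone=perip burst=20 nodelay;
        proxy_pass http://app_backend;  # app_backend is an assumed upstream
    }
}
```

Limits like these don’t stop a large distributed attack on their own, but they cheaply drop the worst offenders so the LBs stay responsive enough to diagnose the rest.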

You may be wondering why we didn’t just fork over the 2.5 Bitcoins (about $1200) to pay off the attacker. For one, we are not interested in negotiating with criminals. Additionally, paying the ransom would likely lead to future attacks.
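For context, the quoted figures imply an exchange rate of roughly $480 per bitcoin at the time:

```shell
# The ransom was 2.5 BTC, valued at about $1200 (both figures from the post).
# 1200 / 2.5 == 12000 / 25, keeping the division in shell integer arithmetic.
echo "$(( 12000 / 25 ))"   # → 480 (approximate USD per bitcoin)
```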

It’s probable that the attacker was testing our willingness to bargain – 2.5 Bitcoins is high enough to sound legitimate, but low enough that there was a reasonable chance we would pay. If we handed over the money it’s likely another ransom would follow with an even higher price tag.

It’s important to us that our customers know what’s going on during an outage – we commit to posting a status update as soon as we detect an issue and at least every 15 minutes after that until it’s resolved. The steps outlined above will ensure we can diagnose issues even faster and get updates out to customers as quickly as possible.

I would like to thank the folks who were eager to help and provided valuable information to our team, having suffered a similar attack a week before. Our team would be happy to collaborate with anyone suffering a similar attack – feel free to reach out to me on Twitter at @sedsimon if you have information. In closing, I would like to offer my sincere apologies to all customers affected by this attack, and to thank you for your patience and continued support.

We recently converted our entire JavaScript codebase to AMD modules that we load using RequireJS, the must-have “script loader that improves speed and code quality.”

After doing so, we quickly realized that following the recommended guidelines would have the opposite effect on our dashboard and make us lose the benefit of the biggest browser performance improvement ever: the look-ahead pre-parser. Read More …

At HootSuite, we use Jenkins as our continuous integration tool and Git as our source code manager (SCM), which we host in-house with GitLab.

Our git workflow consists of short-lived feature-branches that are merged into the production branch as soon as possible. We build and test each feature branch as it is pushed to the Git remote. This gives our engineers feedback before we merge to production and avoids delays in our continuous deployment pipeline.

Unfortunately, Jenkins is not well suited to using Git with a feature-branch workflow out of the box: Jenkins’s Git plugins assume a single-branch build, which is at odds with building multiple feature branches.

This blog post will detail how we overcame these deficiencies to successfully use Jenkins with a multiple feature-branch Git workflow.

Read More …

We’ve started using Elasticsearch for a few of our projects. It’s a great tool for storing and querying giant text datasets. In the words of the creators, “Elasticsearch is a flexible and powerful open source, distributed, real-time search and analytics engine”. We like it because it’s fast, easy to use, and very useful in many situations.

One of the things that Elasticsearch does very well is to listen to a large stream of data and index it. This is done very easily using a “river”. A river is an easy way to set up a continuous flow of data that goes into your Elasticsearch datastore. Quoting the creators once more, “A river is a pluggable service running within Elasticsearch cluster pulling data (or being pushed with data) that is then indexed into the cluster.”

It is more convenient than the classical way of manually indexing data because once configured, all the data will be updated automatically. This reduces complexity and also helps build a real-time system.

There are already a few rivers out there, among which are:

  • Twitter River Plugin for ElasticSearch
  • Wikipedia River Plugin for Elasticsearch

There was no GitHub river, though, so we set out to build our own. And thus the Elasticsearch GitHub river was born.

We’re big GitHub fans and use it extensively to manage our daily activities. Sometimes we find ourselves looking at heaps of issues that scream for attention. To avoid this pile up, we want to understand better how our team works, how issues end up being forgotten, and how to address the real important ones first. The GitHub river was built to give us this additional insight we crave, and hopefully it will help you too.

Now let’s get down to business and use it:

Using the GitHub river

The GitHub river allows us to periodically index a repository’s events. The repository can be either public or private; if you have a private repository, you’ll need to provide authentication data.

To get a taste of the workflow possibilities this opens, let’s index some data from another open source project we like, use, and contribute to: Lettuce. We can explore it in a pretty Kibana dashboard.


Assuming you have Elasticsearch already installed, you’ll need to install the river. Make sure you restart Elasticsearch after this so it picks up the new plugin.
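Installation is a one-liner with Elasticsearch’s plugin tool. The plugin identifier below is inferred from the project’s GitHub location, so double-check it against the river’s README before running:

```shell
# Install the GitHub river plugin (identifier assumed from the repo name;
# older Elasticsearch versions use -install instead of --install).
bin/plugin --install ubervu/elasticsearch-river-github

# Restart Elasticsearch so it picks up the new plugin
# (command varies by platform; this assumes a service-managed install).
sudo service elasticsearch restart
```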

Now we can create the GitHub river:

curl -XPUT localhost:9200/_river/ghriver/_meta -d '{
    "type": "github",
    "github": {
        "owner": "gabrielfalcao",
        "repository": "lettuce",
        "interval": 3600
    }
}'
Elasticsearch will immediately start indexing the most recent 300 public events of gabrielfalcao/lettuce. The 300-event limit is imposed by the GitHub API. After one hour, it will check again for new events.

The data is accessible in the gabrielfalcao-lettuce index, where you will find a different document type for every GitHub event type.
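To check what the river has stored, you can query the index directly. This assumes Elasticsearch is running on localhost:9200 and the river has had time to index; IssuesEvent is one of the standard GitHub event types:

```shell
# Count indexed documents of one event type
curl -s 'localhost:9200/gabrielfalcao-lettuce/IssuesEvent/_count'

# Or pull back a few recent documents from the whole index
curl -s 'localhost:9200/gabrielfalcao-lettuce/_search?size=5&pretty'
```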

Using Kibana to visualize the data

Great, we have some data. Now what? In order to make some sense of it, let’s get Kibana up and running. First, you need to download and extract the latest Kibana build. To access it, open kibana-latest/index.html in your favorite web browser.

What you see now is the default Kibana start screen. You could go ahead and configure your own dashboard, but to speed things up we suggest you import the dashboard we’ve set up. First, download the JSON file that defines the dashboard. Then, at the top-right corner of the page, go to Load > Advanced > Choose file and select the downloaded JSON.

That’s it! You now have a basic dashboard set up that shows some key graphs based on the GitHub data you have in Elasticsearch. Furthermore, thanks to the river and the way the dashboard is set up, you will get new data every hour and the dashboard will refresh accordingly.

Happy hacking!

A word about open source:

We’re big fans of open source. We use many great open source products and also do our best to give back as much as we can. Check out the uberVU GitHub page for some of the projects we’ve built. We’re constantly working on and contributing to our own and other projects. If you want to contribute to our projects but don’t know where to start, or you think that we can help a project in any way, let us know!

uberVU is part of the Hootsuite team.