Posts from April 2016

The Problem

Migrating our monolithic system towards microservices over the last couple of years has produced 20+ internal services (so far). Many of these services lacked good API documentation: the kind that lets our developers easily grasp every endpoint, behaviour, and error code. We needed three things from our API documentation: a common, easy way to document each endpoint of every service; a centralized location to access all of it; and a way for it to update automatically after changes to an endpoint. Swagger to the rescue.

Swagger


Swagger is a specification and complete framework implementation for describing, producing, consuming, and visualizing RESTful web services. It also comes with Swagger UI, a great documentation viewer for endpoints: it lists each endpoint and its details, and even provides a UI that can run test queries against test servers. Another bonus is that Swagger documentation lives in source-code annotations, which reduces the overhead of managing separate documentation sources. Sounds good!

  • Swagger documentation library for Play framework? Check.
  • Swagger documentation library for arbitrary Java web apps? Check.
  • Swagger documentation for Scala endpoints in general? Nope.

sbt-swagger

No Swagger for Scala endpoints? OK, let’s help. We built sbt-swagger, an SBT plugin that generates swagger-ui compatible JSON data from the Swagger and JAX-RS (jsr311) annotations in your code. Any Scala application that provides APIs can benefit from this plugin.
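
To give a feel for the output, here is a minimal sketch of a swagger-ui compatible resource listing; the path and description are hypothetical, and the exact shape depends on the Swagger spec version you target:

    {
      "apiVersion": "1.0",
      "swaggerVersion": "1.2",
      "apis": [
        { "path": "/members", "description": "Operations on team members" }
      ]
    }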

Today, we released it as open source. We have been using sbt-swagger in our products, so far exclusively to document the internal Scala microservice component that provides our internal protocol over ZeroMQ.

Using sbt-swagger

Three steps, sketched below:

  1. Add sbt-swagger config/dependency in your SBT build file
  2. Add docs in your Scala code with JAX-RS (jsr311) annotations
  3. Run sbt swagger
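
As a minimal sketch of step 1 (the coordinates and version are illustrative; check the sbt-swagger README for the released ones), in project/plugins.sbt:

    // project/plugins.sbt: illustrative coordinates, not the published ones
    addSbtPlugin("com.hootsuite" % "sbt-swagger" % "1.0.0")

For step 2, a hypothetical endpoint annotated with JAX-RS (jsr311) and Swagger annotations (depending on your annotations version, the Swagger package may be com.wordnik.swagger.annotations or io.swagger.annotations):

    import javax.ws.rs.{GET, Path, PathParam, Produces}
    import com.wordnik.swagger.annotations.{Api, ApiOperation, ApiParam}

    // Hypothetical service: the annotations describe the path, verb, and
    // response type that the plugin turns into swagger-ui compatible JSON.
    @Api(value = "/members", description = "Operations on team members")
    @Path("/members")
    class MemberService {

      @GET
      @Path("/{id}")
      @Produces(Array("application/json"))
      @ApiOperation(value = "Fetch a member by id", response = classOf[Member])
      def getMember(@ApiParam("Member id") @PathParam("id") id: String): Member =
        ??? // look the member up in your data store
    }

    case class Member(id: String, name: String)

Running sbt swagger (step 3) then emits the swagger-ui compatible JSON for every annotated endpoint.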
Read More …

Hootsuite is no stranger to tearing down monoliths. In fact, over the past year we’ve built fifteen microservices to go along with the deconstruction of our monolith. The extraction of each microservice came with its own complications, but they have all generally followed one development blueprint.

A monolith is the result of legacy code fossilizing over years of development, with ongoing contributions from many people. This makes the code incredibly hard to refactor, because it is always in use and always depended upon. In our case, microservices are a tool for tearing apart the monolith to address its unreliability. However, microservices in general are not limited in their purpose or functionality.

Hootsuite’s Microservices

Scoping & Prioritization

The first major decision when tearing down your monolith is which part to break off first. Do you want to slice it horizontally or vertically, julienne it or dice it? At Hootsuite, before writing any code, we outlined all business objects and use cases to determine what might make a good microservice, and carefully grouped models that frequently interact with each other so we would not end up with a tangled web of distributed services.

Example

It is clear that business objects like a Team and a Member of a Team should be migrated together. However, if each Team must belong to a single Organization, then it also makes sense to pull out the Organization object rather than leave it inside the monolith.
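
A minimal sketch of that grouping (the types and fields are hypothetical): Team carries a reference to Organization, so extracting Teams without Organizations would leave a reference dangling across the service boundary.

    // Hypothetical domain sketch: Team references Organization, so both
    // must move to the new service together to avoid a cross-service
    // reference back into the monolith.
    case class Organization(id: Long, name: String)
    case class Team(id: Long, organizationId: Long, name: String)
    case class Member(id: Long, teamId: Long, displayName: String)

Read More …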

Working at Hootsuite has reinforced one ideal: security matters. In Hootsuite’s continuous integration and deployment environment, our developers take security to heart, and this is reflected in our code base. Working hand-in-hand with our developers, the security team strives to improve by staying current with the latest practices and evolving alongside them. As part of our commitment to staying relevant, we continuously look for tools that may simplify or remove our pain points. One of these pain points is managing the large amount of code flowing through the pipeline and ensuring that it meets our security standards: with thousands of lines of code coming through every day, how do we guarantee those standards are met?

Read More …