Background

  All of our infrastructure at Hootsuite runs on AWS. We use thousands of AWS EC2 instances to power our microservices. EC2 instances are Amazon’s offering of resizable virtual servers hosted on the cloud. Particular groupings of these servers often share a virtual private network.

  Some of our servers expose public endpoints to the world, creating an attack vector. Let’s say a hacker succeeds in breaching one of these endpoints. The immediate problem is that they can gain access to the server, but the bigger problem is that they can also gain access to everything on the network, including other critical AWS resources. How can we preempt this exploitation?

  Fortunately, Amazon’s security isn’t that short-sighted. AWS provides IAM roles to prevent exactly this. What are IAM roles? An IAM (Identity and Access Management) role is a set of permissions that can be attached to entities such as accounts and servers to grant (and implicitly, limit) access to AWS resources. Since all API requests must be signed with AWS credentials, IAM roles are a flexible way of assigning these credentials.

  Excellent: this means we can create some IAM roles, attach them to our servers, and trust that this will give us tight control over who can access what from where. This should stop a hacker who has breached our server’s security from exploiting that position to cause further damage elsewhere.

  In this diagram, the blue IAM role grants full access to the blue S3 bucket. We want API requests coming from blue services to be able to access sensitive internal data while blocking requests from less secure services. We do this by attaching the IAM role to the node that the single blue service here lives on, Node 1. This privileges the billing service while restricting the exposed web service, since Node 2 does not have an IAM role that grants it the proper permissions to the bucket. Now we have what we wanted: the required credentials assigned to privileged services and withheld from the rest.
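
  To make the blue role concrete, its permission policy might look something like the following sketch, written as the JSON document AWS expects (shown here as a Python dict). The bucket name is purely illustrative, not taken from our actual configuration:

```python
# A minimal sketch of the "blue" role's permission policy.
# "blue-bucket" is a hypothetical bucket name used for illustration only.
blue_role_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:*",  # full access to the blue bucket
            "Resource": [
                "arn:aws:s3:::blue-bucket",
                "arn:aws:s3:::blue-bucket/*",
            ],
        }
    ],
}
```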

  Not so fast. At Hootsuite, our services are not tied down to specific nodes. They live in containers — highly portable, lightweight, self-contained software packages. These containers are orchestrated by Kubernetes.

  Kubernetes is a clustered container orchestration platform. It has its own scheduling algorithm that determines which nodes containers get scheduled on based on a variety of factors. While we can dictate some aspects of this scheduling, we don’t oversee the whole process. As a result, we can’t always predict how containers will move around on our cluster. The fact that containers, and the services that run in them, are not tethered to specific nodes in a one-to-one fashion throws a wrench into our security model. The diagram shown earlier actually looks more like this:

  What is going on here? We still have the blue IAM role and the corresponding blue S3 bucket, but now we have two containers running on the same node. Only the billing service should have access to the bucket, yet if we attach the IAM role to its node, we risk granting the exposed service equal access. On the other hand, if we don’t attach the IAM role, we prevent the billing service from reading or writing to the bucket. The consequences of both choices are unacceptable, presenting a conundrum. The previous diagram illustrated a many-to-one relationship between containers and their respective nodes, but a many-to-many representation comes closest to modelling a real cluster:

  On a real cluster, blue and red services are distributed across any number of nodes, with nodes hosting any number of services. Since these services have varying permissions to resources, it makes no sense to funnel all their requests through a shared IAM role — a single set of permissions — just because they share a node.

  It would be great if we could attach multiple roles to a node and switch between them as needed. However, as the AWS documentation explains, “Note that only one role can be assigned to an EC2 instance at a time, and all applications on the instance share the same role and permissions.”

  This is the heart of the issue. Our Kubernetes clusters have multiple containers with different access privileges running on the same nodes. Moreover, these containers move around continuously and unpredictably across nodes. We need fine-grained, adaptable credential management to secure services’ access to resources on our distributed system. AWS’s standard approach of attaching IAM roles to instances falls short of our security needs. This project was undertaken to find a solution to this problem.

Kube2IAM

  We wanted to solve this problem without eschewing IAM roles altogether. It would have been possible to sign services’ API requests without them, but given the advantages that IAM roles provided and their integration in the AWS ecosystem, it seemed logical to continue using them.

  The ideal solution would fix the deficiency of IAM roles by making a simple adjustment. Recall that by design, IAM roles can only be attached to entities such as accounts and servers. What if we could attach IAM roles to an additional entity — containers? This IAM-per-container idea is not novel, and after doing some research, it was clear that other companies were also thinking of this approach. There were a few open-source projects that showed potential and caught our attention. Among them were Lyft’s metadataproxy and Atlassian director Jerome Touffe-Blin’s kube2iam.

  At first, the metadataproxy seemed to do exactly what we wanted. It provided a mechanism by which containers could each be assigned an IAM role, and when services in those containers made API requests, they would obtain credentials exclusive to their roles. Sharing a node did not equate to sharing permissions. The problem with the metadataproxy, as we discovered, was not in its approach but in its implementation. The metadataproxy only supported the Docker client and Docker networking model, and so made incorrect assumptions about our network layout, particularly when it came to container IP addresses. To use it, we would have had to significantly extend the code.

  Not finding success with that open-source project, we turned to the next: Kube2IAM. Kube2IAM was intended for Kubernetes clusters, as advertised in the name. It overcame the issues of metadataproxy by supporting different networking overlays that followed the Kubernetes standards. Reading the documentation, we recognized that its fundamental approach did not differ from metadataproxy’s. Their solution was to:

“redirect the traffic that is going to the EC2 metadata API to a [Kube2IAM] container running on each instance. The Kube2IAM container will make a call to the AWS API to retrieve temporary credentials and return these to the calling container. Other calls [that don’t require credentials] will be proxied to the EC2 metadata API.”

  There are a few unfamiliar terms here. The EC2 metadata API is what services must query to obtain credentials to sign their API requests. As stated earlier, all services need AWS credentials to make API requests. Credentials are part of the instance metadata (other examples of metadata are hostnames, public keys, and subnets), and this metadata is acquired by curling the static endpoint http://169.254.169.254/latest/meta-data/.
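
  For example, a service running on an EC2 instance can fetch its temporary credentials with two requests to that endpoint. Here is a minimal Python sketch; the role name returned depends on whatever role is attached to (or, with Kube2IAM, proxied for) the caller:

```python
import json
import urllib.request

METADATA = "http://169.254.169.254/latest/meta-data"

# First, ask which IAM role is available to the caller.
with urllib.request.urlopen(f"{METADATA}/iam/security-credentials/") as resp:
    role_name = resp.read().decode().strip()

# Then fetch the temporary credentials for that role.
with urllib.request.urlopen(f"{METADATA}/iam/security-credentials/{role_name}") as resp:
    creds = json.loads(resp.read())

# creds contains AccessKeyId, SecretAccessKey, Token, and Expiration,
# which are used to sign AWS API requests.
print(creds["AccessKeyId"], creds["Expiration"])
```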

  Kube2IAM doesn’t want services to be able to directly query for their credentials. Instead it wants to act as a proxy that regulates metadata requests. That’s why the description notes that traffic to the metadata API should be redirected to a container managed by Kube2IAM itself. Thus the Kube2IAM container acts as a credential authority.

  One important point not mentioned in the description above is how the Kube2IAM container will “retrieve temporary credentials” for each calling container. It does this via an AWS API action AssumeRole. The AssumeRole action allows “existing IAM users to access AWS resources that they don’t already have access to.” When called with a valid IAM role, it assumes that role and returns a set of temporary security credentials for that role.

  If this sounds confusing, that’s understandable. Picture a special (gold) IAM role that can’t access any resource on the network. It is special by virtue of one property only: it can assume other roles that “trust” it. If the other roles have greater privileges, the gold IAM role temporarily elevates its own privilege by assuming those roles. Once it does, it can return the credentials for the roles it assumed.
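
  In code, the gold role’s behaviour corresponds to the STS AssumeRole call. Here is a hedged sketch using boto3; the role ARN, session name, and bucket client are illustrative, not our actual configuration:

```python
import boto3

# Hypothetical ARN of a "blue" role that trusts the caller's (gold) role.
BLUE_ROLE_ARN = "arn:aws:iam::123456789012:role/blue-iam-role"

sts = boto3.client("sts")

# Assume the blue role and receive short-lived credentials for it.
response = sts.assume_role(
    RoleArn=BLUE_ROLE_ARN,
    RoleSessionName="billing-service",  # illustrative session name
    DurationSeconds=900,
)

creds = response["Credentials"]
# These temporary credentials can now be returned to the caller and used to
# sign requests, e.g. to S3.
s3 = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
```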

  The following diagram depicts the Kube2IAM container as a gold diamond. For the sake of visual clarity, the gold diamond is drawn below the node and given a different colour and shape from the other services. In reality, Kube2IAM is just an ordinary container running alongside the other containers on the node. It is also important to note that the gold IAM role is attached to the node itself, not the container. Since it is the only role attached to the node, we aren’t breaking AWS’s one-role-per-instance rule.

  Each service is given a role annotation when it is deployed. The services in the diagram are annotated with “red IAM role” or “blue IAM role” based on the access permissions they should have. Both of these roles “trust” the gold IAM role to assume them and return their credentials. When a red service makes a metadata API request, Kube2IAM reads its role annotation and assumes the red role, returning credentials exclusive to red services. A hacker cannot change the role annotation of a service they have compromised once it is deployed unless they are also a cluster administrator.
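
  Concretely, the annotation is just pod metadata; Kube2IAM’s documented annotation key is iam.amazonaws.com/role. Here is a minimal sketch using the Kubernetes Python client, with hypothetical service and role names:

```python
from kubernetes import client

# A minimal pod definition carrying a Kube2IAM role annotation. The names
# ("billing-service", "blue-iam-role", the image) are illustrative only.
pod = client.V1Pod(
    metadata=client.V1ObjectMeta(
        name="billing-service",
        annotations={"iam.amazonaws.com/role": "blue-iam-role"},
    ),
    spec=client.V1PodSpec(
        containers=[
            client.V1Container(name="billing", image="billing-service:latest")
        ]
    ),
)

# When this pod queries the metadata endpoint, Kube2IAM reads the annotation,
# assumes "blue-iam-role", and returns that role's temporary credentials.
```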

  Having a single role that can assume other roles and return credentials depending on the requester gives us the fine-grained, adaptable credential management we need to secure services’ access to resources on our network. Thus it solves our original problem.

Setup

  Detailed setup instructions can be found in the Kube2IAM docs. To highlight the main points: Kube2IAM runs as a DaemonSet in Kubernetes, so one instance of the Kube2IAM container runs on every node and is restarted if killed. There were a few implementation challenges. First, we were confused about how AssumeRole worked and how to define trust policies. Figuring out how to evaluate whether Kube2IAM was working properly was not straightforward either; we made some assumptions about its behaviour that weren’t true and spent too long debugging the right results for the wrong test.
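
  For reference, the part that confused us most — the trust relationship — boils down to a policy on each assumable role that names the node’s (gold) role as a trusted principal. A sketch with hypothetical ARNs and role names, using boto3:

```python
import json
import boto3

# Hypothetical ARN of the gold role attached to the Kubernetes nodes.
GOLD_ROLE_ARN = "arn:aws:iam::123456789012:role/gold-node-role"

# Trust policy: allow the gold role to call sts:AssumeRole on this role.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"AWS": GOLD_ROLE_ARN},
            "Action": "sts:AssumeRole",
        }
    ],
}

iam = boto3.client("iam")
iam.create_role(
    RoleName="blue-iam-role",  # illustrative role name
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)
```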

Conclusion

  Kube2IAM is a solid add-on for Kubernetes clusters running on AWS. It secures the network by granting containers IAM roles and enforcing their access privilege through role annotations. On a personal note, implementing this PoC provided a great opportunity to learn about security and networking.

About the Author

Nafisa Shazia is an Operations Developer Co-op on the Kubernetes Initiatives team. She studies Computer Science at UBC.