Guest post originally published on StackHawk’s blog by Zachary Conger, Senior DevOps Engineer at StackHawk
Overview
We love containers.
At StackHawk we have always been fans of containers. From day one we made the decision to ship HawkScan (our application security scanning engine) as a container, and this is part of what makes it simple to use and integrate into any workflow.
We also bet on containers for our cloud platform, where we run microservices in Kubernetes to handle APIs, authentication, notifications, and all the magic that happens behind the scenes to make HawkScan so powerful and easy to use.
This is the story of how we have approached the challenges of developing a rapidly expanding set of microservices, and some of the tools and techniques we have used. Unfortunately, our process is not something we can package up and share with the world, simply because it is bespoke to our environment. But we wanted to share some of the lessons we have learned.
The Problem
We like developing on our laptops.
When you’re building software, it’s important to be able to iterate quickly. Code, build, test, repeat. For HawkScan and our first handful of microservices, it was simple enough to do that on our laptops, and containerize them as a final step. But as the number of microservices grew, tests of a single microservice became meaningless without the others.
We already had a Kubernetes integration environment in AWS, and a full CI/CD pipeline to push freshly built microservices into it on code commit. But that pipeline added minutes to the iteration cycle, and those minutes add up fast.
We needed a way to keep iterating quickly. On our laptops. Like we’re used to.
The Solution (v1)
It’s Docker Compose, with each project contributing its own snippet.
When you think of running compositions of containers on your laptop, Docker Compose comes immediately to mind. But did you know that you can combine multiple compose files to create a larger composition? We thought this would be a great way to build out the entire microservice environment on each developer’s laptop.
We added a requirement that each of our microservice projects should include its own service definition in a compose file and publish it to our artifact repository. Each project would also include a list of the other StackHawk microservices it depended on; that list would be used to pull down all of the compose files a developer needed to construct the whole integration environment locally for a given project.
A service file for one such microservice, service1, might look like this:
# service1.yml
version: "3.7"
services:
  redis-service1:
    image: redis:latest
    container_name: redis-service1
  postgres-service1:
    image: postgres:latest
    container_name: postgres-service1
    environment:
      - POSTGRES_PASSWORD=super-secure-password
      - POSTGRES_USER=user1
      - POSTGRES_DB=service1
  service1:
    image: stackhawk/service1:latest
    container_name: service1
    ports:
      - "3200:3200"
    depends_on:
      - redis-service1
      - postgres-service1
      - service2
      - service3
      - service4
This compose file says that service1 requires its own Redis instance, its own Postgres instance, and three other StackHawk platform microservices, service2, service3, and service4.
The script to build service1’s microservice integration environment would know to download the service files for service2, service3, and service4. It would produce a command like this:
docker-compose -f service2.yml -f service3.yml -f service4.yml up
Notice that the project in question, service1, is not included in the docker-compose command above. That’s because when a developer is working on the service1 project, we want to leave it to her to run it however she likes in order to iterate rapidly.
All the other microservices in the composition come up as containers listening on the localhost address, each with its own dependencies, such as Redis and Postgres. So in a given project, you run a simple script to launch all of the dependent microservices as containers, leaving you free to work on your service running directly on the laptop.
For example, if you are working on your React front-end web app, all of the backend microservices will come up in Docker Compose, but not the front end. That means you can quickly iterate on your project – code, build, test, repeat – with your own personal integration environment.
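To make that concrete, here is a minimal sketch of what such a script could look like. The dependency list file name (stackhawk-deps.txt) and the artifact repository URL are hypothetical stand-ins for illustration, not our actual layout:

#!/usr/bin/env bash
# Hypothetical sketch: pull each dependency's published compose file and
# bring the whole composition up, excluding the project under development.
set -euo pipefail

ARTIFACT_REPO="https://artifacts.example.com/compose"  # assumed location

COMPOSE_ARGS=()
while read -r dep; do
  # Download the published compose snippet for each dependency.
  curl -fsSL -o "${dep}.yml" "${ARTIFACT_REPO}/${dep}.yml"
  COMPOSE_ARGS+=(-f "${dep}.yml")
done < stackhawk-deps.txt  # e.g. service2, service3, service4, one per line

# Merge all the dependency compose files into one composition.
docker-compose "${COMPOSE_ARGS[@]}" up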
It was awesome.
The Next Problem
The platform got too big.
Well, that didn’t scale! Within months, we had built up enough microservices that we were melting laptops. Even our relatively powerful machines had insufficient memory to handle all of that container sprawl, leaving only scraps for the developers and their voracious IDEs.
The Options
Naturally I appealed to Corporate for new laptops. That request is pending because I forgot my cover sheet.
We knew the real answer was to move these developer workloads to the Kube. We already had a sandbox Kubernetes instance for experimentation and hackery. The only question was how to dynamically and safely build up environments on that cluster for each developer.
The Kubernetes blog has a great article called Developing on Kubernetes that describes many of the best available tools to help developers integrate Kubernetes into their workflow. There are lots of cool options, but none of them quite matched our situation, and all of them would require developers to rethink the way they work.
The Solution (v2)
Same as before, but enKubernated
We really loved our Docker Compose solution. Why couldn’t we just do that, but in Kubernetes? Then we found Kompose.
Kompose converts Docker Compose files into Kubernetes manifest files. This allowed us to leverage all the work we had already put into cultivating our Docker Compose service files for each project. And with yq (the jq or sed of YAML files), we could easily manipulate those manifests to make any necessary final tweaks.
With Kompose and yq, we had the flexibility to generate and modify manifests to produce ideal development environments for each engineer. These environments performed better, and left all of the resources on our laptops for hungry IDE and compiler operations.
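For illustration, converting one of the published compose files and tweaking the result could look something like this. We assume mikefarah’s Go yq (v4 syntax) here; the output file name follows Kompose’s service-name convention, and the memory-limit edit is just an example of the kind of final tweak we mean:

mkdir -p manifests

# Convert a published compose file into Kubernetes manifests.
kompose convert -f service2.yml -o manifests/

# Patch the generated Deployment in place (illustrative tweak; yq v4 syntax).
yq -i '.spec.template.spec.containers[0].resources.limits.memory = "512Mi"' manifests/service2-deployment.yaml

# Apply the results to the cluster.
kubectl apply -f manifests/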
Pulling it Together
Welcome to the DevKube
We built a fair-sized shell script to manage the process of downloading Docker Compose files, converting them to manifests, and deploying them to Kubernetes (a simplified sketch follows the list below).
We call our script devkube.sh, and it allows developers to easily:
- Check for prerequisites, such as kubectl, Kompose, and yq.
- Create a namespace for each user based on their username, for isolation.
- Download compose files for each microservice and convert them to manifests with Kompose and yq.
- Deploy DevKubes and tear them down.
- Update running DevKubes with the latest versions of microservices.
- Discover and proxy service ports so they are reachable on developer laptops at the localhost address.
- Back up DevKube databases to S3 and restore them on startup, to maintain state between DevKube sessions.
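Here is a heavily simplified sketch of a few of those steps. The namespace convention, service names, and S3 bucket are illustrative assumptions; the real devkube.sh handles many more details, such as updates, teardown, and error handling:

#!/usr/bin/env bash
# Heavily simplified, illustrative sketch of devkube.sh-style steps.
set -euo pipefail

# Check for prerequisites.
for tool in kubectl kompose yq aws; do
  command -v "$tool" >/dev/null || { echo "missing prerequisite: $tool"; exit 1; }
done

# Create an isolated namespace per developer, based on their username.
NAMESPACE="dev-$(whoami)"
kubectl create namespace "$NAMESPACE" --dry-run=client -o yaml | kubectl apply -f -

# Convert each downloaded compose file and deploy the manifests.
mkdir -p manifests
for f in service2.yml service3.yml service4.yml; do
  kompose convert -f "$f" -o manifests/
done
kubectl apply -n "$NAMESPACE" -f manifests/

# Proxy a service port so it is reachable at localhost on the laptop.
kubectl port-forward -n "$NAMESPACE" svc/service2 3200:3200 &

# Back up a DevKube database to S3 (illustrative bucket name).
kubectl exec -n "$NAMESPACE" deploy/postgres-service1 -- \
  pg_dump -U user1 service1 | aws s3 cp - "s3://example-devkube-backups/$NAMESPACE/service1.sql"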
It took some refinement to get it working just right, but now it’s reliable and fast, and it brings developers closer to the platform they are targeting. They get more exposure to the tools and details of Kubernetes. And since we still maintain the compose files, running integration tests locally with Docker Compose remains an option.
The Point
It’s great to iterate
Iterating from local development to Docker Compose to Kubernetes has allowed us to efficiently move our development environment forward to match our needs over time. Each incremental step forward has delivered significant improvements in development cycle time and reductions in developer frustration.
As you refine your development process around microservices, think about ways you can build on the great tools and techniques you have already created. Give yourself some time to experiment with a couple of approaches. Don’t worry if you can’t find a general-purpose, one-size-fits-all system that is perfect for your shop.
Maybe you can leverage your existing sets of manifest files or Helm charts. Perhaps you can make use of your continuous deployment infrastructure, such as Spinnaker or ArgoCD, to help produce developer environments. If you have time and resources, you could use Kubernetes libraries for your favorite programming language to build a developer CLI that lets engineers manage their own environments.
Building your development environment for sprawling microservices will be an ongoing effort. However you approach it, you will find that the time you invest in continuously improving your processes pays off in developer focus and productivity.