Dan Farrelly uses a carpentry analogy to explain the problem his company, Buffer, began having as its team of developers grew over the past few years.
“If you’re building a table by yourself, it’s fine,” the company’s architect says. “If you bring in a second person to work on the table, maybe that person can start sanding the legs while you’re sanding the top. But when you bring a third or fourth person in, someone should probably work on a different table.” Needing to work on more and more different tables led Buffer on a path toward microservices and containerization made possible by Kubernetes.
Buffer had been using Elastic Beanstalk, Amazon Web Services’ orchestration service for deploying infrastructure, since around 2012. “We were deploying a single monolithic PHP application, and it was the same application across five or six environments,” says Farrelly. “We were very much a product-driven company. It was all about shipping new features quickly and getting things out the door, and if something was not broken, we didn’t spend too much time on it.”
But things came to a head in 2016. With the growing number of committers on staff, Farrelly and Buffer’s then-CTO, Sunil Sadasivan, decided it was time to re-architect and rethink their infrastructure. Some of the company’s engineers were already using Docker successfully in their development environment. They wanted to go further with Docker, and the next step was looking at options for orchestration.
Kubernetes suited Buffer’s needs. “We wanted to have the kind of liquid infrastructure where a developer could create an app and deploy it and scale it horizontally as necessary,” says Farrelly. “We quickly used some scripts to set up a couple of test clusters, we built some small proof-of-concept applications in containers, and we deployed things within an hour. We had very little experience in running containers in production. It was amazing how quickly we could get a handle on it.”
Kubernetes provided a powerful solution for one of the company’s most distinguishing characteristics: their remote team that’s spread across a dozen different time zones. “We really wanted something where anybody could get a grasp of the system early on and utilize it, and not have to worry that the deploy engineer is asleep,” Farrelly says.
For Buffer’s engineers, it’s an exciting process. “Every time we’re deploying a new service, we need to figure out: OK, what’s the architecture? How do these services communicate? What’s the best way to build this service?” Farrelly says. “And then we use the different features that Kubernetes has to glue all the pieces together. It’s enabling us to experiment as we’re learning how to design a service-oriented architecture.”
At this point, the team at Buffer can’t imagine running their infrastructure any other way—and they’re happy to spread the word. “If you want to run containers in production, with nearly the power that Google uses internally, this is a great way to do that,” Farrelly says. “Pick a couple of things, roll it out, kick the tires on this for a couple of months and see how much it can handle. You start learning a lot this way.”
For a closer look at Buffer’s decision-making process as it rolled out cloud native infrastructure to help its distributed team deploy faster, check out this in-depth case study.