Guest post by Jan Van Bruggen, Developer Relations Lead at itopia
In the past decade, we’ve seen the rise, standardization and meme-ification of “as code”: Infrastructure as Code, Monitoring as Code, Policy as Code and soon perhaps Data as Code. Essentially, “Stuff as Code” is the practice of statelessly automating the management of “stuff” via version-controlled, declarative configuration files. Therefore, it’s worth asking if the same DevOps practice can be applied effectively to a set of unstable resources even more near and dear to developers than their backends – their local coding environments.
The common problem of developer environment configuration drift is a minor inconvenience when revisiting old projects or following outdated tutorials, but it might cost a professional developer days of progress every month to restore/update their environment(s), especially on a technologically-diversified team. Unfortunately, Terraform can’t save us from local dependency hell, and Kubernetes can’t help Windows and macOS laptops collaborate on a codebase… right? Is it possible to solve these local environment problems with declarative configuration files, version control and a practice of stateless automation?
Let’s consider everything “local” that a person depends upon when writing code:
- integrated development environments (IDEs)
- personal settings
- project-specific libraries
- system-wide packages
- operating system (OS)
- computer hardware
- internet connection
- electricity
- cup of water next to their keyboard
One reason that local environments resist stateless configuration is that each of those layers depends on the layers below it, so any careful effort to statelessly automate an upper layer will be undermined by stateful chaos below. For example, a Python project’s dependencies may be defined in a declarative requirements.txt file, but the result of each installation will vary based on which environment variables are set and which OS packages are installed. VS Code’s support for containerized environments was a major step forward, but it depends upon manual maintenance of a Docker installation (as well as a compatible OS and a subscription to run Docker in the first place).
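To make that concrete, here is a minimal sketch (the pinned package below is purely illustrative) of how the same declarative file can behave differently on two machines, because it leans on whatever compilers and OS libraries happen to be present:

```bash
# Illustrative only: psycopg2 builds against the host's libpq headers and C
# toolchain, so this "declarative" file resolves differently on each machine.
cat > requirements.txt <<'EOF'
psycopg2==2.9.5
EOF

# Succeeds on a machine with gcc and libpq development headers installed;
# fails with a compiler error on a machine without them.
pip install -r requirements.txt
```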
So, is it impossible to statelessly automate a complete local environment? No.
Configure an environment
We’ve recently seen significant progress in the stateless configurability of those chaotic middle layers, and developer workflows are evolving in response. Let’s evaluate the current configurability of each of the software-defined layers of local environments:
Personal settings
Underneath an IDE, the highest-level layer of configuration is personalization. This consists of configuration files that override default settings in apps and tools, such as for…
- IDEs: settings.json, options/*.xml, .vimrc, .editorconfig, etc.
- Terminals: .bashrc, .zshrc, .tmux.conf, etc.
- Other: .gitconfig, .psqlrc, etc.
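As a tiny, purely illustrative example, a personal ~/.bashrc might contain a few lines like these; no two developers’ versions should look alike:

```bash
# Personal shell preferences (illustrative): they belong to one developer,
# not to any project, and should follow that developer from machine to machine.
export EDITOR=vim        # preferred editor for CLI tools that respect $EDITOR
export HISTSIZE=100000   # keep a long shell history
alias gs='git status'    # a shortcut built up by habit
```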
Fun fact: The reason that many of these file names start with a period (and are therefore known as “dotfiles”) is the accidental invention of “hidden files” in early Unix, back in the 1970s.
IDEs depend on personalization for shortcuts, keybindings, accessibility, layout and aesthetics. These files should always be different from one developer to another, as everyone has their own unique needs and habits. So how can we automatically manage personalization across devices and environments without treating developers homogeneously?
The most popular answer is to use a “dotfile manager” like chezmoi, Dotbot, yadm, or others. Dotfile managers each have different features, but they all automate the process of updating personalization files to achieve a desired, version-controlled state. Some even use declarative configuration files!
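As a rough sketch of that workflow with chezmoi (assuming chezmoi is installed and you keep a dotfiles repository on a Git host; the URL below is a placeholder):

```bash
# Point chezmoi at your version-controlled dotfiles repository (placeholder URL).
chezmoi init https://github.com/<you>/dotfiles.git

# Start tracking existing personalization files.
chezmoi add ~/.bashrc ~/.gitconfig

# Preview pending changes, then converge this machine to the desired state.
chezmoi diff
chezmoi apply
```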
It seems that “personalization as code” is a solved problem. Although dotfile managers aren’t quite mainstream yet, using one is highly recommended for professional developers who work on multiple devices or accounts, for it will save a lot of time in setting up, context switching between, and debugging those environments.
Project libraries
From Java to Rust, almost every software project imports third-party libraries, and this layer is already managed “as code”. A declarative configuration file for library dependencies is usually included with project source code in version control:
- pom.xml for Java with Maven
- requirements.txt for Python
- package.json for Node.js
- Cargo.toml for Rust
The list goes on, with an industry-standard protocol for almost every programming language.
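Whatever the language, the day-to-day workflow looks roughly the same; here is an illustrative Node.js sketch (the project and dependency below are placeholders):

```bash
# The declarative dependency file lives in version control alongside the code...
cat > package.json <<'EOF'
{
  "name": "example-project",
  "private": true,
  "dependencies": {
    "express": "4.18.2"
  }
}
EOF

# ...so any collaborator or CI job can recreate the same library set.
npm install
```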
Fun fact: For years, the major exceptions to this trend were C and C++, due to their unique integration with system packages (see next section) and (presumably) the inertia of legacy workflows. However, today both Conan and vcpkg seem to be popular and growing.
IDEs often depend on these libraries to format, analyze and execute source code, so it’s a boon to developer experience that these configuration files are distributed within source code. In fact, “library dependencies as code” has already been solved so completely for so long that…
- almost every project starts statelessly managing its library dependencies on day one.
- almost every programming language is expected to have a library management protocol.
- it’s an unused phrase, since it just seems like common sense.
- it’s taken for granted by most developers until someone calls attention to its omnipresence when talking about other-stuff-as-code.
System packages
Unlike project-specific libraries, system-wide (and per-user) packages are shared across many or all of a developer’s projects and are tailored to each OS. These include:
- compilers and interpreters, like gcc and python
- clients and servers, like curl and apache
- terminal emulators and GUIs, like xterm and gnome
In addition to being packages themselves, IDEs depend on other packages to render views, connect to services, and analyze/execute code. Each project depends on a unique set of packages, so each developer installs a unique collection of these packages on their OS. However, usually only one version of a package can be installed at a time, and sometimes different packages are incompatible or compete with each other. This leads to frustrating dependency mismatches between (and conflicts within) environments, so is there a way to automatically manage a project’s package dependencies across devices and environments without breaking another project’s package dependencies?
For the past few decades, a stateful solution has been popular: using a package manager like APT, pacman, Homebrew or Chocolatey to install specific package versions one at a time. These tools enable powerful setup scripts, which are more reliable and maintainable when paired with the sterility and ephemerality of containerization (more on that in the next section). Unfortunately, without containerization, a stateful configuration is inherently less scalable than a stateless configuration, which can detect at any point in the future whether it has become invalid or conflicted and then efficiently repair itself; just ask anyone who chooses Terraform’s approach to Infrastructure as Code for exactly this reason!
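A typical setup script of that kind looks something like the sketch below (the package list is illustrative); it works today, but re-running it months later on a machine that has drifted may yield different versions or new conflicts:

```bash
# Imperative, stateful environment setup: each run mutates the machine, and
# the outcome depends on upstream repositories and whatever is already installed.
sudo apt-get update
sudo apt-get install -y git curl build-essential postgresql-client
```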
That’s why it’s so exciting to see Nix slowly becoming more popular, because Nix is a stateless solution to the same problem. By installing one or more immutable versions of each individual package and explicitly translating package dependencies (both direct and indirect) into a DAG, Nix can future-proof environments against most forms of bit rot and greatly simplify collaboration between developers who use different OSes. Any developer experiencing OS-level dependency hell should definitely install Nix and see if nix-shell brightens their whole week.
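For example, a project can commit a small shell.nix file (the package names below are illustrative) so that nix-shell drops every contributor into the same toolset, regardless of what else is installed on their machine:

```bash
# Declarative, version-controllable definition of the project's system packages.
cat > shell.nix <<'EOF'
{ pkgs ? import <nixpkgs> {} }:
pkgs.mkShell {
  buildInputs = [ pkgs.python3 pkgs.nodejs pkgs.postgresql ];
}
EOF

# Enter an isolated shell containing exactly those packages.
nix-shell
```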
Operating systems
At the bottom of every developer’s software stack is their operating system (OS), which includes various subsystems:
- application platform
- file management
- drivers
- resource (processors, memory, etc.) management
It’s impossible to statelessly configure the OS that’s installed on most developers’ computers, so the most popular approach to managing this layer is to instead use containerized guest OSes to host developer environments. For our purposes, “guest OSes” means that environments are isolated from one another, and “containerized” means that each environment’s specification can be version-controlled.
Fun fact: Actually, a couple of stateless OSes do exist, but they’re just not (yet) popular. NixOS is a stateless OS managed entirely by the Nix package manager, and Guix System is a libre copycat of NixOS. Adventurous developers might fall in love with NixOS, but it might not (yet) support all of their development tools out of the box, largely due to its radically different design.
Docker has remained the leading platform for specifying, building, and running containers ever since it popularized most of the concepts and jargon around containers. Although its build steps are specified imperatively (and therefore statefully), a built container image is an immutable artifact that can be referenced in declarative configuration files.
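A minimal sketch of that split (image name and packages are illustrative): the build is imperative, but the resulting tagged image is an immutable artifact that other configuration can reference declaratively:

```bash
# Imperative build recipe for a guest-OS environment image.
cat > Dockerfile <<'EOF'
FROM ubuntu:22.04
RUN apt-get update && apt-get install -y curl git ca-certificates
EOF

# Build once and tag it; the tag can then be referenced from declarative config.
docker build -t my-team/dev-env:1.0.0 .
docker run --rm -it my-team/dev-env:1.0.0 bash
```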
Recommended approach
While it’s ideal to have stateless automation for every layer, you can incrementally upgrade your environment by introducing new automation tools individually. Nix alone is a powerful addition to projects, and chezmoi alone is a friendly assistant for developers.
On the other hand, if you want to jump straight to a fully-automated local environment, the best way to set it up and maintain it is this (a sketch of the commands follows the list):
- when building an environment
  - statefully configure a guest OS with Docker
  - statefully configure some system packages with your OS’s package manager
- when running an environment
  - statelessly configure a container with a built container image
  - statelessly configure most system packages with Nix
  - statelessly configure project libraries with your language’s tool of choice
  - statelessly configure personal settings with chezmoi
Depending on your use case, you may want to move some or all of those stateless configuration steps into the build process, to cache their results in container image layers.
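Here is one hedged way those layers fit together in practice (it assumes the base image already contains Nix, pip and chezmoi; image names, repository URLs and file names are all placeholders for your own):

```bash
# Build time (stateful): bake a guest OS and baseline system packages into an image.
docker build -t my-team/dev-base:1.0.0 .

# Run time (stateless): start a container from that immutable image,
# mounting the project source from the host.
docker run --rm -it -v "$PWD":/workspace -w /workspace my-team/dev-base:1.0.0 bash

# Inside the container, converge each remaining layer to its declared state:
nix-shell                                                    # system packages
pip install -r requirements.txt                              # project libraries
chezmoi init --apply https://github.com/<you>/dotfiles.git   # personal settings
```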
Run an IDE
With all of the above tools at our disposal, we have a variety of ways to reliably and reproducibly configure a local environment. This environment can be a minimal and performant daily driver that is isolated from personal files, messaging apps, other environments, and Docker itself. However, it’s not immediately obvious how to interact with an IDE that’s running inside a container, which is a necessary next step (unless you chose NixOS, which doesn’t require containers).
A locally-installed IDE can be used locally, since the IDE acts as its own client, but most containerized IDEs require using an IDE client that is separate from the IDE server, in order to communicate between the OSes. Here are some notable IDE clients, including both web-based (“online IDE”) and native (“hybrid IDE”):
VS Code
VS Code is a popular IDE, and it was the first to provide both hybrid and online solutions.
Remote Development is a set of first-party VS Code extensions that, among other things, allow an existing VS Code installation to be used as a hybrid IDE, connected to a VS Code server running somewhere. This is an easy solution to set up, but it does require the user to either configure a VS Code server in their container or use the containers extension to host the whole developer environment. This is a tool for solo developers who want to use their local VS Code installation as their IDE client.
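With the containers extension, the environment itself is declared in a devcontainer.json file checked into the project. Here is a minimal sketch (the image name and extension ID are illustrative, and the file’s schema has evolved over time, so check the current reference before copying it):

```bash
# A tiny Dev Containers configuration, committed alongside the source code.
mkdir -p .devcontainer
cat > .devcontainer/devcontainer.json <<'EOF'
{
  "name": "example-dev-env",
  "image": "my-team/dev-env:1.0.0",
  "extensions": ["ms-python.python"]
}
EOF

# Opening the folder in VS Code then offers to "Reopen in Container".
```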
Alternatively, code-server is a third-party VS Code web server that renders the VS Code client in a web app as an online IDE, connected to a VS Code server. This is a flexible online solution that requires advanced website hosting skills, but it can’t support extensions from Microsoft’s marketplace. This is a tool for solo developers who want to use VS Code non-locally.
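A hedged sketch of self-hosting it with Docker (codercom/code-server is the project’s published image at the time of writing; verify current tags, flags and authentication options in its documentation):

```bash
# Serve the VS Code web client from a container, with the project mounted in.
docker run -it --rm \
  -p 127.0.0.1:8080:8080 \
  -v "$PWD":/home/coder/project \
  codercom/code-server:latest

# Then browse to http://localhost:8080; the generated password is written to the
# container's code-server config file (~/.config/code-server/config.yaml).
```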
JetBrains IDEs
Projector is a first-party containerization solution for the JetBrains suite of IDEs: PyCharm, IntelliJ IDEA, PhpStorm, etc. This allows you to run any JetBrains IDE as an online IDE, but the UX is inconsistent with their native apps because Projector implements an HTML5 connector for the IDE’s Swing-based GUIs.
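A hedged sketch of trying it out via the Python-based installer (package and command names as documented at the time of writing; Projector’s availability may have changed since):

```bash
# Install the Projector CLI, then pick and serve a JetBrains IDE in the browser.
pip3 install projector-installer --user
projector install   # interactively choose an IDE, e.g. PyCharm
projector run       # serves the chosen IDE configuration as a web app
```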
Other IDEs
selkies-gstreamer is a DIY containerization solution for running any Linux-compatible IDE/GUI in a container, with a browser-based interface. We’ll discuss the broader Selkies platform more in a later section, since it’s a unique solution, but this standalone component is a powerful tool for solo developers who want to run a specific IDE/GUI that doesn’t have its own containerization tool or web frontend.
Scaling problems
These solutions all require a little Docker tinkering, but they’re great for solo developers. However, deploying and scaling them to serve a medium-sized development team requires specialized skills in container orchestration and networking. It’s a nontrivial technical challenge to configure a performant Kubernetes cluster for per-user app streaming sessions and maintain it at scale, and distributed/remote teams may require multiple clusters on multiple continents, which adds extra dimensions of maintenance complexity.
Scale to serve your team
We at itopia thought that kind of automation challenge sounded like it would be more fun for our remote workstations team than it would be for the average development team, because most teams have higher priorities than maintaining a workstation cluster. itopia has been managing remote enterprise workstations for almost a decade, so we have some insights into how large teams adopt, scale and maintain productivity-focused environments.
Designed with large teams in mind
Large teams have the following preferences regarding remote workstations:
- Portability: Browser-based IDE clients are more convenient than OS-native IDE clients.
- Performance: Web apps should do all heavy computation and networking in the cloud, rather than in the browser, to prevent resource bottlenecks.
- Flexibility: All IDEs should be supported with identical UX to local installations, to minimize transition costs and maximize developer efficiency.
- Security: High-security endpoints are essential for protecting intellectual property.
- Maintenance: A fully-managed service is usually more pragmatic for enterprises than a self-hosted one, due to the nuanced upkeep required for self-hosting.
- Transparency: Open source tech is more flexible, secure, and auditable than closed source tech, which matters to CTOs and sysadmins alike.
We were surprised to discover that no solution existed to satisfy all of the above preferences, which meant that teams had to compromise in some way. Both browser-rendered and native IDE clients, which together covered most solutions, required streaming source code to all developer devices, with little protection against IP exfiltration. Opinionated services like GitHub Codespaces are inflexible in their support for only one IDE and version control system. Some solutions offer self-hosting as a security and flexibility upgrade, but that adds maintenance labor.
itopia Spaces checks all the right boxes for teams and enterprises; here’s a free test drive to prove it. See our open source catalog of itopia-managed images for both ideas of what’s possible and starting points for your uniquely-customized images.
The core technology is public, so also check out Selkies, the open source cloud-native streaming platform we’ve created in partnership with Google. If you like what you see today, join our community and let us know what you want to see tomorrow! We love seeing people building on Selkies, and together we hope to cultivate a high-quality new standard for remote app streaming.
It’s satisfying to see how much recent progress has been made in improving developer environments, and it’s exciting to think what the process of coding might look like in a few years. At the very least, however, developers will be able to reliably configure their environments to suit their needs without spending a week manually typing commands and editing files any time a project – or the angle of the cup of water next to their keyboard – changes.