Project post originally published on the Helm blog by Adam Korczynski, David Korczynski, and Martin Hickey
In the past year, the team at Ada Logics has worked on integrating continuous fuzzing into the Helm core project. This effort focused on improving Helm's security posture and ensuring a continued good experience for Helm users. The fuzzing integration involved enrolling Helm in the OSS-Fuzz project and writing a set of fuzzers that further enrich Helm's test coverage. In total, 38 fuzzers were written and nine bugs were found (with eight fixed so far), demonstrating the short- and long-term value of the work for Helm. All fuzzers were implemented with go-fuzz and are run daily by OSS-Fuzz against the latest Helm commit, ensuring Helm is continuously fuzz tested. The full report of the engagement can be found here.
Helm is described as the Kubernetes package manager. It helps simplify finding, sharing, and using software built for Kubernetes. Helm began as what is now known as Helm Classic, a Deis project begun in 2015 and introduced at the inaugural KubeCon. In January of 2016, the project merged with a GCS tool called Kubernetes Deployment Manager, and the project was moved under Kubernetes. Helm was promoted from a Kubernetes subproject to a full-fledged CNCF project in June 2018. Helm graduated as a CNCF project in April 2020. The CNCF annual survey of 2022 found that around 90% of companies are either using or evaluating Kubernetes, and Helm's performance and security are important for the continued business operations of these users.
What is fuzzing?
Fuzzing is a technique for testing software for bugs and vulnerabilities by passing it pseudo-random data. The key idea is to write a fuzzing harness, similar to a unit or integration test, that executes the application under test with some arbitrary input. The fuzzing engine that runs the harness uses mutational algorithms to generate new inputs (also called "testcases") that cause the code under test to execute in new ways, i.e., inputs that trigger new code execution paths. The goal is then to observe whether the code under test misbehaves on any of the generated inputs. Fuzzing has been effective at uncovering reliability bugs and vulnerabilities in software for more than two decades, and open source software is increasingly adopting the technique.
Helm fuzzing overview
In this engagement, the goal was to write a set of fuzzers covering a large part of the Helm codebase and to integrate the setup into the open source fuzzing service OSS-Fuzz. OSS-Fuzz is a free service offered by Google that runs the fuzzers of critical open source projects continuously and reports any crashes. Continuous analysis is important because fuzzing relies on genetic algorithms, meaning the fuzzers improve over time, and OSS-Fuzz runs the fuzzers daily indefinitely. Continuous analysis is also crucial for catching regressions.
Helm is written in the Go programming language, which makes it safe from memory-corruption issues. Fuzzing Go code instead finds runtime panics such as slice/index out of range, nil-pointer dereferences, and invalid type assertions, as well as timeouts and out-of-memory conditions. At the end of this engagement, nine issues had been found, all but one of which were fixed. Refer to the report for a detailed breakdown of the issues.
At the end of this engagement, the fuzzers provide significant coverage of the Helm project, including critical parts such as chart handling, release storage, and repositories. To write these fuzzers, Ada Logics used go-fuzz-headers to deterministically create pseudo-random structs from the data provided by libFuzzer.
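The core idea behind go-fuzz-headers is to consume the fuzzer's raw byte stream deterministically and use it to populate Go structs, so the same input bytes always produce the same struct. The following is a simplified, stdlib-only sketch of that idea; the `consumer` type, `chartMeta` struct, and helper functions here are illustrative inventions, not the actual API of github.com/AdaLogics/go-fuzz-headers:

```go
package main

import (
	"encoding/binary"
	"errors"
	"fmt"
)

// chartMeta is an illustrative struct a fuzzer might want to populate.
type chartMeta struct {
	Name    string
	Version uint32
}

// consumer slices typed fields out of a fuzzer-provided byte stream.
type consumer struct {
	data []byte
	pos  int
}

var errShortInput = errors.New("not enough fuzz data")

// getUint32 reads the next 4 bytes as a little-endian uint32.
func (c *consumer) getUint32() (uint32, error) {
	if c.pos+4 > len(c.data) {
		return 0, errShortInput
	}
	v := binary.LittleEndian.Uint32(c.data[c.pos:])
	c.pos += 4
	return v, nil
}

// getString reads a one-byte length prefix, then that many bytes.
func (c *consumer) getString() (string, error) {
	if c.pos >= len(c.data) {
		return "", errShortInput
	}
	n := int(c.data[c.pos])
	c.pos++
	if c.pos+n > len(c.data) {
		return "", errShortInput
	}
	s := string(c.data[c.pos : c.pos+n])
	c.pos += n
	return s, nil
}

// fillChartMeta builds a struct from the byte stream; identical bytes
// always yield an identical struct, keeping crashes reproducible.
func fillChartMeta(data []byte) (chartMeta, error) {
	c := &consumer{data: data}
	name, err := c.getString()
	if err != nil {
		return chartMeta{}, err
	}
	ver, err := c.getUint32()
	if err != nil {
		return chartMeta{}, err
	}
	return chartMeta{Name: name, Version: ver}, nil
}

func main() {
	// Length-prefixed "nginx", then 4 bytes encoding version 3.
	data := []byte{5, 'n', 'g', 'i', 'n', 'x', 3, 0, 0, 0}
	m, err := fillChartMeta(data)
	fmt.Println(m, err)
}
```

Determinism matters here because OSS-Fuzz minimizes and replays crashing inputs; if the struct derived from a given byte sequence varied between runs, crashes could not be reliably reproduced.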
Closing thoughts
The Helm team is thankful to CNCF for providing the opportunity to work with Ada Logics on developing new fuzzers for Helm. The CNCF takes security seriously and previously funded two third-party security audits for the Helm project: one covering the source code for the Helm client along with the process Helm uses to handle security, and one covering the source code for the Helm client along with a threat model for the use of Helm. We want to thank the following Helm maintainers who participated in this endeavour and provided fixes where required: Matt Butcher, Martin Hickey, Matt Farina and Scott Rigby. We would also like to thank the Flux maintainers, especially Paulo Gomes, for collaborating on an issue. The fuzzing findings and fixes are valuable additions to the conclusions of the previous security audits. The Helm project has efficient test suites, and code changes are backed by tests, but the newly developed fuzzers and their findings have provided significant value to the project. Only nine issues were found during the fuzzing, which revalidated the high quality of the Helm code. The Helm team can now maintain the newly developed fuzzers and build on them to continue improving code quality and security.