Guest post originally published on the Kong blog by Cody De Arkland, Kong
When setting up Kubernetes for the first time, one of the networking challenges you might face is how to safely grant outside clients access to your cluster. By default, pods within a cluster can communicate freely with all other pods and services; access from anything outside of that group should be restricted.
In this post, we’ll take a closer look at how to introduce a process for monitoring and observing Kubernetes traffic using Kuma, a modern distributed control plane with a bundled Envoy Proxy integration.
Setting Up a Kuma Service Mesh
Application stacks that run as individual containers need to communicate with one another and with outside clients. The concept of a service mesh emerged to coordinate all the requirements necessary to support such platforms, including security, routing and load balancing. The goal of a service mesh is to provide seamless management of any service on the network. Thus, while an ingress controller handles the behavior of incoming traffic, a service mesh is responsible for overseeing all aspects of service-to-service networking, such as monitoring, security and configuration.
Kuma is one example of a service mesh. It’s an open source project that works across various environments, including Kubernetes and virtual machines, and supports multi-zone deployments. Kuma is supported by the same team that built Kong, a popular API gateway that simplifies network communication.
In addition to providing fine-grained traffic control capabilities, Kuma also offers built-in metrics and observability. Being able to secure your network access is only part of the solution. Since Kuma integrates with Prometheus for native data collection and Grafana for charting and viewing that data, you’ll be able to see precisely how your load balancing and client routing are behaving.
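As a preview of what that integration looks like in practice, once the control plane is installed (covered next), the bundled metrics stack can be deployed and a Prometheus backend enabled on the mesh with something like the sketch below. The backend name prometheus-1 is just a placeholder, and the commands assume a recent Kuma release.
# Install the bundled Prometheus and Grafana stack (sketch; assumes the
# control plane from the next section is already running)
kumactl install metrics | kubectl apply -f -
# Enable a Prometheus metrics backend on the default mesh
cat <<EOF | kubectl apply -f -
apiVersion: kuma.io/v1alpha1
kind: Mesh
metadata:
  name: default
spec:
  metrics:
    enabledBackend: prometheus-1
    backends:
      - name: prometheus-1
        type: prometheus
EOF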
Installing Kuma is a snap. First, you can download and run the installer like so:
curl -L https://kuma.io/installer.sh | sh -
Then, switch to the installation directory:
cd kuma-1.1.2/bin
From here, you can run Kuma in multi-zone mode or, if Kuma will live in a single Kubernetes cluster, in standalone mode. The command below deploys Kuma in the default standalone (single-zone) configuration:
./kumactl install control-plane | kubectl apply -f -
For other environments, check out the docs on deployment.
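Before moving on, it’s worth confirming that the control plane came up cleanly. A quick check of the pods in the kuma-system namespace is usually enough:
kubectl get pods -n kuma-system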
There are several ways to interact with Kuma:
- The GUI, which is read-only
- kubectl, for write/edit access
- The HTTP API (note that in a Kubernetes deployment, the API is also read-only, and interactions via kubectl are the correct process)
To access the GUI, you’ll first need to forward the API service port:
kubectl port-forward svc/kuma-control-plane -n kuma-system 5681:5681
After that, you can navigate to http://127.0.0.1:5681/gui.
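If you’d rather script against the read-only API than click through the GUI, the same port-forward exposes it. For example, listing the meshes the control plane knows about:
curl http://127.0.0.1:5681/meshes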
CNI Compatibility
Before continuing, it’s important to introduce a minor point about configuration, which has major implications.
Kubernetes uses the Container Network Interface (CNI) standard to configure networking for containers. This means that any CNI-compatible tool, no matter how it is designed, can rely on the same set of protocols. Kubernetes also provides a NetworkPolicy API for setting and managing network policies, which CNI plugins are responsible for enforcing. Multiple CNI-based projects have sprung up in response to enterprise-grade security and ease-of-use requirements. For example, one such project is Calico.
Depending on your needs, opting for a more customizable service mesh like Kuma can help you achieve your specific goals. For example, although Calico adheres to the Network Policies Kubernetes provides, its format for setting up traffic rules is more opaque than Kuma's. Kuma provides a way of configuring network policies that runs parallel to the first-class API Kubernetes provides. It should come as no surprise that Kuma is also compatible with CNI. This means you can easily swap out any network policies defined by Calico, or any other project that uses a CNI-based protocol, for Kuma's traffic rules. The main differentiator between such projects comes down to features. Kuma, for example, can act as a service mesh, an observability platform and a network policy manager all in one. Other projects may have different priorities, and it is the developer's responsibility to make sure they can all interact with one another properly.
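If you want Kuma to hook in at the CNI layer rather than injecting init containers into your pods, the control plane can be installed with CNI support. This is a sketch that assumes the --cni-enabled flag available on kumactl install control-plane in recent Kuma releases:
./kumactl install control-plane --cni-enabled | kubectl apply -f -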
Architecting Traffic Policies in Kubernetes with Kuma
With Kuma set up and running on Kubernetes, let’s see how to establish traffic rules to manage incoming access.
Imagine the following scenario: an eCommerce platform that relies on two microservices that communicate to meet the business’s needs—let’s call them services backend1 and backend2. A third microservice acts as a public API, and any incoming request to this service privately queries the other two. We’d like to expose the API to the public but keep the other two microservices isolated from external networks.
The native Kubernetes way to do this is to set up a NetworkPolicy for each backend. However, Kuma drastically simplifies this process with an easy-to-understand YAML DSL: you define Traffic Permission policies that explicitly identify which source services are allowed to communicate with which destination services.
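For reference, a native NetworkPolicy covering just the publicAPI-to-backend1 path might look roughly like the sketch below; the pod labels and namespace are illustrative assumptions, and a second, nearly identical policy would be needed for backend2.
cat <<EOF | kubectl apply -f -
# Sketch only: allow ingress to backend1 pods from publicAPI pods
# (the app labels and the default namespace are assumptions)
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: api-to-backend1
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: backend1
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: publicAPI
EOF
Here is the same intent expressed once as a Kuma Traffic Permission policy: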
cat <<EOF | kubectl apply -f -
apiVersion: kuma.io/v1alpha1
kind: TrafficPermission
mesh: default
metadata:
  name: api-to-backends
spec:
  sources:
    - match:
        service: 'publicAPI'
  destinations:
    - match:
        service: 'backend1'
    - match:
        service: 'backend2'
EOF
In this manifest, the Traffic Permission policy gives the publicAPI service permission to send traffic to backend1 and backend2. Traffic from any other source will be rejected.
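Because reads are always allowed through the API, you can confirm the policy was picked up with kumactl (with the port-forward from earlier still active):
kumactl get traffic-permissions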
Traffic Permission is just one of the policies that Kuma provides. Among other features, you can also set up a Health Check policy to keep track of the health of every data plane proxy. This, too, makes use of familiar source and destination matches:
cat <<EOF | kubectl apply -f -
apiVersion: kuma.io/v1alpha1
kind: HealthCheck
mesh: default
metadata:
  name: web-to-backend-check
spec:
  sources:
    - match:
        service: 'publicAPI'
  destinations:
    - match:
        service: 'backend1'
    - match:
        service: 'backend2'
  conf:
    interval: 10s
    timeout: 2s
    unhealthyThreshold: 3
    healthyThreshold: 1
    tcp:
      send: Zm9v
      receive:
        - YmFy
        - YmF6
EOF
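Once the health check is in place, the status it reports shows up per data plane proxy. A quick way to see it is to inspect the data planes through kumactl:
kumactl inspect dataplanes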
One Control Plane for Security, Observability and Routing
The goal of any service mesh is to provide a single location to configure how your network behaves across your entire cluster. A service mesh can simplify much of the communication across disparate services. It’s often better to opt for a more restrictive network security posture than one that is open to any connection. Implementing a zero-trust security policy with Kuma is a first-class feature, not an afterthought.
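In Kuma, that zero-trust posture typically starts with enabling mutual TLS on the mesh, since Traffic Permission policies are enforced once mTLS is on. A minimal sketch using the builtin certificate authority might look like this (the backend name ca-1 is arbitrary):
cat <<EOF | kubectl apply -f -
apiVersion: kuma.io/v1alpha1
kind: Mesh
metadata:
  name: default
spec:
  # add this alongside any metrics backends you may have enabled on the mesh
  mtls:
    enabledBackend: ca-1
    backends:
      - name: ca-1
        type: builtin
EOF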
If you’d like to learn more about proper access configuration, you can check out the Kubernetes documentation on controlling access or its best practices on pod security. Kuma’s secure access patterns also provide some guidelines on how to define commonly required networking policies.
I hope you found this information on traffic policies in Kubernetes helpful. Get in touch via the Kuma community or learn more about other ways you can leverage Kuma for your connectivity needs with these resources: