Guest post originally published on Buoyant’s blog by Zahari Dichev
Applying L4 network policies with a service mesh
In this tutorial, you’ll learn how to run Linkerd and Cilium together and how to use Cilium to apply L3 and L4 network policies to a cluster running Linkerd.
Linkerd is an ultralight, open source service mesh. Cilium is an open source CNI layer for Kubernetes. While there are several ways to combine these two projects, in this guide we’ll do something basic: we’ll use Cilium to enforce L3/L4 network policies on a Linkerd-enabled cluster.
What are Kubernetes network policies?
Kubernetes network policies are controls over which types of network traffic are allowed to happen within a Kubernetes cluster. You might put these in place for reasons of security, or simply as a safeguard against accidents.
The terms “L3” and “L4” refer to layers 3 and 4 of the OSI network model, i.e. to policies that can be expressed in terms of IP addresses (layer 3) and ports (layer 4). For example, “requests between 192.0.2.42:9376 and 192.0.2.43:80 are forbidden” is a layer 4 policy. In orchestrated environments like Kubernetes, policies about individual IP addresses are quite brittle, so these policies are typically expressed in terms of label selectors instead, e.g. “any pod with label app=egressok can send packets from port 80”. Under the hood, Cilium tracks the pod assignments that Kubernetes makes and translates these label selectors into IP addresses.
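To make this concrete, here is a minimal sketch of a label-based L3/L4 rule expressed as a plain Kubernetes NetworkPolicy (which Cilium also enforces). The labels and port below are hypothetical, purely for illustration:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend
spec:
  podSelector:
    matchLabels:
      app: backend         # which pods the policy protects (the "who" of L3)
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend # allowed source pods, selected by label
      ports:
        - protocol: TCP
          port: 80          # allowed destination port (the "where" of L4)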
L3 and L4 policies stand in contrast to L7 policies, which are expressed in terms of protocol-specific information. For example, “Pods with label env=prod are allowed to make HTTP GET requests to the /foo endpoint of pods with the label env=admin” is a layer 7 policy, because it requires parsing the protocol sent over the wire.
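For reference only, an L7 rule expressing that example would look roughly like the sketch below in Cilium's policy language (the name and labels are made up, and we won't apply anything like it in this tutorial, for the reasons explained next):
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: "l7-example"
spec:
  endpointSelector:
    matchLabels:
      env: admin
  ingress:
    - fromEndpoints:
        - matchLabels:
            env: prod
      toPorts:
        - ports:
            - port: "80"
              protocol: TCP
          rules:
            http:            # parsing HTTP is what makes this a layer 7 rule
              - method: "GET"
                path: "/foo"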
Unfortunately, Linkerd itself doesn’t support L7 policies yet, and while Cilium does, that implementation doesn’t play nicely with Linkerd. In a future release (probably 2.11) Linkerd itself will support L7 policies, and hopefully this gap between Cilium and Linkerd will be fixed.
For now, we’re going to restrict ourselves to L3 / L4 policies. Let’s see how this works in practice.
Getting ready
What you’ll need to follow along:
- Kind, which we’ll use as a Kubernetes sandbox environment
- A modern Linkerd release
- A modern Cilium release
Creating the test cluster
As a first step, we’ll configure our kind cluster via a configuration file. Make sure you disable the default CNI, since we’ll be replacing it with Cilium:
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
networking:
  disableDefaultCNI: true
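Assuming you save the configuration above as kind-config.yaml (the filename and cluster name below are just examples), you can create the cluster with:
# Create a kind cluster without the default CNI
kind create cluster --config kind-config.yaml --name cilium-linkerd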
Installing Cilium
Now that we’ve got the cluster up and running, let’s install Cilium. Here are the steps:
# Add the Cilium Helm repo
helm repo add cilium https://helm.cilium.io/
# Pull the Cilium image and preload it onto the kind nodes
docker pull cilium/cilium:v1.9.0
kind load docker-image cilium/cilium:v1.9.0
# Install Cilium via Helm
helm install cilium cilium/cilium --version 1.9.0 \
--namespace kube-system \
--set nodeinit.enabled=true \
--set kubeProxyReplacement=partial \
--set hostServices.enabled=false \
--set externalIPs.enabled=true \
--set nodePort.enabled=true \
--set hostPort.enabled=true \
--set bpf.masquerade=false \
--set image.pullPolicy=IfNotPresent \
--set ipam.mode=kubernetes
To monitor the progress of the installation, use kubectl -n kube-system get pods --watch.
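Once the pods are ready, one optional extra sanity check is to ask a Cilium agent for its status report directly:
# Query one of the Cilium agents; look for "OK" health indicators in the output
kubectl -n kube-system exec ds/cilium -- cilium status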
Installing the sample workloads
To showcase what Cilium can do, we’ll use Podinfo and Buoyant’s slow_cooker load generator to simulate a client issuing requests to a backend API service:
# Create the test namespace
kubectl create ns cilium-linkerd
# Install podinfo
kubectl apply -k github.com/stefanprodan/podinfo//kustomize -n cilium-linkerd
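Optionally, wait for the Podinfo deployment to finish rolling out before moving on (the deployment created by the kustomize manifests is named podinfo):
# Block until the podinfo pods are ready
kubectl -n cilium-linkerd rollout status deploy/podinfo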
Now that we have a server, it’s time to install the client:
cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: client
  namespace: cilium-linkerd
spec:
  replicas: 1
  selector:
    matchLabels:
      app: client
  template:
    metadata:
      labels:
        app: client
    spec:
      containers:
      - name: client
        image: buoyantio/slow_cooker:1.3.0
        command:
        - "/bin/sh"
        args:
        - "-c"
        - |
          sleep 5 # wait for pods to start
          cat <<EOT >> url_list
          http://podinfo:9898/env
          http://podinfo:9898/version
          http://podinfo:9898/env
          http://podinfo:9898/metrics
          http://podinfo:9898/healthz
          http://podinfo:9898/readyz
          http://podinfo:9898/headers
          EOT
          /slow_cooker/slow_cooker @url_list
        ports:
        - containerPort: 9999
EOF
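To confirm that slow_cooker is actually generating load, you can peek at the client’s logs; slow_cooker prints periodic request and latency summaries:
# Show the most recent slow_cooker summary lines
kubectl -n cilium-linkerd logs deploy/client --tail=5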
Applying ingress policies
The workloads are running now, so let’s go ahead and apply a Layer 4 ingress policy, based on labels, that restricts which packets can reach our Podinfo workload.
cat <<EOF | kubectl apply -f -
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: "ingress-policy"
  namespace: cilium-linkerd
specs:
  - endpointSelector:
      matchLabels:
        app: podinfo
    ingress:
      - fromEndpoints:
          - matchLabels:
              app: client
        toPorts:
          - ports:
              - port: "9898"
                protocol: TCP
  - endpointSelector:
      matchLabels:
        app: podinfo
    ingress:
      - fromEndpoints:
          - matchLabels:
              "k8s:io.kubernetes.pod.namespace": linkerd
EOF
This policy has two ingress rules that apply to workloads labeled app: podinfo:
- The server accepts traffic from workloads labeled app: client, and only on port 9898. All other ports are blocked.
- Workloads from the linkerd namespace can communicate with the server.
The second rule is essential for the correct operation of Linkerd. Much of its functionality, such as tap and top, relies on the control plane components connecting to the proxy sidecar running in the meshed workloads. If this connectivity is blocked by Cilium rules, some Linkerd features will not work as expected.
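If you’d like to see the ingress rule doing its job, try hitting Podinfo from a throwaway pod that doesn’t carry the app: client label; the request should time out. The pod name and curl image below are arbitrary choices:
# This pod has no app=client label, so the ingress policy should drop its traffic
kubectl -n cilium-linkerd run policy-test --rm -it --restart=Never \
  --image=curlimages/curl -- curl -sS -m 5 http://podinfo:9898/version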
Applying egress policies
Our Podinfo server now conforms to the network policies. To allow our client pod to communicate only with the Podinfo backend, we can use a Cilium egress policy:
cat <<EOF | kubectl apply -f -
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: "egress-policy"
  namespace: cilium-linkerd
specs:
  - endpointSelector:
      matchLabels:
        app: client
    egress:
      - toEndpoints:
          - matchLabels:
              "app": podinfo
  - endpointSelector:
      matchLabels:
        app: client
    egress:
      - toEndpoints:
          - matchLabels:
              "k8s:io.kubernetes.pod.namespace": kube-system
              "k8s:k8s-app": kube-dns
  - endpointSelector:
      matchLabels:
        app: client
    egress:
      - toEndpoints:
          - matchLabels:
              "k8s:io.kubernetes.pod.namespace": "linkerd"
EOF
This policy has three egress rules that apply to workloads labeled app: client:
- The client can initiate traffic to workloads labeled app: podinfo.
- The client can initiate traffic to kube-dns for DNS resolution.
- The client can send traffic to components in the linkerd namespace.
Here again, outgoing traffic to the Linkerd components is essential. The Linkerd proxy uses the identity and destination services to obtain TLS certificates and perform service discovery. If this connectivity is blocked, the proxy will not work correctly, rendering the meshed workload unusable.
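At any point you can list the Cilium policies currently applied in the namespace, which is handy when debugging unexpected drops:
# Show the CiliumNetworkPolicy objects we've created so far
kubectl -n cilium-linkerd get ciliumnetworkpolicies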
Observing traffic with Linkerd
Now that our traffic is obeying ingress and egress policies, we can go ahead and install Linkerd following the installation guide. Ready? Then, let’s mesh our workloads:
kubectl get deploy -n cilium-linkerd -oyaml | linkerd inject - | kubectl apply -f -
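Before moving on, it’s worth confirming that the injected proxies came up cleanly alongside the Cilium policies; linkerd check can do this for a specific namespace:
# Verify the data-plane proxies in our test namespace are healthy
linkerd check --proxy --namespace cilium-linkerd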
Once workloads are meshed, we can see the requests being issued to Podinfo:
$ linkerd top deployment/podinfo --namespace cilium-linkerd
Source Destination Method Path Count Best Worst Last Success Rate
client-5c69b9d757-tzbng podinfo-5fc5cb5f59-j96td GET /env 2 531µs 847µs 847µs 100.00%
client-5c69b9d757-tzbng podinfo-5fc5cb5f59-cbt98 GET /metrics 2 2ms 2ms 2ms 100.00%
client-5c69b9d757-tzbng podinfo-5fc5cb5f59-cbt98 GET /env 1 507µs 507µs 507µs 100.00%
client-5c69b9d757-tzbng podinfo-5fc5cb5f59-j96td GET /healthz 1 664µs 664µs 664µs 100.00%
client-5c69b9d757-tzbng podinfo-5fc5cb5f59-j96td GET /readyz 1 469µs 469µs 469µs 100.00%
client-5c69b9d757-tzbng podinfo-5fc5cb5f59-j96td GET /headers 1 586µs 586µs 586µs 100.00%
client-5c69b9d757-tzbng podinfo-5fc5cb5f59-j96td GET /version 1 491µs 491µs 491µs 100.00%
Similarly, we can observe the live stream of all requests going out of the client workload:
$ linkerd tap deployment/client --namespace cilium-linkerd
req id=5:0 proxy=out src=10.244.0.215:54990 dst=10.244.0.211:9898 tls=true :method=GET :authority=podinfo:9898 :path=/env
rsp id=5:0 proxy=out src=10.244.0.215:54990 dst=10.244.0.211:9898 tls=true :status=200 latency=1069µs
end id=5:0 proxy=out src=10.244.0.215:54990 dst=10.244.0.211:9898 tls=true duration=30µs response-length=1008B
Note that the tls=true indicator shows that mTLS is applied to all traffic between the meshed workloads. To verify that the policies work, send an allowed request from the client:
$ kubectl exec deploy/client -n cilium-linkerd -- curl -s podinfo:9898
{
"hostname": "podinfo-5fc5cb5f59-j96td",
"version": "5.0.3",
"revision": "",
"color": "#34577c",
"logo": "https://raw.githubusercontent.com/stefanprodan/podinfo/gh-pages/cuddle_clap.gif",
"message": "greetings from podinfo v5.0.3",
"goos": "linux",
"goarch": "amd64",
"runtime": "go1.15.3",
"num_goroutine": "8",
"num_cpu": "4"
}
Reaching any destination other than the allowed ones is not possible:
# The request will simply hang...
$ kubectl exec deploy/client -n cilium-linkerd -- curl 'https://postman-echo.com/get?foo1=bar1&foo2=bar2'
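Because curl waits indefinitely by default, you may prefer to bound the attempt with a timeout so the command returns on its own; curl exits with code 28 when the timeout fires:
# Give up after 5 seconds instead of hanging forever
kubectl exec deploy/client -n cilium-linkerd -- curl -s -m 5 'https://postman-echo.com/get?foo1=bar1&foo2=bar2'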
Congrats! At this point you’ve successfully enforced L3/L4 policies using Cilium on a Linkerd-enabled cluster.
What’s next?
In this post, we’ve demonstrated how to use Cilium and Linkerd together and how to apply L3/L4 policies in a Linkerd-enabled cluster. Everything in this blog post can be used in production today. Once Linkerd adds L7 support in upcoming releases, we’ll be able to extend these same ideas to protocol-specific policies as well. Until then, go forth and make L3/L4 policies with Linkerd and Cilium!
Buoyant makes Linkerd awesome
Buoyant is the creator of Linkerd and of Buoyant Cloud, the fully automated, Linkerd-powered platform health dashboard for Kubernetes. Today, Buoyant helps companies around the world adopt Linkerd, and provides commercial support for Linkerd as well as training and services. If you’re interested in adopting Linkerd, don’t hesitate to reach out!