
Cilium ​

As I disabled the default CNI (Container Network Interface) plugin of k3s, I need to install one. Please welcome Cilium!

Cilium is an "all-in-one" networking component for my Kubernetes stack. Basically, when you install a Kubernetes cluster, you need a CNI plugin and a load balancer component such as MetalLB or Klipper-LB (the one installed by default in k3s).

The load balancer component is useful when you need an entrypoint into your Kubernetes cluster.

For a short presentation: Cilium uses eBPF to provide high-performance networking, security, and observability for Kubernetes. Since eBPF operates directly in the Linux kernel, it replaces traditional tools like iptables or IPVS. This makes the kube-proxy component unnecessary, as kube-proxy relies on iptables for traffic routing and load balancing.
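If you want to confirm that kube-proxy replacement is actually active, the Cilium agent reports it in its status output. A sketch (the subcommand name varies across Cilium versions: `cilium-dbg` in recent releases, plain `cilium` in older ones):

```shell
# Query a Cilium agent pod through its DaemonSet and filter the relevant line.
# Requires a running cluster with Cilium deployed in kube-system.
kubectl -n kube-system exec ds/cilium -- cilium-dbg status | grep KubeProxyReplacement
# A kube-proxy-free setup should report KubeProxyReplacement as enabled/True
```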

Cilium also integrates with Envoy Proxy to enable Layer 7 (L7) features, such as advanced HTTP/gRPC traffic routing, observability, and policy enforcement. Envoy complements Cilium's eBPF-based L3/L4 capabilities by providing application-layer intelligence and functionality.

Installation ​

The installation of Cilium requires the Helm chart located at https://helm.cilium.io/.
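For reference, a typical install looks like this (a sketch: the release name and namespace are common defaults, not requirements; and since the values below nest everything under a `cilium:` key, as in an umbrella chart, a direct `helm install` would take the nested values without that top-level key):

```shell
# Register the official Cilium chart repository and install it
helm repo add cilium https://helm.cilium.io/
helm repo update

helm install cilium cilium/cilium \
  --namespace kube-system \
  -f values.yaml
```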

Below is a minimal configuration for the values.yaml file:

yaml
cilium:
  ipam:
    operator:
      # See option --cluster-cidr: https://docs.k3s.io/cli/server#networking
      # -- The following value must match the CIDR configured in k3s.
      # -- Default k3s cluster CIDR: 10.42.0.0/16
      clusterPoolIPv4PodCIDRList: 10.42.0.0/16

  # Replace kube-proxy with Cilium
  kubeProxyReplacement: true

  # -- Enable the load balancer feature
  # Configure LoadBalancer IP Address Management (LB-IPAM) for Cilium.
  # It relies on the L2 announcement and/or BGP Control Plane features,
  # which are responsible for load balancing and/or advertisement of these IPs.
  # https://docs.cilium.io/en/stable/network/lb-ipam/
  # https://docs.cilium.io/en/stable/network/l2-announcements/#l2-announcements
  l2announcements:
    enabled: true
  bgpControlPlane:
    enabled: true

  # -- Enable ExternalIPs service support.
  externalIPs:
    enabled: true

Then, to make L2 announcements work, you need to create a CiliumL2AnnouncementPolicy.

yaml
# https://docs.cilium.io/en/stable/network/l2-announcements/#troubleshooting
# Ensure you have at least one policy configured, L2 announcements will not work without a policy.
apiVersion: "cilium.io/v2alpha1"
kind: CiliumL2AnnouncementPolicy
metadata:
  name: policy1
spec:
  interfaces:
  # Adjust depending on your Kubernetes nodes' interface names
  - ^eth[0-9]+
  # - ^eno[0-9]+
  externalIPs: true
  loadBalancerIPs: true
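The policy is a regular Kubernetes resource, so it can be applied and inspected like any other manifest (assuming it is saved in a file named `l2-policy.yaml`, a hypothetical name):

```shell
# Apply the L2 announcement policy
kubectl apply -f l2-policy.yaml

# List the policies to confirm at least one exists
kubectl get ciliuml2announcementpolicies.cilium.io
```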

Finally, to assign an external IP to a load balancer service, you need to specify an IP pool range by creating a CiliumLoadBalancerIPPool. In my case, since my local network CIDR is 192.168.1.0/24, I allocate the IP pool in the range from 192.168.1.20 to 192.168.1.30. So when I create a Service of type LoadBalancer, the external IP will be assigned from that range.

yaml
apiVersion: "cilium.io/v2alpha1"
kind: CiliumLoadBalancerIPPool
metadata:
  name: "default-pool"
spec:
  # reserve the first and last IPs of CIDRs for the gateway and broadcast
  allowFirstLastIPs: "No"
  blocks:
  - start: "192.168.1.20"
    stop: "192.168.1.30"
  # If you want to choose a different CIDR, you can specify it as follows. 
  # Be aware that your router should know how to route this network !
  # - cidr: "10.0.0.1/24"
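To verify that the pool works end to end, you can expose a throwaway Deployment as a LoadBalancer Service and check the assigned IP (a sketch; the `nginx-test` name is purely illustrative):

```shell
# Create a test deployment and expose it with a Service of type LoadBalancer
kubectl create deployment nginx-test --image=nginx
kubectl expose deployment nginx-test --port=80 --type=LoadBalancer

# The EXTERNAL-IP column should show an address from 192.168.1.20-192.168.1.30
kubectl get service nginx-test
```

Once done testing, `kubectl delete deployment,service nginx-test` cleans everything up.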

To learn more about how this external IP assignment works, check the official LB-IPAM documentation: https://docs.cilium.io/en/stable/network/lb-ipam/.

TIP

To see which Kubernetes node is handling the external IP, you can run the following command:

bash
kubectl -n kube-system get lease

INFO

One cool feature is that, since this external IP behaves like a VIP, it also supports failover if a Kubernetes node goes down. 🤯 Awesome!

And voilà!

Released under the MIT License.