
Kube-OVN and Cilium Integration

Integrating Kube-OVN with Cilium in Chaining Mode

Overview

Kube-OVN supports integration with Cilium, an eBPF-based networking and security component, using CNI Chaining mode. This integration combines Kube-OVN's rich network abstractions, such as subnet isolation and overlay networking, with Cilium's advanced monitoring, granular security policies, and application-layer observability. By leveraging the strengths of both solutions, this setup enhances performance, ensures robust security, and provides better multi-tenancy, making it ideal for complex Kubernetes workloads.


Steps to Integrate Kube-OVN with Cilium in Chaining Mode

Step 1: Create Namespace

  • Create a namespace kube-system

Step 2: Create Add-on with kube-ovn CNI

  • To integrate Kube-OVN with Cilium, first create an add-on for Kube-OVN in the kube-system namespace

⚠️ Important Note

Add the following labels to the Kube-OVN add-on:

  • Key: rafay.type and Value: cni
  • Key: rafay.cni.name and Value: kube-ovn

Then upload the Kube-OVN Helm chart and its values file, and update the following values in the Kube-OVN values file:

Enable_NP=false
CNI_CONFIG_priority=10
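For reference, here is a minimal sketch of how these two settings might appear in the Kube-OVN values file. The func.ENABLE_NP and cni_conf.CNI_CONFIG_PRIORITY keys are assumptions based on the upstream Kube-OVN chart layout; verify the exact key names and nesting against the chart version you upload.

func:
  ENABLE_NP: false            # let Cilium handle NetworkPolicy instead of Kube-OVN
cni_conf:
  CNI_CONFIG_PRIORITY: "10"   # names the generated CNI config 10-kube-ovn.conflist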

Step 3: Create Add-on with Chaining Yaml

  • Create another add-on for the chaining YAML by selecting the type K8s YAML and using the namespace kube-system


  • Upload the Chaining YAML file, make the required changes, and apply the updates


Here is the editable chaining YAML configuration:

apiVersion: v1
kind: ConfigMap
metadata:
  name: cni-configuration
  namespace: kube-system
data:
  cni-config: |-
    {
      "name": "generic-veth",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "kube-ovn",
          "server_socket": "/run/openvswitch/kube-ovn-daemon.sock",
          "ipam": {
              "type": "kube-ovn",
              "server_socket": "/run/openvswitch/kube-ovn-daemon.sock"
          }
        },
        {
          "type": "portmap",
          "snat": true,
          "capabilities": {"portMappings": true}
        },
        {
          "type": "cilium-cni"
        }
      ]
    }
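Once this add-on has been deployed to the cluster (as part of the blueprint in Step 5), a quick sanity check that the ConfigMap exists is:

kubectl -n kube-system get configmap cni-configuration -o yaml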

Step 4: Create Add-on with Cilium CNI

  • Create one more add-on for Cilium using the namespace kube-system
  • Upload the Cilium Helm chart and its values file

⚠️ Important Note

Cilium Values for Networking Setup

Update the following values in the Cilium values file and apply the changes

 cni.chainingMode=generic-veth \
 cni.customConf=true \
 cni.configMap=cni-configuration \
 routingMode=native \
 enableIPv4Masquerade=false \
 devices="eth+ ovn0 genev_sys_6081 vxlan_sys_4789" \
 enableIdentityMark=false
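The settings above are written in Helm --set form. If you keep everything in the Cilium values file instead, the equivalent YAML looks roughly like the sketch below; the key names mirror the flags above, but confirm the exact structure against your Cilium chart version (for example, devices may be accepted as a string or a list depending on the version).

cni:
  chainingMode: generic-veth
  customConf: true
  configMap: cni-configuration
routingMode: native
enableIPv4Masquerade: false
devices:
  - eth+
  - ovn0
  - genev_sys_6081
  - vxlan_sys_4789
enableIdentityMark: false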

Configuration Guide for Cilium CNI based on Versions:

For Cilium version 1.16.3 or later, use the following configuration in the values file:

k8sServiceHost: "auto"  
k8sServicePort: "6443"  

For older versions, use the following configuration in the values file:

k8sServiceHost: "k8master.service.consul"  
k8sServicePort: "6443"  

Step 5: Create Blueprint

  • Once the three add-ons are created, create a blueprint
  • Add all the add-ons to the Blueprint and deploy it to the cluster



Day 2 Operations

To integrate Kube-OVN with Cilium as a Day 2 operation, the blueprint-based Kube-OVN CNI must already be deployed in the provisioned cluster. Perform the following steps:

⚠️ Important Note

Kube-OVN Controller Arguments

Update the kube-ovn-controller Deployment with the arguments below using the command kubectl edit deploy kube-ovn-controller -n kube-system

args:
  - --enable-np=false
  - --cni-conf-name=10-kube-ovn.conflist

Below is an example illustrating how the args are edited:

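Only the relevant fields are reproduced in this trimmed sketch; the rest of the Deployment spec is left out.

spec:
  template:
    spec:
      containers:
        - name: kube-ovn-controller
          args:
            - --enable-np=false                      # hand NetworkPolicy enforcement over to Cilium
            - --cni-conf-name=10-kube-ovn.conflist   # align with the chained CNI config file name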

  • Once the args are added, update the CNI configuration file name (10-kube-ovn.conflist) for Kube-OVN on each node by copying the contents of the original Kube-OVN CNI configuration file (typically under /etc/cni/net.d) into the 10-kube-ovn.conflist file
  • Create an add-on with the chaining YAML as defined in Step 3
  • Create an add-on with Cilium and update the following values in the Cilium values file as shown in Step 4

⚠️ Important Note

Cilium Values for Networking Setup

Update the following values in the Cilium values file and apply the changes

 cni.chainingMode=generic-veth \
 cni.customConf=true \
 cni.configMap=cni-configuration \
 routingMode=native \
 enableIPv4Masquerade=false \
 devices="eth+ ovn0 genev_sys_6081 vxlan_sys_4789" \
 enableIdentityMark=false

Configuration Guide for Cilium CNI based on Versions:

For Cilium version 1.16.3 or later, use the following configuration in the values file:

k8sServiceHost: "auto"  
k8sServicePort: "6443"  

For older versions, use the following configuration in the values file:

k8sServiceHost: "k8master.service.consul"  
k8sServicePort: "6443"  

  • Create a new version of the cluster blueprint and add these three (3) add-ons to the blueprint


  • Update the cluster blueprint with the new version


Verify that all three add-ons have been deployed to the cluster.


Once the deployment is successful, retrieve the pod details to confirm that the Kube-OVN and Cilium CNIs are running.

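One quick way to do this from the CLI (assuming kubectl access to the cluster) is to list the kube-system pods and filter for the two CNIs:

kubectl -n kube-system get pods -o wide | grep -E 'kube-ovn|cilium'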