CNI Customization
CNI Providers for Kubernetes¶
Networking is a critical component that ensures smooth communication between containers, services, and applications. To make this process simpler and more flexible, Kubernetes clusters support various Container Network Interface (CNI) providers. These providers enable the configuration of networking plugins that help manage networking resources and communication across containers.
Upstream Kubernetes clusters, in particular, support three prominent CNI providers: Cilium, Calico, and Kube-OVN. Each of these providers has its own unique set of features, allowing users to select the best networking solution for their specific needs.
💡 Tip: Users can add any CNI when creating an addon; however, Cilium, Calico, and Kube-OVN are thoroughly qualified and tested to work seamlessly.
Users can choose and customize their preferred CNI provider's configuration through multiple interfaces, including UI, Terraform, RCTL, and API, ensuring seamless integration with their Kubernetes environments.
Calico¶
Calico is renowned for its flexibility in IP address management and is widely used to connect virtual machines or containers. It offers robust networking capabilities through its CNI plugin, which integrates seamlessly with Kubernetes, enabling users to efficiently manage networking across their containerized environments.
Cilium¶
Cilium uses eBPF (Extended Berkeley Packet Filter) technology to provide high-performance networking, enhanced security, and deep network visibility. It is especially beneficial for users looking for advanced load balancing, security policies, and observability in cloud-native environments.
⚠️ Important Note: Configuration Guide for Cilium CNI based on Versions
For Cilium version 1.16.3 or later, use the following configuration in the values file:

```yaml
k8sServiceHost: "auto"
k8sServicePort: "6443"
```

For older versions, use the following configuration in the values file:

```yaml
k8sServiceHost: "k8master.service.consul"
k8sServicePort: "6443"
```
Kube-OVN¶
Kube-OVN integrates OVN (Open Virtual Network) with Kubernetes, providing a solution for advanced networking setups. It supports both overlay and underlay networking, centralized IP management, and advanced network policies, making it ideal for users who need scalable, reliable, and highly customizable network configurations.
Why Is This Useful?¶
The introduction of multiple CNI providers allows users to choose a networking solution that best fits their specific requirements, whether it’s for performance, security, or flexibility. For instance, users who prioritize high security and advanced network observability might prefer Cilium, while those requiring simplified IP address management could lean toward Calico.
Additionally, Kube-OVN is a great choice for users who need more complex network setups, such as integration with OVN for large-scale networking environments that also require detailed control over network policies. The ability to choose between these options means that users are not forced into a one-size-fits-all solution and can instead fine-tune their clusters to meet the demands of their workloads.
CNI Customization¶
Previously, users were limited to selecting Container Network Interfaces (CNIs) from a predefined CNI list, which did not allow for any customization of CNI values. With the latest enhancement, users now have the flexibility to customize CNI values by leveraging addons and blueprints.
⚠️ Important Note: Required Labels for CNI Add-ons
When creating a new add-on, it is mandatory to include the following labels to ensure proper configuration:
- key: `rafay.type` with value: `cni`
- key: `rafay.cni.name` with one of the following values: `cilium`, `calico`, or `kube-ovn`
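As a sketch, the two required label pairs look like this when expressed as key/value metadata. The surrounding structure is illustrative and depends on the interface (UI, RCTL, API, or Terraform) used to create the add-on:

```yaml
# Required labels on a CNI add-on (shape is illustrative; set these
# through whichever interface you use to create the add-on)
labels:
  rafay.type: cni
  rafay.cni.name: cilium   # or: calico, kube-ovn
```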
There are two approaches available for customizing CNI values:
Attaching Helm Chart and Value Files¶
Users can upload the required Helm chart and value files for their chosen CNI through any interface—UI, RCTL, API, or Terraform. This method provides flexibility to customize the CNI values based on specific environment requirements. It’s important to note that adding labels is mandatory when creating a CNI add-on, as these labels are essential for proper configuration.
Here is an example of creating an add-on with the Kube-OVN CNI; add-ons can be created similarly with the Cilium and Calico CNIs.
To customize Kube-OVN CNI values using Helm charts and value files, follow these steps:
Step 1: Create a Namespace¶
- Create a Namespace
Step 2: Create an Add-On¶
- Create a New Add-on with the previously created namespace and click Create
Example Labels for This Case:
- key: `rafay.type`, value: `cni`
- key: `rafay.cni.name`, value: `kube-ovn`
- Once the labels are added, click New Version
- Provide the version name, and upload the kube-ovn helm chart along with the values file
- To customize the values of the Kube-OVN CNI, click the edit icon and modify the required values in the editor, as shown below
🔑 Key Point:
When using Kube-OVN on HA clusters, add the master node IPs (comma-separated) in the `values.yaml` file before provisioning. Ensure that the replica count matches the number of IPs provided.

⚠️ Important Note: CIDR Configuration for Cilium and Calico CNIs

- For Cilium CNI, the `clusterPoolIPv4PodCIDRList` field in the Helm values file must match the Pod Subnet (Cluster Networking Pod Subnet) specified during cluster provisioning
- For Calico CNI, the `cidrs: [ <Node-IP-cidr> ]` field in the Calico Helm values file must match the same Pod Subnet
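As an illustration, a Cilium values-file excerpt might pin the pod CIDR list as shown below. The `10.244.0.0/16` subnet is a placeholder—substitute the Pod Subnet entered during provisioning—and the exact nesting can vary by chart version:

```yaml
# Cilium Helm values excerpt -- the pod CIDR list must match the
# Pod Subnet specified during cluster provisioning
ipam:
  mode: cluster-pool
  operator:
    clusterPoolIPv4PodCIDRList:
      - "10.244.0.0/16"   # placeholder; use your cluster's Pod Subnet
```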
- Click Update if any modifications are made
- Click Save Changes to complete the add-on creation
Here is the newly created add-on with the Kube-OVN CNI Helm charts and value files.
Step 3: Create a Blueprint¶
- Once the add-on is created, create a blueprint
- Provide the required details in the General section and click Configure Add-Ons
- Select the newly created add-on kube-ovn and the corresponding version from the drop-down
- Click Save Changes
The blueprint is now created with the Kube-OVN CNI add-on
Step 4: Create a Cluster¶
During cluster creation, select the newly created blueprint kube-ovn, provide the other configuration details as required, and proceed with cluster provisioning.
Important
When adding a CNI as part of a blueprint, the usual CNI selection step during cluster provisioning is skipped. Even if the user selects a different CNI in the advanced settings alongside the blueprint CNI, that selection is ignored; the blueprint CNI configuration takes priority and is applied to the cluster.
Once the cluster is created, run the command `kubectl get pods -A` to view the Kube-OVN CNI running status, as shown in the image below.
Using Predefined Add-On CNI Catalogs¶
For users who prefer a simpler approach, Cilium, Calico, and Kube-OVN are available through the add-on catalog. In this case, users don’t need to upload individual Helm charts and value files, as these CNIs are already packaged with the necessary resources in the catalog. Users can create a namespace and simply select the desired CNI package from the add-on catalog while creating the add-on, streamlining the process and ensuring all required files are included.
Once the add-on with the required CNI is created, add the mandatory labels, and it becomes available for use during blueprint creation. Users can incorporate this add-on when defining a new blueprint (proceed from Step 3), which can then be applied during cluster creation to deploy the customized CNI settings.
Day-2 Operations¶
In addition, users now have the option to modify CNI values as part of Day 2 operations. To do this, create a new version of the add-on by uploading an updated Helm chart and value files or by editing the existing value file. Below is an example where the `ENABLE_ECMP` value is modified from `true` to `false`. Click Update to apply the changes.
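For example, the edited values file for the new add-on version might contain an entry like the following. The parent key under which `ENABLE_ECMP` sits depends on the Kube-OVN chart version, so treat the nesting as illustrative:

```yaml
# Kube-OVN values excerpt -- ECMP disabled in the new add-on version
func:
  ENABLE_ECMP: false   # previously true
```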
Update the cluster blueprint with the new addon version
Now, update the cluster blueprint to deploy the latest or modified Kube-OVN CNI to the cluster. This enhancement provides added flexibility for ongoing network configuration adjustments that were previously unavailable.
Once the blueprint is updated, users can view the status of the latest blueprint deployment to the cluster, as shown below.
Migration to Blueprint-Based CNI Configuration¶
For older upstream clusters currently using Cilium or Calico without a blueprint-based setup, migrating to a blueprint-based configuration for any upgrades or other Day 2 operations requires ensuring that the add-on name remains the same to maintain compatibility. Add the required labels, including `rafay.type: cni` and `rafay.cni.name: cilium` or `rafay.cni.name: calico`, based on the CNI type in use. Additionally, set the namespace for the add-on to `kube-system`.
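Put together, a migrated Cilium add-on keeps its original name, carries the two required labels, and targets the `kube-system` namespace. Sketched as add-on metadata (structure illustrative; values taken from this section):

```yaml
# Migration checklist expressed as add-on metadata (illustrative shape)
name: <existing-addon-name>   # must remain unchanged for compatibility
namespace: kube-system
labels:
  rafay.type: cni
  rafay.cni.name: cilium      # or: calico
```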