Deploying Custom CNI (Kube-OVN) in Rafay MKS Upstream Kubernetes Cluster Using the Blueprint Add-On Approach¶
In continuation of our Part 1 intro blog on the Kube-OVN CNI, this is Part 2, where we cover how easy it is to manage CNI configurations using Rafay's Blueprint Add-On approach. In the evolving cloud-native landscape, networking requirements are becoming more complex, and platform teams need enhanced control and customization over their Kubernetes clusters. Rafay's support for custom, compatible CNIs allows organizations to select and deploy advanced networking solutions tailored to their needs. While several options are available, this blog focuses specifically on deploying the Kube-OVN CNI. Using Rafay's Blueprint Add-On approach, we will walk through the steps to seamlessly integrate Kube-OVN into an upstream Kubernetes cluster managed by Rafay's Managed Kubernetes Service.
Our upcoming release, scheduled for December in the production environment, introduces several new features and enhancements. Each of these will be covered in separate blog posts. This particular blog focuses on the support and process for deploying Kube-OVN as the primary CNI on an upstream Kubernetes cluster.
Watch a video showcasing how users can customize and configure Kube-OVN as the primary CNI on Rafay MKS Kubernetes clusters.
What is Kube-OVN?¶
Kube-OVN is a feature-rich, Kubernetes-native Container Network Interface (CNI) that delivers advanced networking capabilities such as subnet management, network isolation, and load balancing. It is particularly suitable for organizations that require granular control over their network configurations and traffic policies. It brings SDN features to Kubernetes, which is crucial for organizations looking for better control, flexibility, and scalability in their network architecture.
Common Use Cases¶
Here are some common scenarios where Kube-OVN CNI can be effectively used:
- Multi-Tenant Clusters: For organizations running multi-tenant Kubernetes clusters, Kube-OVN provides robust network isolation features, ensuring that workloads from different tenants cannot communicate unless explicitly allowed (see the sample Subnet after this list).
- High-Availability and Load Balancing: Use Kube-OVN for scenarios requiring load balancing and traffic optimization, such as large-scale microservices deployments where efficient routing and traffic management are crucial.
- Data-Intensive Applications: For workloads like AI/ML, data analytics, or any application requiring high-throughput, low-latency networking, Kube-OVN's advanced traffic management capabilities provide an optimal solution.
- IPv6-Enabled Environments: In cases where dual-stack or pure IPv6 networking is required, Kube-OVN offers seamless support, allowing for modern networking practices and future-proofing the cluster's infrastructure.
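To make the multi-tenant isolation use case concrete, here is a minimal sketch of a Kube-OVN Subnet object that gives one tenant namespace its own private subnet. The subnet name, CIDRs, and namespace below are hypothetical placeholders; the field names follow the Kube-OVN Subnet CRD.

```yaml
# Minimal sketch (hypothetical names and CIDRs): a private subnet bound to a single tenant namespace.
apiVersion: kubeovn.io/v1
kind: Subnet
metadata:
  name: tenant-a-subnet
spec:
  protocol: IPv4
  cidrBlock: 10.66.0.0/16        # tenant-specific pod CIDR
  gateway: 10.66.0.1
  namespaces:
    - tenant-a                   # pods in this namespace get IPs from this subnet
  private: true                  # traffic from other subnets is blocked by default
  allowSubnets:
    - 10.16.0.0/16               # explicitly allow traffic from the default subnet if needed
```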
Steps to Deploy Kube-OVN CNI Using the Blueprint Add-On Approach¶
Step 1: Create a Kube-OVN Add-On¶
- Start by creating a Kube-OVN add-on in the Rafay Platform.
- Fetch the Kube-OVN Helm chart:
    - Option 1: Download it from the Kube-OVN GitHub Releases page.
    - Option 2: Use Helm to fetch the chart (a sample command is shown right below). Replace <chart-version> with the specific version you want to use.
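A minimal sketch of the Helm fetch, assuming the chart is pulled from the upstream Kube-OVN Helm repository (the repository URL and the resulting local archive name are assumptions; adjust them to the source you actually use):

```bash
# Assumed upstream chart repo; point this at your own mirror if required.
helm repo add kubeovn https://kubeovn.github.io/kube-ovn/
helm repo update

# Download the chart archive locally so it can be uploaded to the Rafay Platform.
helm pull kubeovn/kube-ovn --version <chart-version>
```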
- Upload the downloaded Kube-OVN Helm chart to the Rafay Platform.
- Use the following values.yaml configuration file to specify the parameters required for your Kube-OVN deployment:
Important: After defining your desired configuration in values.yaml, ensure you add the following mandatory labels to the Kube-OVN add-on within the Rafay Platform:
rafay.type: cni
rafay.cni.name: kube-ovn
These labels are crucial for Rafay to identify and manage the Kube-OVN add-on effectively.
```yaml
global:
  registry:
    address: docker.io/kubeovn
    imagePullSecrets: []
  images:
    kubeovn:
      repository: kube-ovn
      dpdkRepository: kube-ovn-dpdk
      vpcRepository: vpc-nat-gateway
      tag: v1.12.26
      support_arm: true
      thirdparty: true

image:
  pullPolicy: IfNotPresent

namespace: kube-system
replicaCount: 1
MASTER_NODES: ""

networking:
  NET_STACK: ipv4
  ENABLE_SSL: false
  NETWORK_TYPE: geneve
  TUNNEL_TYPE: geneve
  IFACE: ""
  DPDK_TUNNEL_IFACE: "br-phy"
  EXCLUDE_IPS: ""
  POD_NIC_TYPE: "veth-pair"
  vlan:
    PROVIDER_NAME: "provider"
    VLAN_INTERFACE_NAME: ""
    VLAN_NAME: "ovn-vlan"
    VLAN_ID: "100"
  ENABLE_EIP_SNAT: true
  EXCHANGE_LINK_NAME: false
  DEFAULT_SUBNET: "ovn-default"
  DEFAULT_VPC: "ovn-cluster"
  NODE_SUBNET: "join"
  ENABLE_ECMP: false
  ENABLE_METRICS: true
  NODE_LOCAL_DNS_IP: ""
  PROBE_INTERVAL: 180000
  OVN_NORTHD_PROBE_INTERVAL: 5000
  OVN_LEADER_PROBE_INTERVAL: 5
  OVN_REMOTE_PROBE_INTERVAL: 10000
  OVN_REMOTE_OPENFLOW_INTERVAL: 180
  OVN_NORTHD_N_THREADS: 1
  ENABLE_COMPACT: false

func:
  ENABLE_LB: true
  ENABLE_NP: true
  ENABLE_EXTERNAL_VPC: true
  HW_OFFLOAD: false
  ENABLE_LB_SVC: false
  ENABLE_KEEP_VM_IP: true
  LS_DNAT_MOD_DL_DST: true
  LS_CT_SKIP_DST_LPORT_IPS: true
  ENABLE_BIND_LOCAL_IP: true
  SECURE_SERVING: false
  U2O_INTERCONNECTION: false
  ENABLE_TPROXY: false
  ENABLE_IC: false
  ENABLE_NAT_GW: true
  OVSDB_CON_TIMEOUT: 3
  OVSDB_INACTIVITY_TIMEOUT: 10

ipv4:
  POD_CIDR: "10.16.0.0/16"
  POD_GATEWAY: "10.16.0.1"
  SVC_CIDR: "10.96.0.0/12"
  JOIN_CIDR: "100.64.0.0/16"
  PINGER_EXTERNAL_ADDRESS: "114.114.114.114"
  PINGER_EXTERNAL_DOMAIN: "alauda.cn."

ipv6:
  POD_CIDR: "fd00:10:16::/112"
  POD_GATEWAY: "fd00:10:16::1"
  SVC_CIDR: "fd00:10:96::/112"
  JOIN_CIDR: "fd00:100:64::/112"
  PINGER_EXTERNAL_ADDRESS: "2400:3200::1"
  PINGER_EXTERNAL_DOMAIN: "google.com."

dual_stack:
  POD_CIDR: "10.16.0.0/16,fd00:10:16::/112"
  POD_GATEWAY: "10.16.0.1,fd00:10:16::1"
  SVC_CIDR: "10.96.0.0/12,fd00:10:96::/112"
  JOIN_CIDR: "100.64.0.0/16,fd00:100:64::/112"
  PINGER_EXTERNAL_ADDRESS: "114.114.114.114,2400:3200::1"
  PINGER_EXTERNAL_DOMAIN: "google.com."

performance:
  MODULES: "kube_ovn_fastpath.ko"
  RPMS: "openvswitch-kmod"
  GC_INTERVAL: 360
  INSPECT_INTERVAL: 20
  OVS_VSCTL_CONCURRENCY: 100

debug:
  ENABLE_MIRROR: false
  MIRROR_IFACE: "mirror0"

cni_conf:
  CHECK_GATEWAY: true
  LOGICAL_GATEWAY: false
  CNI_CONFIG_PRIORITY: "01"
  CNI_CONF_DIR: "/etc/cni/net.d"
  CNI_BIN_DIR: "/opt/cni/bin"
  CNI_CONF_FILE: "/kube-ovn/01-kube-ovn.conflist"

kubelet_conf:
  KUBELET_DIR: "/var/lib/kubelet"

log_conf:
  LOG_DIR: "/var/log"

imagePullSecrets: []
nameOverride: ""
fullnameOverride: ""

HYBRID_DPDK: false
HUGEPAGE_SIZE_TYPE: hugepages-2Mi
HUGEPAGES: 1Gi
DPDK: false
DPDK_VERSION: "19.11"
DPDK_CPU: "1000m"
DPDK_MEMORY: "2Gi"

ovn-central:
  requests:
    cpu: "300m"
    memory: "200Mi"
  limits:
    cpu: "3"
    memory: "4Gi"

ovs-ovn:
  requests:
    cpu: "200m"
    memory: "200Mi"
  limits:
    cpu: "2"
    memory: "1000Mi"

kube-ovn-controller:
  requests:
    cpu: "200m"
    memory: "200Mi"
  limits:
    cpu: "1000m"
    memory: "1Gi"

kube-ovn-cni:
  requests:
    cpu: "100m"
    memory: "100Mi"
  limits:
    cpu: "1000m"
    memory: "1Gi"

kube-ovn-pinger:
  requests:
    cpu: "100m"
    memory: "100Mi"
  limits:
    cpu: "200m"
    memory: "400Mi"

kube-ovn-monitor:
  requests:
    cpu: "200m"
    memory: "200Mi"
  limits:
    cpu: "200m"
    memory: "200Mi"
```
- Save the values.yaml file and ensure the configuration matches your cluster's networking requirements; in particular, the pod, join, and service CIDRs should not overlap with your node network, and SVC_CIDR should match the cluster's service CIDR. A quick local render (see the sketch below) can catch templating errors before you create the add-on version.
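As a minimal sanity check, you can render the chart locally against your values.yaml before uploading it. The chart archive name below is a hypothetical placeholder for whatever you downloaded or pulled in the earlier step:

```bash
# Renders the manifests without installing anything; a non-zero exit usually points to a values/templating problem.
helm template kube-ovn ./kube-ovn-<chart-version>.tgz -f values.yaml > /dev/null \
  && echo "values.yaml renders cleanly"
```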
Step 2: Use the Kube-OVN Add-On in a Blueprint¶
- Go to the Blueprints section in the Rafay Platform and create a new blueprint.
- Attach the Kube-OVN add-on you created in Step 1 to this blueprint.
- Configure any additional add-ons as needed for your cluster.
Step 3: Create the Cluster Using the Blueprint¶
- With the blueprint ready, create a new cluster using this blueprint.
- Select the CNI Provider as CNI-via-Blueprint, indicating that the primary CNI is deployed and managed through the blueprint configuration.
- This ensures that the Kube-OVN CNI is deployed as part of the cluster provisioning process.
- Monitor the deployment and verify that Kube-OVN is successfully integrated and operational in your cluster.
Step 4: Verify the Deployment of Kube-OVN¶
- Once the blueprint is applied, verify that the Kube-OVN components are deployed and healthy, for example with the commands shown below.
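A minimal verification sketch, assuming the default chart settings shown above (components deployed to the kube-system namespace with the standard Kube-OVN workload names):

```bash
# All Kube-OVN control-plane and per-node components should be Running.
kubectl -n kube-system get pods -o wide | grep -E 'ovn-central|ovs-ovn|kube-ovn'

# Nodes should report Ready once the CNI is functional.
kubectl get nodes

# Kube-OVN creates Subnet objects for the default pod and join networks.
kubectl get subnets.kubeovn.io
```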
Day 2 Operations with Kube-OVN Using Rafay’s Blueprint Add-On¶
Rafay’s Blueprint Add-On approach makes Day 2 operations, like upgrades and configuration changes, seamless and efficient.
Upgrading Kube-OVN CNI Version¶
- Create a New Add-On Version: Upload the new Kube-OVN Helm chart and updated values.yaml to the Rafay Platform.
- Update the Blueprint: Associate the new add-on version in a new blueprint version.
- Apply the Blueprint: Update the cluster with the new blueprint, and Rafay will handle the upgrade smoothly.
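Once the updated blueprint has been applied, you can confirm which version is actually running. This is a sketch that assumes the default workload names created by the chart (the kube-ovn-cni DaemonSet and kube-ovn-controller Deployment):

```bash
# Print the image (and therefore the Kube-OVN version) currently rolled out.
kubectl -n kube-system get ds kube-ovn-cni \
  -o jsonpath='{.spec.template.spec.containers[0].image}{"\n"}'
kubectl -n kube-system get deploy kube-ovn-controller \
  -o jsonpath='{.spec.template.spec.containers[0].image}{"\n"}'
```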
Configuration Changes¶
To modify a setting such as ENABLE_ECMP, simply update values.yaml (see the snippet below), create a new add-on version, update the blueprint, and apply it to the cluster.
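For example, enabling ECMP is a one-line change in the networking section of the values.yaml shown earlier (flipping the default of false used above):

```yaml
networking:
  ENABLE_ECMP: true   # enable equal-cost multi-path (ECMP) routing
```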
With this approach, Rafay ensures easy and automated Day 2 operations for Kube-OVN, minimizing complexity and downtime.
Conclusion¶
Deploying a custom CNI like Kube-OVN in an upstream Kubernetes cluster using Rafay's Blueprint Add-On approach brings flexibility and control to your networking setup. By leveraging Rafay's Managed Kubernetes Service, you can automate and standardize the deployment of advanced networking features seamlessly.
Thanks to the readers who spend time with our product blogs and suggest ideas.
- Free Org: Sign up for a free Org if you want to try this yourself with our Get Started guides.
- Live Demo: Schedule time with us to watch a demo in action.
- Rafay's AI/ML Products: Learn about Rafay's offerings in AI/ML Infrastructure and Tooling.
- Upcoming Events: Meet us in person at the Rafay booth at one of our upcoming events.