Overview

In Kubernetes, there are four distinct networking requirements that must be addressed.

Networking Requirement    Description
Container to Container    Addressed by the pod and localhost communication
Pod to Pod                Addressed by CNI plugins
Pod to Service            Addressed by Services
External to Service       Addressed by Services
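
For example, the Pod-to-Service and External-to-Service requirements in the table above are met by an ordinary Service object. The sketch below is a minimal, hypothetical example; the name, selector, and ports are placeholders:

apiVersion: v1
kind: Service
metadata:
  name: example-service        # hypothetical name
spec:
  selector:
    app: example               # forwards traffic to pods carrying this label
  ports:
    - port: 80                 # port exposed by the Service
      targetPort: 8080         # port the backing pods listen on

Exposing the Service outside the cluster (the External-to-Service case) is typically done by setting the Service type to NodePort or LoadBalancer.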

The Container Network Interface (CNI) allows container networking to be configured when containers are created or destroyed. CNI's goal is to provide a generic, plugin-based networking solution for containers.

Although the Kubernetes networking model requires certain network features, it is inherently pluggable, allowing for flexibility and choice of implementation.

Several projects (CNI plugins) have been released to address specific operating environments. These plugins satisfy the Kubernetes networking requirements while providing the networking capabilities that cluster administrators need. The CNI plugin is responsible for wiring up the container, i.e. it does all the work needed to get the container onto the network.
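
As a rough illustration of what "wiring up" involves, each node carries a CNI network configuration file (conventionally under /etc/cni/net.d) that selects a plugin and its IPAM settings. The snippet below is a minimal sketch using the reference bridge plugin; the network name, bridge name, and subnet are illustrative assumptions, not what the Controller actually writes:

{
  "cniVersion": "0.3.1",
  "name": "example-net",
  "type": "bridge",
  "bridge": "cni0",
  "isGateway": true,
  "ipMasq": true,
  "ipam": {
    "type": "host-local",
    "subnet": "10.244.0.0/24"
  }
}

The container runtime invokes the named plugin with ADD and DEL commands as containers are created and destroyed, which is exactly the lifecycle the CNI specification standardizes.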


CNI and Cluster Types

Upstream Kubernetes

The Controller provides zero-touch CNI integration for upstream Kubernetes clusters, whether on bare metal, on VMs, pre-packaged, or in public clouds. All CNI components are automatically deployed and configured by the Controller during the cluster provisioning process. The following CNIs are supported:

  • Canal (Calico + Flannel, described below)


Amazon EKS

The Amazon VPC Container Network Interface (CNI) plugin is used for Amazon EKS clusters provisioned by the Controller.
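
To verify that the VPC CNI is running on an EKS cluster, one quick check (assuming the plugin's standard aws-node DaemonSet name in kube-system) is:

kubectl -n kube-system get daemonset aws-node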


Imported Clusters

The existing CNI in an imported cluster is not changed or updated by the Kubernetes Management Operator.


Viewing CNI Resources

We provide multiple mechanisms to view and monitor the status of the CNI resources deployed on a managed Kubernetes cluster. The CNI resources reside in the "kube-system" namespace, ensuring that only privileged cluster administrators can view and change this critical cluster resource.


k8s Dashboards

Users can view and monitor the CNI resources running on a managed Kubernetes cluster using the k8s resources dashboard in the Web Console.

  • Click on the cluster name
  • Click on Resources
  • Click on Pods
  • Search for canal

You should see something like the screenshot below. Notice that the "canal" pod contains two containers:

  • calico
  • flannel

CNI Resources via Dashboard

To view the canal pod's dashboard, click on the pod name. You will be presented with a detailed dashboard for the pod.

Canal Pod Dashboard


kubectl

Users can also view the CNI resources running on the upstream Kubernetes cluster using kubectl by issuing the following command:

kubectl -n kube-system get po --selector=k8s-app=canal
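
To confirm the two containers (calico and flannel) noted in the dashboard section above, the container names can also be read directly from the pod spec; for example, with a jsonpath query against the first canal pod:

kubectl -n kube-system get po --selector=k8s-app=canal -o jsonpath='{.items[0].spec.containers[*].name}'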

Users can also view the CNI resources using the browser-based, integrated kubectl shell in the Web Console. An illustrative screenshot is shown below.

CNI Resources via KubeCTL


About Canal

Canal is another name for "Calico + Flannel", where Calico handles policy management and Flannel manages the network itself. This combination brings in Calico's support for the Kubernetes NetworkPolicy feature, while utilizing Flannel's UDP-based network traffic to provide an easier setup experience that works in a wider variety of host network environments without special configuration.


About Flannel

Flannel is a popular and reliable CNI plugin, and one of the most mature examples of a networking fabric for container orchestration systems. It is intended to enable better inter-container and inter-host networking.

Flannel can use the Kubernetes cluster’s existing etcd to store its state information via the Kubernetes API, avoiding the need to provision a dedicated data store.

  • Flannel configures a Layer 3 IPv4 overlay network.
  • A large internal network is created that spans across every node within the cluster.
  • Within this overlay network, each node is given a subnet from which to allocate IP addresses internally (see the command after this list).
  • As pods are provisioned, the Docker bridge interface on each node allocates an address for each new container.
  • Pods within the same host can communicate using the Docker bridge, while pods on different hosts will have their traffic encapsulated in UDP packets by flanneld for routing to the appropriate destination.
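
The per-node subnet mentioned in the list above is recorded in each node's spec.podCIDR field (assuming Flannel leases subnets via the Kubernetes API as described earlier), so it can be inspected directly:

kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.podCIDR}{"\n"}{end}'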

Encapsulation and Routing

Flannel supports several backends for encapsulation and routing. The Controller configures and enables the recommended default, VXLAN, because it offers good performance and does not require manual intervention.
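
The backend is selected in Flannel's network configuration, which is typically stored under a net-conf.json key in Flannel's ConfigMap when Flannel reads its state via the Kubernetes API. A minimal sketch, assuming the common 10.244.0.0/16 pod network:

{
  "Network": "10.244.0.0/16",
  "Backend": {
    "Type": "vxlan"
  }
}

Other backend types such as host-gw and udp exist, but VXLAN is the one the Controller enables.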


About Calico

Calico's network policy capabilities layered on top of Flannel's base networking help provide additional security and control.

Calico’s rich network policy model makes it easy to lock down communication so that the only traffic that flows is the traffic you want to flow. Think of Calico’s security enforcement as wrapping each of your workloads with its own personal firewall that is dynamically reconfigured in real time as you deploy new services or scale your application up or down.
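
Because Canal enforces the standard Kubernetes NetworkPolicy API, locking down a workload is a matter of declaring the traffic you do want. The policy below is a minimal sketch; the namespace, labels, and port are hypothetical:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend   # hypothetical policy name
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: backend                  # the pods this policy protects
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend         # only frontend pods may connect
      ports:
        - protocol: TCP
          port: 8080                # hypothetical application port

Once applied, all other ingress traffic to the app=backend pods is denied, which is the "personal firewall" behavior described above.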

Calico’s policy engine can enforce the same policy model at the host networking layer and (if using Istio & Envoy) at the service mesh layer, protecting your infrastructure from compromised workloads and protecting your workloads from compromised infrastructure.