Install

Overview

By default in Kubernetes today, pods have unrestricted ingress and egress to any other namespace and even the Internet. Many organizations therefore want to enforce a set of security policies that can be used to:

  • Establish a default set of rules for zero-trust, for example, deny all ingress by default unless needed
  • Isolate different users’ or customers’ workloads within a given cluster for micro-segmentation

Cilium is a popular CNI (Container Network Interface) because its eBPF implementation performs well at scale and provides deep, kernel-level insight into your network and security enforcement. One of the capabilities that makes Cilium extremely useful is its ability to chain with other CNIs, which lets customers keep their existing network configuration undisturbed while still using key Cilium features such as network policies.


What Will You Do

In this exercise,

  • You will create a cluster blueprint with a "Cilium" add-on.
  • You will then apply this cluster blueprint to the managed cluster to have Cilium running in chained mode for network policy enforcement.
  • Along with Cilium, you will install Hubble with its Relay and user interface components.

Important

This tutorial describes the steps to create and use a Cilium-based blueprint using the Web Console. The entire workflow can also be fully automated and embedded into an automation pipeline.


Assumptions

  • You have already provisioned or imported an AWS EKS cluster using the controller, and it is in a healthy state.
  • The primary CNI in use by the EKS Cluster is AWS-CNI.

Step 1: Create Cilium Add-On From Catalog

In this example, we will be using Cilium 1.11.7. Cilium is available in the Rafay public catalog, making its deployment straightforward. Follow the steps below to create the add-on.

  • Log in to the Web Console and navigate to your Project as an Org Admin or Infrastructure Admin.
  • Under Infrastructure, select "Namespaces" and create a new namespace called "cilium-ns". Select Wizard for the Type.
  • In the wizard's General section, click save and go to placement. In the Placement section, make sure to select the cluster you are using for this exercise, click save & go to publish, and then click publish.
  • Under Infrastructure, go to Add-Ons and click "New Add-On" -> "Create New Add-On From Catalog". Select Cilium version 1.11.7 or above.
  • Enter “cilium-add-on” for the name and select the “cilium-ns” namespace you created earlier. Validate that the add-on has been created successfully.
  • Next, you will be asked to enter a version. Enter “version 1” and make sure “cilium” is selected for the Helm chart.
  • For the Helm chart installation, a values YAML file needs to be supplied. For chained mode to work, and for the right pieces to be deployed, the file must configure the following: a) the Cilium agent and operator; b) chained mode enabled with AWS-CNI as the primary CNI; c) IPv4 tunneling disabled, so that general network traffic is handled by the default AWS-CNI; d) IPv4 masquerading disabled, so that IPAM is handled by the default AWS-CNI; e) Hubble as an observability tool, with its UI and Relay components.

Thankfully, for this recipe, we have bundled all of this into a values.yaml file which you can download here:

Values YAML file
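For reference, the downloaded file sets Helm values along these lines (a sketch based on the Cilium 1.11 chart's value names; the downloaded values.yaml is authoritative):

```yaml
# Chain Cilium behind the existing primary CNI (AWS-CNI)
cni:
  chainingMode: aws-cni

# Leave routing to AWS-CNI: no Cilium overlay tunnel
tunnel: disabled

# Leave IPAM/NAT to AWS-CNI: no Cilium masquerading
enableIPv4Masquerade: false

# Deploy Hubble with its Relay and UI components
hubble:
  relay:
    enabled: true
  ui:
    enabled: true
```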

  • Click "Upload Files" and upload the values.yaml file you just downloaded. Then click "SAVE CHANGES".

Create Cilium add-on


Step 2: Create the blueprint

Now, we are ready to assemble a custom cluster blueprint using the Cilium add-on.

  • Under Infrastructure, select "Blueprints"
  • Create a new blueprint called "cilium-blueprint"
  • Select "New Version" and enter "version 1"
  • Under Add-Ons, select "ADD MORE" and choose the "cilium-add-on" add-on created in Step 1. Make sure you are using "version 1" for the version.

Add Cilium add-on to blueprint

  • Click "SAVE CHANGES"

Cilium blueprint configuration


Step 3: Apply Blueprint

Now, we are ready to apply this blueprint to a cluster.

  • Under Infrastructure, go to Clusters and find your target cluster. Make sure the cluster is reachable, its control plane is healthy, it is operational, and the blueprint sync is successful.
  • Click on Options for the target cluster and select "Update Blueprint".
  • Select "cilium-blueprint" and "version 1" from the dropdowns.
  • Click on "Save and Publish".

This will start the deployment of the add-ons configured in the cilium blueprint to the targeted cluster. Click on the arrow next to the "Blueprint Sync" status to see the status of the blueprint sync. The blueprint sync process can take a few minutes. Once complete, the cluster will display the current cluster blueprint details and whether the sync was successful or not.

Cilium blueprint sync


Step 4: Validate Cilium installation

  • Under Infrastructure, go to Clusters and find your cluster. Click on Pods.
  • Go to the namespace drop-down menu and select “cilium-ns”
  • You should see the following components deployed, with the Cilium agent running on every node in your cluster, all pods showing a green status, and QoS labeled as BestEffort:
    • Cilium-operator
    • Cilium agent along with Cilium CLI on each node
    • Hubble-relay
    • Hubble-ui

If any of these pods are in a CrashLoopBackOff state, the installation has failed.
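The same check can be done from the command line with kubectl, using the cluster's kubeconfig (pod names will differ in your cluster; `ds/cilium` assumes the DaemonSet name the chart uses by default):

```shell
# List the Cilium and Hubble pods deployed by the add-on
kubectl get pods -n cilium-ns

# Query the agent's health summary on one node
kubectl -n cilium-ns exec ds/cilium -- cilium status --brief
```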

Cilium components


Step 5: Enable Port-Forwarding

In order to access the Hubble interface, we will need to enable access to the frontend application using port-forward. To do this, download the cluster's kubeconfig and use the kubectl CLI:

kubectl port-forward -n cilium-ns svc/hubble-ui --address 0.0.0.0 --address :: 12000:80

Forwarding from 0.0.0.0:12000 -> 8081
Forwarding from [::]:12000 -> 8081
Handling connection for 12000

Step 6: View Data

You can now access the Hubble interface by visiting the following link: http://0.0.0.0:12000

Hubble Dashboard


Recap and Next steps

Congratulations! You have successfully created a custom cluster blueprint with the Cilium add-on and applied it to a cluster.

Next, you can do the following to leverage Cilium network policy enforcement and Hubble for observability:
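For example, a zero-trust starting point is a default-deny ingress policy for a namespace. Cilium enforces standard Kubernetes NetworkPolicy resources, so a sketch could look like the following (the "my-app" namespace is illustrative):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: my-app        # illustrative namespace
spec:
  podSelector: {}          # selects all pods in the namespace
  policyTypes:
  - Ingress                # no ingress rules listed -> all ingress denied
```

Once applied, dropped flows into the namespace become visible in the Hubble UI, which is a convenient way to confirm the policy is being enforced.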