Overview
The Network Policy service is an out-of-the-box offering that allows platform teams and developers to build zero-trust network security and visibility into their Kubernetes environments without sacrificing governance, access control, or performance. Key advantages of this solution include:
- Standardized installation and deployment of network policies across a fleet of clusters via blueprinting, including the ability to define defaults that enable a Day 0 zero-trust model
- Access controls on network flows based on assigned roles and assets, allowing platform teams to let developers view the traffic for their respective applications/namespaces while still maintaining RBAC controls
- Real-time visibility and historical network flows for service monitoring and application debugging
- Assignment of policies to multiple levels of infrastructure, including clusters and namespaces, with the ability to fine-tune based on application needs
Visit us here for a quick Network Policy Manager Demo
The offering broadly includes two key features:
- Network Policy management via Cilium including cluster-wide and namespace-scoped policies
- Network Visibility (Layer 4) that is access-controlled based on role
Network Policy Overview¶
Using the console, API, RCTL, or GitOps, an admin can create network policies of the following types:
- Cluster: these policies are scoped at the cluster level and should be used to enforce default sets of rules across the cluster
- Namespace: these policies are scoped at the namespace level and should be used to protect individual pods or applications in a given workspace
Creating cluster-wide policies and namespace policies involves different workflows and requires specific roles; see the RBAC section below to learn more. These policies can then be assigned to the appropriate assets and standardized across your project infrastructure.
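To make the two scopes concrete, here is a sketch of what such policies might look like when Cilium is the enforcement engine. All names, labels, and namespaces below are illustrative assumptions, not part of the product; the resource kinds (`CiliumClusterwideNetworkPolicy` and `CiliumNetworkPolicy`) are Cilium's standard CRDs for cluster-scoped and namespace-scoped policy.

```yaml
# Cluster-scoped policy (illustrative): a Day 0 zero-trust default that
# locks down egress for every endpoint in the cluster, permitting only DNS.
apiVersion: cilium.io/v2
kind: CiliumClusterwideNetworkPolicy
metadata:
  name: default-egress-dns-only        # hypothetical name
spec:
  endpointSelector: {}                 # all endpoints, cluster-wide
  egress:
  - toEndpoints:
    - matchLabels:
        k8s:io.kubernetes.pod.namespace: kube-system
        k8s-app: kube-dns
    toPorts:
    - ports:
      - port: "53"
        protocol: UDP
---
# Namespace-scoped policy (illustrative): allow only an application's
# frontend pods to reach its backend pods inside one namespace.
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: allow-frontend-to-backend      # hypothetical name
  namespace: demo-app                  # hypothetical namespace
spec:
  endpointSelector:
    matchLabels:
      app: backend
  ingress:
  - fromEndpoints:
    - matchLabels:
        app: frontend
```

Because a Cilium policy that selects an endpoint and declares a direction (ingress or egress) puts that endpoint into default-deny for that direction, the cluster-scoped rule above effectively denies all non-DNS egress until narrower namespace policies open specific paths.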
Network Visibility Overview¶
Users have real-time and historical visibility into network traffic flows. This can be used to:
- Validate network access to certain pods/namespaces based on application requirements
- Validate network policy rules are in effect by visualizing network traffic flows
- Troubleshoot applications when network connectivity is down based on real-time and historical traffic flows
Cilium Integration¶
Starting with the 2.6 release, the Rafay platform deprecates its dependency on the Cilium Container Network Interface (CNI) for network policy enforcement. The key changes include:
- No tight coupling with Cilium for managed network policy. Network policy enforcement is handled by the primary CNI in the cluster or by any other plugin (such as Cilium in chaining mode) installed as an addon.
- Rules and policies continue to facilitate the management and deployment of policies to the cluster.
- No traffic visibility through Rafay’s network policy dashboard. Users can deploy their own addon (such as Cilium’s Hubble component) for visibility.
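The "Cilium in chaining mode" option mentioned above can be sketched as Helm values for the Cilium chart. The value names follow Cilium's documented generic-veth chaining setup; treat the exact values as an assumption to verify against the linked chaining-mode guide and your primary CNI.

```yaml
# Illustrative Helm values for running Cilium as a secondary, chained CNI
# purely for network policy enforcement (generic-veth chaining mode).
cni:
  chainingMode: generic-veth     # chain Cilium after the primary CNI plugin
  customConf: true               # supply a custom chained CNI configuration
  configMap: cni-configuration   # ConfigMap containing the chained conflist
routingMode: native              # the primary CNI keeps providing pod routing
enableIPv4Masquerade: false     # leave masquerading to the primary CNI
```

On EKS with aws-cni, Cilium documents a dedicated `cni.chainingMode: aws-cni` rather than generic-veth.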
Network Policy Enforcement Recommendations by Cluster Type and Primary CNI¶
| Cluster Type | Primary CNI | Supported? (Deprecated) | Recommendation |
|---|---|---|---|
| AKS | azure | Yes | Use azure’s policy enforcement capability (OR) deploy Cilium as a custom addon in chaining mode |
| AKS | calico | Yes | Use calico’s policy enforcement capability (OR) deploy Cilium as a custom addon in chaining mode |
| AKS | kubenet | No | No |
| AKS | azure-cni-overlay | No | Use azure-cni-overlay with Cilium as the dataplane (OR) deploy Cilium as a custom addon in chaining mode |
| EKS | aws-cni | Yes | Use aws-cni’s policy enforcement capability (OR) deploy Cilium as a custom addon in chaining mode |
| EKS | calico | Yes | Use calico’s policy enforcement capability (OR) deploy Cilium as a custom addon in chaining mode |
| MKS | calico | Yes | Use calico’s policy enforcement capability (OR) deploy Cilium as a custom addon in chaining mode |
| MKS | canal | Yes | Deploy Cilium as a custom addon in chaining mode |
| MKS | cilium | No | Use Cilium’s policy enforcement |
To deploy Cilium as a custom addon in chaining mode, refer to Cilium in Chaining-Mode for Network Policy.
RBAC¶
The following table lists the roles that can access specific components of the Network Policy Management Service.
Feature | Roles |
---|---|
Cluster Network Policies | Infra Admin, Org Admin |
Namespace Network Policies | Workspace Admin, Project Admin, Org Admin |
For more information on what these roles do generally, see the roles documentation.
Prerequisites & Considerations¶
Support is based on a combination of cluster type and primary CNI.
- Because Cilium is installed in chaining mode, Network Policy Management requires a specific primary CNI to be running in the cluster. The primary CNI cannot be Cilium itself, since Cilium is installed as a secondary CNI to perform network policy enforcement.
- The following cluster type/CNI combinations are supported (created or imported clusters):
Cluster Type | Primary CNI | Supported |
---|---|---|
Amazon EKS | AWS-CNI | YES |
Amazon EKS | Calico | YES |
Azure AKS | Azure-CNI | YES |
Azure AKS | Kubenet + Calico | YES |
“Upstream Kubernetes” on Bare Metal and VM Environments | Calico | YES |
Microk8s | Calico | YES |
- The Monitoring & Visibility Add-On (Prometheus) is required for viewing traffic flows in the network visibility dashboard.
- Any pods/workloads that existed before Cilium/Network Policy Manager was deployed to the cluster must be RESTARTED for policies to take effect. Pods/workloads created after deployment do NOT need to be restarted.