Optimizing Amazon's VPC CNI for your EKS Clusters Made Easy with Rafay¶
Amazon Elastic Kubernetes Service (EKS) is a managed service that you can use to run Kubernetes on AWS without needing to install, operate, and maintain your own Kubernetes control plane or nodes. Kubernetes clusters require a Container Network Interface (CNI) plugin that is responsible for cluster networking. One of the options available with EKS is the Amazon VPC CNI, which allows your Kubernetes pods to use IP addresses defined within your VPC's subnets. While this provides more control and flexibility to businesses, it also comes with its own set of challenges.
While the benefits of managing and customizing the Amazon VPC CNI on EKS are significant, the process can be challenging and time consuming, particularly if you lack experience with Kubernetes or Amazon's VPC and its resources.
This is where Rafay's EKS integration comes in handy. In this blog, we'll explore how Rafay's Platform can address these pain points and simplify the management process.
Simple Cluster Management Workflows¶
One of the biggest advantages of using Rafay's Kubernetes Platform to manage EKS clusters and their CNI is the intuitive, user-friendly web console. Rafay provides an easy-to-use UI that makes it simple to configure, provision, and manage your EKS cluster's CNI regardless of where the cluster is deployed. You can create, scale, and manage clusters and their CNI with just a few clicks. Managing the lifecycle of the Amazon VPC CNI on your EKS clusters can be a challenging task, particularly when you need to update the add-on or take advantage of complex functionality like Custom Networking. The Platform simplifies this process by providing an easy-to-use UI for managing the EKS add-ons running on your clusters. You can update the CNI with just a few clicks or with a one-line change in the cluster's spec file, eliminating the need for manual intervention.
Lifecycle Management¶
Creating and managing your EKS clusters' managed add-ons using the console, AWS CLI, or eksctl can be error prone and time consuming, especially when you need to replicate the configuration across a fleet of clusters. The Rafay Platform eliminates this toil by supporting lifecycle operations for the Amazon VPC CNI declaratively via IaC. You only need to provide a single input, such as the add-on version below, and the Platform will kick off the upgrade workflow, saving you significant time and effort.
addons:
- name: vpc-cni
  version: v1.12.2-eksbuild.2
Amazon VPC CNI Optimizations¶
By default, the Amazon VPC CNI allocates an entire ENI and its corresponding IP addresses to the warm pool. If your cluster does not have a lot of pod churn and your address space is limited, you can reduce what is allocated to the warm pool by tuning the following variables (see the sketch after this list):
- WARM_IP_TARGET
- MINIMUM_IP_TARGET
- WARM_ENI_TARGET
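These are environment variables on the aws-node DaemonSet (the VPC CNI) in the kube-system namespace. As a rough sketch, the snippet below shows what a tuned env block might look like; the specific values are illustrative assumptions for a small, address-constrained cluster, not recommendations, and should be adjusted to your cluster's pod density and churn.

# Illustrative env settings for the aws-node container (VPC CNI).
# Values below are assumptions for a small, address-constrained cluster.
env:
- name: WARM_IP_TARGET      # number of free IP addresses kept warm per node
  value: "5"
- name: MINIMUM_IP_TARGET   # minimum number of IP addresses allocated per node
  value: "10"
- name: WARM_ENI_TARGET     # number of spare ENIs (with all their IPs) kept warm
  value: "0"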
When you're self-managing the Amazon VPC CNI add-on, these settings are not preserved after an update, so you have to back them up and restore them afterwards, a step that is easy to miss. Workflows that allow you to declaratively manage and persist these settings across updates are on the horizon.
Custom Networking¶
In the CNI's default configuration, all pods are assigned an IP address from the same subnets used by the cluster's worker nodes. This can quickly exhaust the IP addresses in your subnets as the cluster scales out. In scenarios where it's not possible to recreate the VPC or extend its CIDR block, you can instead deploy the pod network to a newly created, non-routable secondary CIDR block (e.g. 100.64.0.0/10) within the VPC. EKS Custom Networking allows you to provision clusters so that the pod network uses the subnets in the secondary CIDR block, as defined in an ENIConfig custom resource. The Platform provides an integration with Custom Networking that simplifies the management of this complex functionality, saving you time and effort in identifying and resolving issues. The cluster spec below shows Custom Networking configured per availability zone under customCniCrdSpec:
apiVersion: infra.k8smgmt.io/v3
kind: Cluster
metadata:
  name: my-eks-cluster
  project: my-project
spec:
  blueprintConfig:
    name: default
  cloudCredentials: my-cloud-credential
  config:
    addons:
    - name: aws-ebs-csi-driver
      version: latest
    managedNodeGroups:
    - amiFamily: AmazonLinux2
      desiredCapacity: 1
      iam:
        withAddonPolicies:
          autoScaler: true
      instanceType: t3.large
      maxSize: 6
      minSize: 1
      name: my-ng
      privateNetworking: true
      version: "1.23"
      volumeSize: 80
      volumeType: gp3
    metadata:
      name: my-eks-cluster
      region: us-west-2
      tags:
        email: rafay@rafay.co
        env: dev
      version: "1.23"
    network:
      cni:
        name: aws-cni
        params:
          customCniCrdSpec:
            us-west-2a:
            - securityGroups:
              - sg-0f502b379c12735ce
              subnet: subnet-081ff5e370607fafa
            us-west-2c:
            - securityGroups:
              - sg-0f502b379c12735ce
              subnet: subnet-0d336d3350d55a986
            us-west-2d:
            - securityGroups:
              - sg-0f502b379c12735ce
              subnet: subnet-0a4548dabae4b34cb
    vpc:
      clusterEndpoints:
        privateAccess: true
        publicAccess: false
      nat:
        gateway: Single
      subnets:
        private:
          subnet-083bf5944d5ecb3dd:
            id: subnet-083bf5944d5ecb3dd
          subnet-0bce0fb4a1f682e13:
            id: subnet-0bce0fb4a1f682e13
          subnet-0f4534f41b98dd7be:
            id: subnet-0f4534f41b98dd7be
        public:
          subnet-0238aec96d29bc809:
            id: subnet-0238aec96d29bc809
          subnet-0ad39284a3ed57cfe:
            id: subnet-0ad39284a3ed57cfe
          subnet-0fb450e17506bd15d:
            id: subnet-0fb450e17506bd15d
  proxyConfig: {}
  type: aws-eks
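For reference, each availability zone entry under customCniCrdSpec corresponds roughly to an ENIConfig custom resource consumed by the VPC CNI. The sketch below shows what such a resource might look like for the us-west-2a entry above; the Platform manages these for you, so you would not normally author them by hand.

# Approximate ENIConfig generated for the us-west-2a entry above
apiVersion: crd.k8s.amazonaws.com/v1alpha1
kind: ENIConfig
metadata:
  # ENIConfigs are typically named after the availability zone they cover
  name: us-west-2a
spec:
  securityGroups:
  - sg-0f502b379c12735ce
  subnet: subnet-081ff5e370607fafa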
Summary¶
Managing the CNI of your EKS cluster and its advanced features can be a challenging task, particularly for those who are not familiar with Amazon EKS or VPCs. A platform like Rafay dramatically simplifies and streamlines the management process, whether through the web console, IaC, or a declarative model, which makes Rafay's Platform an attractive option for enterprises.
Try It Out¶
If you want to try this out yourself, sign up for a Free Org/Tenant and check out our Getting Started Guide and documentation. We also have a YouTube video that covers the lifecycle management of Amazon EKS Anywhere clusters on bare metal.
The official repository for the Amazon VPC CNI: https://github.com/aws/amazon-vpc-cni-k8s