In this part of the self-paced exercise, you will provision an Amazon EKS cluster containing two managed node groups. The first managed node group will consist of Linux on-demand compute instances and run the system-level resources, while the second managed node group will consist of Windows instances and run application workloads. The cluster will use the minimal blueprint.
In this step, we will provision an EKS cluster through the web console. We will first deploy the cluster with a single node group for system-level resources, then add the Windows node group to the cluster.
Navigate to the "defaultproject" project in your Org
Select Infrastructure -> Clusters
Click "New Cluster"
Select "Create a New Cluster"
Click "Continue"
Select "Public Cloud"
Select "AWS"
Select "Amazon EKS"
Enter a cluster name
Click "Continue"
Select a previously created Cloud Credential
Select the AWS Region for the cluster
Click "Save Changes"
Click "Edit" on the "Node Group Settings" section
Enter a node group name for the system resources node group
Select "Managed Node Group"
Select "Custom" for "Instance Type"
Enter "t3.large" for the "Custom Instance Type"
Set the "Desired Nodes" to "1"
Set the "Nodes Min" to "1"
Click "Add Key-Value Label" in the "Labels" subsection
Enter "nodes" for the Key
Enter "system" for the Value
Click "Add Taint" in the "Taints" subsection
Click "Create Key-Value-Effect Taint"
Enter "components" for the Key
Enter "system" for the Value
Select "NoSchedule" for the Effect
Click "Save"
Expand the "System Components Placement" section
Click "Add Toleration"
Click "Create Key-Value-Effect Toleration"
Enter "components" for the Key
Enter "system" for the Value
Select "NoSchedule" for the Effect
Click "Save"
Click "Add Key-Value Node Selector" in the "Node Selectors" subsection
Enter "nodes" for the Key
Enter "system" for the Value
Click "Save Changes" to begin provisioning the cluster
Once provisioning is complete, you should see the cluster in the web console with two nodes.
Click on the kubectl link and type the following command
kubectl get nodes -o wide
You should see output similar to the following showing the Linux and Windows nodes.
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
ip-192-168-58-114.us-west-2.compute.internal Ready <none> 80m v1.24.7-eks-fb459a0 192.168.58.114 35.92.212.8 Amazon Linux 2 5.4.226-129.415.amzn2.x86_64 containerd://1.6.6
ip-192-168-80-118.us-west-2.compute.internal Ready <none> 30m v1.24.7-eks-fb459a0 192.168.80.118 44.234.144.188 Windows Server 2019 Datacenter 10.0.17763.3887 containerd://1.6.6
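If you want to list the nodes of only one operating system, the standard kubernetes.io/os node label can be used as a selector (assuming the default labels EKS applies to its nodes):

```shell
# List only the Linux nodes (system node group)
kubectl get nodes -l kubernetes.io/os=linux

# List only the Windows nodes (application node group)
kubectl get nodes -l kubernetes.io/os=windows
```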
Now we will verify the system resources are running on the on-demand Linux node group.
Click "Nodes" in the tree at the top of the page to return to the nodes tab
Locate the "Node ID" of the node in the "managed-system" node group
Click on the "Resources" tab
Click on "Pods" in the left side window
Select "rafay-system" from the namespace drop down menu
Click the gear icon on the right side of the page and select "Node"
You will see that all of the system components are running on the "managed-system" node that was previously identified.
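The same placement check can also be made from the kubectl shell; the wide output includes a NODE column showing where each pod is scheduled (assuming the rafay-system namespace used above):

```shell
# The NODE column should show the managed-system node for every system pod
kubectl get pods -n rafay-system -o wide
```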
In this step, we will create the declarative cluster specification file and use the RCTL CLI to provision the cluster from the specification file. We will first deploy the cluster with a single node group for system-level resources, then add the Windows node group to the cluster.
Save the below specification file to your computer as "eks-windows-cluster.yaml". Note that several sections of the spec, called out below, will need to be updated to match your environment.
apiVersion: infra.k8smgmt.io/v3
kind: Cluster
metadata:
  # The name of the cluster
  name: eks-windows-cluster
  # The name of the project the cluster will be created in
  project: defaultproject
spec:
  blueprintConfig:
    # The name of the blueprint the cluster will use
    name: minimal
    # The version of the blueprint the cluster will use
    version: latest
  # The name of the cloud credential that will be used to create the cluster
  cloudCredentials: aws-cloud-credential
  config:
    # The EKS addons that will be applied to the cluster
    addons:
      - name: kube-proxy
        version: latest
      - name: vpc-cni
        version: latest
      - name: coredns
        version: latest
    managedNodeGroups:
      # The AWS AMI family type the nodes will use
      - amiFamily: AmazonLinux2
        # The desired number of nodes that can run in the node group
        desiredCapacity: 1
        iam:
          withAddonPolicies:
            # Enables the IAM policy for cluster autoscaler
            autoScaler: true
        # The AWS EC2 instance type that will be used for the nodes
        instanceType: t3.large
        # The labels applied to the nodes in the node group
        labels:
          nodes: system
        # The maximum number of nodes that can run in the node group
        maxSize: 2
        # The minimum number of nodes that can run in the node group
        minSize: 1
        # The name of the node group that will be created in AWS
        name: managed-system
        # Apply taints to the node group so that only system resources are scheduled on these nodes
        taints:
          - effect: NoSchedule
            key: components
            value: system
    metadata:
      # The name of the cluster
      name: eks-windows-cluster
      # The AWS region the cluster will be created in
      region: us-west-2
      # The Kubernetes version that will be installed on the cluster
      version: latest
    vpc:
      # AutoAllocateIPV6 requests an IPv6 CIDR block with /56 prefix for the VPC
      autoAllocateIPv6: false
      clusterEndpoints:
        # Enables private access to the Kubernetes API server endpoint
        privateAccess: true
        # Enables public access to the Kubernetes API server endpoint
        publicAccess: false
      # The CIDR that will be used by the cluster VPC
      cidr: 192.168.0.0/16
  # Configure the scheduler to only place system resources on the managed-system node group
  systemComponentsPlacement:
    nodeSelector:
      nodes: system
    tolerations:
      - effect: NoSchedule
        key: components
        operator: Equal
        value: system
  type: aws-eks
Update the following sections of the specification file with details to match your environment
Update the name section with the name of the cluster to be created and the project section with the name of the Rafay project you previously created
name: eks-windows-cluster
project: defaultproject
Update the cloudCredentials section with the name of the AWS cloud credential that was previously created
cloudCredentials: aws-cloud-credential
Update the name and region sections with the cluster name and the AWS region where the cluster will be located
metadata:
  name: eks-windows-cluster
  region: us-west-2
Save the updates that were made to the file
Open Terminal (on macOS/Linux) or Command Prompt (Windows) and navigate to the folder where you saved the file
Execute the following command to provision the cluster from the specification file previously saved
./rctl apply -f eks-windows-cluster.yaml
Log in to the web console
Navigate to your project
Select Infrastructure -> Clusters
Click on the cluster name to monitor progress
Provisioning the infrastructure will take approximately 45 minutes to complete. The final step in the process is the blueprint sync.
Once provisioning is complete, you should see the cluster in the web console with two nodes.
Click on the kubectl link and type the following command
kubectl get nodes -o wide
You should see output similar to the following showing the Linux and Windows nodes.
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
ip-192-168-58-114.us-west-2.compute.internal Ready <none> 80m v1.24.7-eks-fb459a0 192.168.58.114 35.92.212.8 Amazon Linux 2 5.4.226-129.415.amzn2.x86_64 containerd://1.6.6
ip-192-168-80-118.us-west-2.compute.internal Ready <none> 30m v1.24.7-eks-fb459a0 192.168.80.118 44.234.144.188 Windows Server 2019 Datacenter 10.0.17763.3887 containerd://1.6.6
Now we will verify the system resources are running on the on-demand Linux node group.
Click "Nodes" in the tree at the top of the page to return to the nodes tab
Locate the "Node ID" of the node in the "managed-system" node group
Click on the "Resources" tab
Click on "Pods" in the left side window
Select "rafay-system" from the namespace drop down menu
Click the gear icon on the right side of the page and select "Node"
You will see that all of the system components are running on the "managed-system" node that was previously identified.
In this step, we will create a namespace using the RCTL CLI. The namespace will be used to deploy workloads in future steps.
Save the below specification file to your computer as "namespace.yaml". Note that the name and project sections in the spec will need to be updated to match your environment.
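The namespace spec itself is not reproduced in this copy; below is a minimal sketch, assuming the v3 Namespace resource format that matches the cluster spec above, using the "windows" namespace name and "defaultproject" project referenced later in this exercise:

```yaml
apiVersion: infra.k8smgmt.io/v3
kind: Namespace
metadata:
  # The name of the namespace that will be created on the cluster
  name: windows
  # The name of the project the namespace will be created in
  project: defaultproject
```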
In this step, we will create a YAML-based workload and publish it to the cluster. The workload will create a configmap that enables Windows support on the cluster.
By default, the amazon-vpc-cni does not have Windows support enabled. To deploy Windows pods so that each can receive its own VPC IP address, we need to enable Windows support in the EKS control plane. To do this, we will create a configmap in the cluster by deploying a workload.
Save the below specification file to your computer as "configmap.yaml".
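The configmap contents are not shown in this copy; per AWS's documented procedure for enabling Windows support on EKS, the setting lives in the amazon-vpc-cni configmap in the kube-system namespace. A sketch:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: amazon-vpc-cni
  namespace: kube-system
data:
  # Enables the VPC CNI to assign VPC IP addresses to Windows pods
  enable-windows-ipam: "true"
```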
Save the below specification file to your computer as "configmap-workload.yaml". Note that several sections of the spec, called out below, will need to be updated to match your environment.
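The workload spec itself is not reproduced in this copy; based on the fields called out below, a sketch of the RCTL workload metafile, assuming the NativeYaml workload type (the workload name "windows-configmap" matches the publish command used later in this step):

```yaml
# RCTL workload metafile for a YAML-based workload
name: windows-configmap
namespace: windows
project: defaultproject
type: NativeYaml
clusters: eks-windows-cluster
payload: configmap.yaml
```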
Update the following section of the specification file with the name of the previously created namespace.
namespace: windows
Update the following section of the specification file with the name of the project where the workload should be created. This should be the same project as the cluster you are using.
project: defaultproject
Update the following section of the specification file with the name of the cluster where the workload will be deployed.
clusters: eks-windows-cluster
Update the following section of the specification file with the name of the workload specification file that was created earlier in this step.
payload: configmap.yaml
Save the updates that were made to the file
Execute the following command to create the workload from the declarative spec file.
./rctl create workload configmap-workload.yaml
Log in to the web console
Navigate to the project in your Org where the cluster is located
Select Applications -> Workloads
You will see the YAML workload has been created, but has not been published
Execute the following command to publish the workload to the cluster
./rctl publish workload windows-configmap
In the web console, you will see the workload is now published.
Additionally, you can use the Zero Trust KubeCTL access to check the configmap on the cluster.
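For example, run from the Zero Trust KubeCTL shell, a command along these lines would confirm the Windows support setting was applied to the VPC CNI configmap:

```shell
# Inspect the amazon-vpc-cni configmap; the data section should
# contain the Windows support setting applied by the workload
kubectl describe configmap amazon-vpc-cni -n kube-system
```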
Navigate to the project in your Org where the cluster is located.