Part 5: Upgrade
What Will You Do
In this part of the self-paced exercise, you will upgrade the Kubernetes version on your Amazon EKS cluster using the declarative cluster specification that was used to create the cluster.
Step 1: Cluster Spec
- Open Terminal (on macOS/Linux) or Command Prompt (Windows) and navigate to the folder where you forked the Git repository
- Navigate to the folder "/getstarted/karpenter/cluster"
The "cluster.yaml" file contains the declarative specification for our Amazon EKS cluster.
Cluster Details
In the cluster spec file, we define the Kubernetes version for both the control plane and the nodegroups of the cluster:
version: "1.30"
Both "version" fields in the spec below (the control plane's and the managed nodegroup's) will need to be updated to 1.30.
apiVersion: infra.k8smgmt.io/v3
kind: Cluster
metadata:
  name: karpenter-cluster
  project: defaultproject
spec:
  blueprintConfig:
    name: karpenter-blueprint
    version: v1
  cloudCredentials: aws-cloud-credential
  config:
    addons:
    - name: kube-proxy
      version: latest
    - name: vpc-cni
      version: latest
    - name: coredns
      version: latest
    - configurationValues: |-
        controller:
          tolerations:
            - effect: NoSchedule
              key: nodeInfra
              operator: Exists
      name: aws-ebs-csi-driver
      version: latest
    iam:
      serviceAccounts:
      - attachPolicy:
          Statement:
          - Action:
            - ec2:CreateLaunchTemplate
            - ec2:CreateFleet
            - ec2:RunInstances
            - ec2:CreateTags
            - iam:PassRole
            - iam:CreateInstanceProfile
            - iam:TagInstanceProfile
            - iam:AddRoleToInstanceProfile
            - iam:RemoveRoleFromInstanceProfile
            - iam:DeleteInstanceProfile
            - ec2:DeleteLaunchTemplate
            - ec2:TerminateInstances
            - ec2:DescribeLaunchTemplates
            - ec2:DescribeSpotPriceHistory
            - ec2:DescribeImage
            - ec2:DescribeImages
            - ec2:DescribeInstances
            - ec2:DescribeSecurityGroups
            - ec2:DescribeSubnets
            - ec2:DescribeInstanceTypes
            - ec2:DescribeInstanceTypeOfferings
            - ec2:DescribeAvailabilityZones
            - ssm:GetParameter
            - eks:DescribeCluster
            - pricing:DescribeServices
            - pricing:GetAttributeValues
            - pricing:GetProducts
            - iam:GetInstanceProfile
            Effect: Allow
            Resource: '*'
          Version: "2012-10-17"
        metadata:
          name: karpenter
          namespace: karpenter
      withOIDC: true
    identityMappings:
      arns:
      - arn: "arn:aws:iam::<ACCOUNT-NUMBER>:role/KarpenterNodeRole-Rafay"
        group:
        - system:bootstrappers
        - system:nodes
        username: system:node:{{EC2PrivateDNSName}}
    managedNodeGroups:
    - amiFamily: AmazonLinux2
      desiredCapacity: 1
      instanceType: t3.large
      labels:
        nodes: infra
      maxSize: 2
      minSize: 0
      name: infra-nodegroup
      taints:
      - effect: NoSchedule
        key: nodeInfra
      version: "1.30"
    metadata:
      name: karpenter-cluster
      region: us-west-2
      tags:
        cluster-name: karpenter-cluster
        email: <EMAIL>
        env: <ENV>
      version: "1.30"
    vpc:
      autoAllocateIPv6: false
      cidr: 192.168.0.0/16
      clusterEndpoints:
        privateAccess: true
        publicAccess: false
  systemComponentsPlacement:
    nodeSelector:
      node: infra
    tolerations:
    - effect: NoSchedule
      key: nodeInfra
      operator: Exists
  type: aws-eks
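Since only the two quoted numeric "version" fields change during an upgrade, they can be bumped by hand or scripted. A minimal sketch, assuming the spec quotes numeric versions only in those two places (the `bump_k8s_version` helper is illustrative, not part of any tool):

```python
import re

def bump_k8s_version(spec_text: str, target: str = "1.30") -> str:
    """Replace every quoted numeric version field (e.g. version: "1.29")
    with the target release. Non-numeric values such as the blueprint's
    'v1' or the addons' 'latest' do not match and are left untouched."""
    return re.sub(r'version: "\d+\.\d+"', f'version: "{target}"', spec_text)

snippet = 'name: infra-nodegroup\nversion: "1.29"\nblueprint version: v1\n'
print(bump_k8s_version(snippet))  # version becomes "1.30"; 'v1' is unchanged
```

Run it against a copy of "cluster.yaml" and diff the result before applying, so only the intended fields change.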
Step 2: Upgrade Cluster
- Type the command below to start the upgrade of the EKS cluster
rctl apply -f cluster.yaml
If there are no errors, you will be presented with a "Task ID" that you can use to check progress/status. Note that this step can take ~30-45 minutes to complete.
[
  {
    "tasksetId": "d2wyryk",
    "tasksetOperations": [
      {
        "operationName": "ClusterUpgrade",
        "resourceName": "karpenter-cluster",
        "operationStatus": "PROVISION_TASK_STATUS_INPROGRESS"
      }
    ],
    "tasksetStatus": "PROVISION_TASKSET_STATUS_INPROGRESS",
    "comments": "Configuration is being applied to the cluster"
  },
  {
    "tasksetId": "emp9j02",
    "tasksetOperations": [
      {
        "operationName": "BlueprintUpdation",
        "resourceName": "karpenter-cluster",
        "operationStatus": "PROVISION_TASK_STATUS_SUCCESS"
      }
    ],
    "tasksetStatus": "PROVISION_TASKSET_STATUS_COMPLETE",
    "comments": "Configuration is applied to the cluster successfully"
  },
  {
    "tasksetId": "7kr6npk",
    "tasksetOperations": [
      {
        "operationName": "NodegroupScaling",
        "resourceName": "infra-nodegroup",
        "operationStatus": "PROVISION_TASK_STATUS_SUCCESS"
      },
      {
        "operationName": "BlueprintUpdation",
        "resourceName": "karpenter-cluster",
        "operationStatus": "PROVISION_TASK_STATUS_SUCCESS"
      }
    ],
    "tasksetStatus": "PROVISION_TASKSET_STATUS_COMPLETE",
    "comments": "Configuration is applied to the cluster successfully"
  },
  {
    "tasksetId": "d27990k",
    "tasksetOperations": [
      {
        "operationName": "BlueprintUpdation",
        "resourceName": "karpenter-cluster",
        "operationStatus": "PROVISION_TASK_STATUS_SUCCESS"
      }
    ],
    "tasksetStatus": "PROVISION_TASKSET_STATUS_COMPLETE",
    "comments": "Configuration is applied to the cluster successfully"
  },
  {
    "tasksetId": "gkj55z2",
    "tasksetOperations": [
      {
        "operationName": "BlueprintUpdation",
        "resourceName": "karpenter-cluster",
        "operationStatus": "PROVISION_TASK_STATUS_FAILED",
        "errorSummary": "rpc error: code = Unknown desc = error in getting blueprint-karpenter-blueprint version-latest object"
      }
    ],
    "tasksetStatus": "PROVISION_TASKSET_STATUS_FAILED",
    "comments": "There were problem(s) while applying configuration to the cluster"
  }
]
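Because the status output is a JSON array of tasksets, a failed operation (like the "BlueprintUpdation" failure in the sample above) can be surfaced programmatically instead of by eyeballing the output. A minimal sketch (the `failed_operations` helper is hypothetical, not an rctl feature):

```python
import json

def failed_operations(taskset_json: str):
    """Return (operationName, errorSummary) pairs for every operation
    whose status ends in FAILED, given rctl taskset status JSON."""
    failures = []
    for taskset in json.loads(taskset_json):
        for op in taskset.get("tasksetOperations", []):
            if op.get("operationStatus", "").endswith("FAILED"):
                failures.append((op["operationName"], op.get("errorSummary", "")))
    return failures

# Sample mirroring the failed taskset shown above
sample = json.dumps([{
    "tasksetId": "gkj55z2",
    "tasksetOperations": [{
        "operationName": "BlueprintUpdation",
        "resourceName": "karpenter-cluster",
        "operationStatus": "PROVISION_TASK_STATUS_FAILED",
        "errorSummary": "rpc error: code = Unknown desc = error in getting blueprint-karpenter-blueprint version-latest object"
    }],
    "tasksetStatus": "PROVISION_TASKSET_STATUS_FAILED"
}])
print(failed_operations(sample))
```

In-progress and successful operations are simply skipped, so an empty list means nothing has failed so far.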
- Navigate to the "defaultproject" project in your Org
- Click on Infrastructure -> Clusters. You should see something like the following
- Click on the cluster name, then navigate to the "Upgrade Jobs" tab. The upgrade can take ~45 minutes to complete
Step 3: Verify Cluster
Once the upgrade is complete, you should see the cluster in the web console with the updated Kubernetes version.
- Click on the kubectl link and type the following command
kubectl get nodes
You should see something like the following
NAME                                           STATUS   ROLES    AGE   VERSION
ip-192-168-10-52.us-west-2.compute.internal    Ready    <none>   32m   v1.30.2-eks-1552ad0
ip-192-168-76-220.us-west-2.compute.internal   Ready    <none>   33m   v1.30.2-eks-1552ad0
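The VERSION column is the check that matters: every node should report a kubelet version matching the upgraded control plane. A minimal sketch of that check over the `kubectl get nodes` text output (the `all_nodes_upgraded` helper is illustrative):

```python
def all_nodes_upgraded(kubectl_output: str, target: str = "v1.30") -> bool:
    """Return True if every node row in `kubectl get nodes` output
    reports a kubelet version starting with the target release."""
    rows = kubectl_output.strip().splitlines()[1:]  # skip the header row
    return all(row.split()[-1].startswith(target) for row in rows)

output = """NAME STATUS ROLES AGE VERSION
ip-192-168-10-52.us-west-2.compute.internal Ready <none> 32m v1.30.2-eks-1552ad0
ip-192-168-76-220.us-west-2.compute.internal Ready <none> 33m v1.30.2-eks-1552ad0"""
print(all_nodes_upgraded(output))  # → True
```

A False result usually means some nodes are still being replaced; re-run `kubectl get nodes` after the rollout finishes.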
Recap
Congratulations! At this point, you have successfully upgraded an Amazon EKS cluster with a managed nodegroup and additional nodes managed by Karpenter in your AWS account using the RCTL CLI.