Part 4: Workload

What Will You Do

In this part of the self-paced exercise, you will deploy a test workload to your Amazon EKS cluster that will be used to change the load on the cluster and trigger Karpenter to scale the cluster up and down.


Step 1: Deploy Workload

In this step, you will create a workload on the cluster using the "inflate-workload.yaml" file which contains the declarative specification for our test workload.

The following items in the file may need to be updated/customized if you made changes earlier or used alternate names:

  • project: "defaultproject"
  • clusters: "karpenter-cluster"

The contents of "inflate-workload.yaml" are shown below.

name: inflate-workload
namespace: default
project: defaultproject
type: NativeYaml
clusters: karpenter-cluster
payload: inflate.yaml
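
The "payload" field points at the Kubernetes manifest that actually runs on the cluster. For orientation, below is a representative sketch of what "inflate.yaml" typically contains, assuming the common Karpenter test pattern of a pause-container Deployment with a CPU request; the copy in your forked repository is the authoritative version.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: inflate
spec:
  replicas: 0                  # starts at 0 so the cluster is initially unloaded
  selector:
    matchLabels:
      app: inflate
  template:
    metadata:
      labels:
        app: inflate
    spec:
      containers:
        - name: inflate
          # the pause image does nothing except hold its resource reservation
          image: public.ecr.aws/eks-distro/kubernetes/pause:3.7
          resources:
            requests:
              cpu: 1           # each replica reserves one vCPU, forcing scale-out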
  • Open Terminal (on macOS/Linux) or Command Prompt (Windows) and navigate to the folder containing your local clone of the forked Git repository
  • Navigate to the folder "/getstarted/karpenter/workload"
  • Type the command below
rctl create workload inflate-workload.yaml

If there were no errors, you should see a message like the one below.

Workload created successfully

Now, let us publish the newly created workload to the EKS cluster. The workload can be deployed to multiple clusters as per the configured "placement policy". In this case, you are deploying to a single EKS cluster with the name "karpenter-cluster".
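
For example, if you later wanted to target more than one cluster with a single workload, the spec's "clusters" field accepts a comma-separated list (the second cluster name below is purely illustrative):

clusters: karpenter-cluster,second-cluster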

rctl publish workload inflate-workload
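
Publishing takes a few moments to complete. You can optionally poll the publish status from the terminal; the command below assumes your rctl version includes the "status" subcommand (check "rctl --help" if it does not).

rctl status workload inflate-workload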

In the web console, click on Applications -> Workloads. You should see something like the following.

Published Workload


Step 2: Scale Workload

The test workload can be scaled to consume the resources of the cluster. Once the cluster resources are constrained, Karpenter will increase the number of nodes in the cluster as needed.
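
Which instance types Karpenter may launch, and how quickly empty nodes are reclaimed, is governed by the Provisioner configured earlier in this exercise. A minimal sketch is shown below for orientation, assuming the v1alpha5 Karpenter API that matches this cluster's Kubernetes 1.23 vintage; your actual Provisioner may differ.

apiVersion: karpenter.sh/v1alpha5
kind: Provisioner
metadata:
  name: default
spec:
  requirements:
    # illustrative constraint: on-demand capacity only
    - key: karpenter.sh/capacity-type
      operator: In
      values: ["on-demand"]
  limits:
    resources:
      cpu: "1000"              # cap on total CPU Karpenter may provision
  providerRef:
    name: default              # AWSNodeTemplate holding subnet/SG selectors
  ttlSecondsAfterEmpty: 30     # how soon empty nodes are removed (drives the scale-down below)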

  • Navigate to Infrastructure -> Clusters
  • Click on "KUBECTL" in the cluster card
  • Verify the number of "inflate" pods is 0
kubectl get deployments
  • You should see a result like the following showing the inflate deployment with 0 pods running.
NAME      READY   UP-TO-DATE   AVAILABLE   AGE
inflate   0/0     0            0           62s
  • Verify the number of nodes
kubectl get nodes
  • You should see a result like the following showing 3 nodes.
NAME                                           STATUS   ROLES    AGE     VERSION
ip-192-168-16-236.us-west-2.compute.internal   Ready    <none>   3h32m   v1.23.15-eks-49d8fe8
ip-192-168-50-183.us-west-2.compute.internal   Ready    <none>   3h32m   v1.23.15-eks-49d8fe8
ip-192-168-64-134.us-west-2.compute.internal   Ready    <none>   3h32m   v1.23.15-eks-49d8fe8
  • Scale the number of inflate replicas up to increase the load on the cluster (to watch Karpenter react in real time, see the tip after this list)
kubectl scale deployment inflate --replicas 5
  • To verify that the deployment was scaled up successfully
kubectl get pod -n default
  • You should see a result like the following showing 5 inflate pods
NAME                       READY   STATUS    RESTARTS   AGE
inflate-5688b4d994-5zpw6   0/1     Pending   0          29s
inflate-5688b4d994-7gp9f   0/1     Pending   0          29s
inflate-5688b4d994-9glvd   0/1     Pending   0          29s
inflate-5688b4d994-gzghj   0/1     Pending   0          29s
inflate-5688b4d994-l2sxz   0/1     Pending   0          29s
  • Verify the number of nodes again
kubectl get nodes
  • You should see a result like the following showing 5 nodes, including the new nodes Karpenter provisioned
NAME                                           STATUS   ROLES    AGE     VERSION
ip-192-168-105-43.us-west-2.compute.internal   Ready    <none>   85s     v1.23.15-eks-49d8fe8
ip-192-168-124-94.us-west-2.compute.internal   Ready    <none>   85s     v1.23.15-eks-49d8fe8
ip-192-168-16-236.us-west-2.compute.internal   Ready    <none>   3h34m   v1.23.15-eks-49d8fe8
ip-192-168-50-183.us-west-2.compute.internal   Ready    <none>   3h34m   v1.23.15-eks-49d8fe8
ip-192-168-64-134.us-west-2.compute.internal   Ready    <none>   3h34m   v1.23.15-eks-49d8fe8
  • Verify that the inflate pods are now running on the newly provisioned node capacity
kubectl get pod -n default
  • You should see a result like the following showing 5 inflate pods in a running state
NAME                       READY   STATUS    RESTARTS   AGE
inflate-7d57f774d4-92njk   1/1     Running   0          87s
inflate-7d57f774d4-bm8kv   1/1     Running   0          87s
inflate-7d57f774d4-nkxg5   1/1     Running   0          87s
inflate-7d57f774d4-v8tkq   1/1     Running   0          87s
inflate-7d57f774d4-vd4xw   1/1     Running   0          87s
  • Scale the number of inflate replicas down to decrease workload
kubectl scale deployment inflate --replicas 0
  • To verify that the deployment was scaled down successfully
kubectl get pod -n default
  • You should see results like the following showing 0 pods.
No resources found in default namespace.
  • After a few minutes, you will see that Karpenter has removed the now-empty nodes and the node count has returned to 3.
kubectl get nodes
NAME                                           STATUS   ROLES    AGE     VERSION
ip-192-168-16-236.us-west-2.compute.internal   Ready    <none>   3h36m   v1.23.15-eks-49d8fe8
ip-192-168-50-183.us-west-2.compute.internal   Ready    <none>   3h36m   v1.23.15-eks-49d8fe8
ip-192-168-64-134.us-west-2.compute.internal   Ready    <none>   3h36m   v1.23.15-eks-49d8fe8
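
To watch these scaling decisions as they happen (the tip referenced earlier), run either of the commands below in a second KUBECTL session while you scale the deployment. The namespace, label, and container name are assumptions based on a standard Karpenter installation; adjust them to match yours.

# stream node additions and removals as they occur
kubectl get nodes -w

# follow the Karpenter controller's provisioning/deprovisioning decisions
kubectl logs -f -n karpenter -l app.kubernetes.io/name=karpenter -c controller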

Recap

Congratulations! At this point, you have successfully

  • Deployed and scaled a test workload on the EKS cluster
  • Verified that Karpenter automatically adjusted the number of nodes in the cluster