Part 2: Provision

What Will You Do

In this part of the self-paced exercise, you will provision an Amazon EKS cluster from a declarative cluster specification using the minimal blueprint.


Step 1: Cluster Spec

  • Open a terminal (macOS/Linux) or Command Prompt (Windows) and navigate to the folder where you cloned your fork of the Git repository
  • Navigate to the folder "/getstarted/karpenter/cluster"

The "cluster.yaml" file contains the declarative specification for our Amazon EKS Cluster.

Cluster Details

In the cluster spec file, we define a tag with the cluster name that will automatically be applied to the AWS cluster resources during cluster creation. This tag will be used by the Karpenter provisioner to identify associated resources.
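For context, a later Karpenter configuration can select the AWS resources it manages by matching this tag. The fragment below is illustrative only (the actual provisioner is configured in a later part of this exercise):

```yaml
# Illustrative fragment: Karpenter selectors that discover subnets and
# security groups by the tag defined in the cluster spec above.
subnetSelector:
  cluster-name: karpenter-cluster
securityGroupSelector:
  cluster-name: karpenter-cluster
```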

cluster-name: karpenter-cluster

The following line must be updated to match the ARN of the previously created IAM role. This can be achieved by replacing ACCOUNT-NUMBER with your AWS account number.

- arn: "arn:aws:iam::<ACCOUNT-NUMBER>:role/KarpenterNodeRole-Rafay"
The following items may need to be updated if you changed these values or used alternate names:

  • metadata.name: "karpenter-cluster"
  • metadata.project: "defaultproject"
  • spec.cloudCredentials: "aws-cloud-credential"
  • spec.config.metadata.name: "karpenter-cluster"
  • spec.config.metadata.region: "us-west-2"
apiVersion: infra.k8smgmt.io/v3
kind: Cluster
metadata:
  name: karpenter-cluster
  project: defaultproject
spec:
  blueprintConfig:
    name: minimal
    version: latest
  cloudCredentials: aws-cloud-credential
  config:
    iam:
      withOIDC: true
      serviceAccounts:
      - attachPolicy:
          Statement:
          - Action:
            # Write Operations
            - "ec2:CreateLaunchTemplate"
            - "ec2:CreateFleet"
            - "ec2:RunInstances"
            - "ec2:CreateTags"
            - "iam:PassRole"
            - "iam:CreateInstanceProfile"
            - "iam:TagInstanceProfile"
            - "iam:AddRoleToInstanceProfile"
            - "iam:RemoveRoleFromInstanceProfile"
            - "iam:DeleteInstanceProfile"
            - "ec2:DeleteLaunchTemplate"
            - "ec2:TerminateInstances"
            # Read Operations
            - "ec2:DescribeLaunchTemplates"
            - "ec2:DescribeSpotPriceHistory"
            - "ec2:DescribeImage"
            - "ec2:DescribeImages"
            - "ec2:DescribeInstances"
            - "ec2:DescribeSecurityGroups"
            - "ec2:DescribeSubnets"
            - "ec2:DescribeInstanceTypes"
            - "ec2:DescribeInstanceTypeOfferings"
            - "ec2:DescribeAvailabilityZones"
            - "ssm:GetParameter"
            - "eks:DescribeCluster"
            - "pricing:DescribeServices"
            - "pricing:GetAttributeValues"
            - "pricing:GetProducts"
            - "iam:GetInstanceProfile"
            Effect: Allow
            Resource: "*"
          Version: "2012-10-17"
        metadata:
          name: karpenter
          namespace: karpenter  
    identityMappings:
      arns:
      - arn: "arn:aws:iam::<ACCOUNT-NUMBER>:role/KarpenterNodeRole-Rafay"
        username: system:node:{{EC2PrivateDNSName}}
        group:
        - system:bootstrappers
        - system:nodes    
    managedNodeGroups:
    - amiFamily: AmazonLinux2
      desiredCapacity: 2
      instanceTypes:
      - t3.medium
      - t3.large
      maxSize: 4
      minSize: 2
      name: managed-spot
      spot: true
    - amiFamily: AmazonLinux2
      desiredCapacity: 1
      instanceType: t3.large
      labels:
        nodes: system
      maxSize: 2
      minSize: 1
      name: managed-system
      taints:
      - effect: NoSchedule
        key: components
        value: system
    metadata:
      name: karpenter-cluster
      region: us-west-2
      tags:
        cluster-name: karpenter-cluster
        email: tim@rafay.co
        env: qa
      version: "1.27"
    network:
      cni:
        name: aws-cni
    vpc:
      autoAllocateIPv6: false
      clusterEndpoints:
        privateAccess: true
        publicAccess: false
      cidr: 192.168.0.0/16
  systemComponentsPlacement:
    nodeSelector:
      nodes: system
    tolerations:
    - effect: NoSchedule
      key: components
      operator: Equal
      value: system
  type: aws-eks

Step 2: Provision Cluster

  • Type the command below to provision the EKS cluster
rctl apply -f cluster.yaml

If there are no errors, you will be presented with a "Task ID" that you can use to check progress/status. Note that this step requires creation of infrastructure in your AWS account and can take ~30-40 minutes to complete.

{
  "taskset_id": "6kn1dlm",
  "operations": [
    {
      "operation": "ClusterCreation",
      "resource_name": "karpenter-cluster",
      "status": "PROVISION_TASK_STATUS_PENDING"
    },
    {
      "operation": "NodegroupCreation",
      "resource_name": "managed-spot",
      "status": "PROVISION_TASK_STATUS_PENDING"
    },
    {
      "operation": "NodegroupCreation",
      "resource_name": "managed-system",
      "status": "PROVISION_TASK_STATUS_PENDING"
    },
    {
      "operation": "BlueprintSync",
      "resource_name": "karpenter-cluster",
      "status": "PROVISION_TASK_STATUS_PENDING"
    }
  ],
  "comments": "The status of the operations can be fetched using taskset_id",
  "status": "PROVISION_TASKSET_STATUS_PENDING"
}
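While the operations are in flight, you can watch for pending ones mechanically. The snippet below greps a saved copy of the status JSON; the file path, the trimmed sample, and the `..._COMPLETE` status value are illustrative assumptions:

```shell
# Save the rctl status output to a file, then count operations whose
# status is still pending. (Sample trimmed to two operations; the
# COMPLETE value is assumed for illustration.)
cat > /tmp/provision-status.json <<'EOF'
{
  "taskset_id": "6kn1dlm",
  "operations": [
    { "operation": "ClusterCreation", "status": "PROVISION_TASK_STATUS_PENDING" },
    { "operation": "BlueprintSync",   "status": "PROVISION_TASK_STATUS_COMPLETE" }
  ]
}
EOF
grep -c 'PROVISION_TASK_STATUS_PENDING' /tmp/provision-status.json
```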
  • Navigate to the "defaultproject" project in your Org
  • Click on Infrastructure -> Clusters. You should see something like the following

Provisioning in Process

  • Click on the cluster name to monitor progress

Provisioning in Process


Step 3: Verify Cluster

Once provisioning is complete, you should see the cluster in the web console.

Provisioned Cluster

  • Click on the kubectl link and type the following command
kubectl get nodes

You should see output similar to the following (the node versions will reflect your cluster's Kubernetes version)

NAME                                           STATUS   ROLES    AGE   VERSION
ip-192-168-0-191.us-west-2.compute.internal    Ready    <none>   16m   v1.23.15-eks-49d8fe8
ip-192-168-28-126.us-west-2.compute.internal   Ready    <none>   16m   v1.23.15-eks-49d8fe8
ip-192-168-80-236.us-west-2.compute.internal   Ready    <none>   16m   v1.23.15-eks-49d8fe8
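As a quick sanity check, the STATUS column can be scanned mechanically. A sketch, run here against the sample output above; on a live cluster, pipe `kubectl get nodes --no-headers` into the same awk filter:

```shell
# Flag any node whose STATUS column (field 2) is not "Ready".
# The sample lines mirror the kubectl output above; on a live cluster:
#   kubectl get nodes --no-headers | awk '$2 != "Ready" { print $1; bad=1 } END { exit bad }'
printf '%s\n' \
  'ip-192-168-0-191.us-west-2.compute.internal    Ready    <none>   16m   v1.23.15-eks-49d8fe8' \
  'ip-192-168-28-126.us-west-2.compute.internal   Ready    <none>   16m   v1.23.15-eks-49d8fe8' \
  'ip-192-168-80-236.us-west-2.compute.internal   Ready    <none>   16m   v1.23.15-eks-49d8fe8' \
  | awk '$2 != "Ready" { print $1; bad=1 } END { exit bad }' \
  && echo "all nodes Ready"
```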

Recap

Congratulations! At this point, you have successfully used the RCTL CLI to provision an Amazon EKS cluster in your AWS account with two managed node groups (a spot-backed group and an on-demand system group), ready for Karpenter.