
Part 3: Blueprint

What Will You Do

In this part of the self-paced exercise, you will create a custom cluster blueprint with a Karpenter add-on, based on declarative specifications.


Step 1: Create Namespace

In this step, you will create a namespace for Karpenter. The "namespace.yaml" file, located in "/getstarted/karpenter/namespace", contains the declarative specification.

The following items may need to be updated/customized if you made changes to these or used alternate names.

  • value: karpenter-cluster (the name of your cluster)
kind: ManagedNamespace
apiVersion: config.rafay.dev/v2
metadata:
  name: karpenter
  description: namespace for karpenter
  labels:
  annotations:
spec:
  type: RafayWizard
  resourceQuota:
  placement:
    placementType: ClusterSpecific
    clusterLabels:
    - key: rafay.dev/clusterName
      value: karpenter-cluster
  • Open Terminal (on macOS/Linux) or Command Prompt (Windows) and navigate to the folder where you forked the Git repository
  • Navigate to the folder "/getstarted/karpenter/namespace"
  • Type the command below
rctl create namespace -f namespace.yaml

If you did not encounter any errors, you can optionally verify if everything was created correctly on the controller.

  • Navigate to the "defaultproject" project in your Org
  • Select Infrastructure -> Namespaces
  • You should see a namespace called "karpenter"
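
You can also verify from the command line. Assuming rctl is configured for the same project, a command along the following lines should list the namespace (rctl's get subcommands and output format can vary by version):

rctl get namespace karpenter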

Namespace


Step 2: Create Addons

In this step, you will create two custom addons: one for the Karpenter controller and a second for the Karpenter NodePool and EC2NodeClass. The specification files for this section are located in "/getstarted/karpenter/addon".

The following details are used to build the Karpenter addon declarative specification.

  • "v1" because this is our first version
  • The addon is part of the "defaultproject"
  • Name of addon is "karpenter-addon"
  • The addon will be deployed to a namespace called "karpenter"
  • You will be using a "custom-values.yaml" file as an override, located in the folder "/getstarted/karpenter/addon" (an illustrative sketch follows the spec below)
apiVersion: infra.k8smgmt.io/v3
kind: Addon
metadata:
  name: karpenter-addon
  project: defaultproject
spec:
  artifact:
    artifact:
      catalog: default-rafay
      chartName: karpenter
      chartVersion: v0.32.1
      valuesPaths:
      - name: file://custom-values.yaml
    options:
      maxHistory: 1
      timeout: 1m0s
    type: Helm
  namespace: karpenter
  sharing:
    enabled: false
  version: v1
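
For reference, the sketch below illustrates the kind of values a "custom-values.yaml" override for this chart version typically carries. It is an assumption for illustration only; the actual override file is in "/getstarted/karpenter/addon", and the role ARN is a placeholder for the Karpenter controller role from Part 1.

settings:
  clusterName: karpenter-cluster   # assumed: the cluster name used in this exercise
serviceAccount:
  annotations:
    # placeholder: substitute the IRSA role created for the Karpenter controller in Part 1
    eks.amazonaws.com/role-arn: arn:aws:iam::<account-id>:role/<karpenter-controller-role>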
  • Open Terminal (on macOS/Linux) or Command Prompt (Windows) and navigate to the folder where you forked the Git repository
  • Navigate to the folder "/getstarted/karpenter/addon"
  • Type the command below
rctl create addon version -f karpenter-addon.yaml --v3

If you did not encounter any errors, you can optionally verify if everything was created correctly on the controller.

  • Navigate to the "defaultproject" project in your Org
  • Select Infrastructure -> Addons
  • You should see an addon called "karpenter-addon"

Addon

Next, you will create the second custom addon for the Karpenter NodePool and EC2NodeClass.

Note that the role referenced below ("KarpenterNodeRole-Rafay") was created in Part 1.

Note that the "cluster-name" tag is set to match the name of the cluster and the AWS tags that were specified in the cluster spec file.

apiVersion: karpenter.sh/v1beta1
kind: NodePool
metadata:
  name: default
spec:
  template:
    spec:
      requirements:
        - key: kubernetes.io/arch
          operator: In
          values: ["amd64"]
        - key: kubernetes.io/os
          operator: In
          values: ["linux"]
        - key: karpenter.sh/capacity-type
          operator: In
          values: ["spot"]
        - key: karpenter.k8s.aws/instance-category
          operator: In
          values: ["t"]
        - key: karpenter.k8s.aws/instance-generation
          operator: Gt
          values: ["2"]
        - key: karpenter.k8s.aws/instance-size
          operator: In
          values: ["medium", "large", "xlarge"]
      nodeClassRef:
        name: default
  limits:
    cpu: 1000
  disruption:
    consolidationPolicy: WhenUnderutilized
    expireAfter: 720h # 30 * 24h = 720h
---
apiVersion: karpenter.k8s.aws/v1beta1
kind: EC2NodeClass
metadata:
  name: default
spec:
  tags:
    email: david@rafay.co
    env: dev
  amiFamily: AL2 # Amazon Linux 2
  role: "KarpenterNodeRole-Rafay"
  subnetSelectorTerms:
    - tags:
        cluster-name: "{{{ .global.Rafay.ClusterName }}}"
  securityGroupSelectorTerms:
    - tags:
        cluster-name: "{{{ .global.Rafay.ClusterName }}}"
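
Karpenter discovers which subnets and security groups to use by matching these tag selectors at provisioning time, so the "cluster-name" tag must be present on the underlying AWS resources. If you want to confirm the tags exist, the AWS CLI can do so (replace "karpenter-cluster" with your cluster name):

aws ec2 describe-subnets --filters "Name=tag:cluster-name,Values=karpenter-cluster" --query "Subnets[].SubnetId"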

The following details are used to build the NodePool addon declarative specification.

  • "v1" because this is our first version
  • The addon is part of the "defaultproject"
  • Name of addon is "nodepool-addon"
  • The addon will be deployed to a namespace called "karpenter"
  • You will be using the "nodepool.yaml" which is located in the folder "/getstarted/karpenter/addon"

The following items may need to be updated/customized if you made changes to these or used alternate names.

  • project: defaultproject
  • namespace: karpenter
kind: AddonVersion
metadata:
  name: v1
  project: defaultproject
spec:
  addon: nodepool-addon
  namespace: karpenter
  template:
    type: NativeYaml
    yamlFile: nodepool.yaml
  • Open Terminal (on macOS/Linux) or Command Prompt (Windows) and navigate to the folder where you forked the Git repository
  • Navigate to the folder "/getstarted/karpenter/addon"
  • Type the command below
rctl create addon version -f nodepool-addon.yaml

If you did not encounter any errors, you can optionally verify if everything was created correctly on the controller.

  • Navigate to the "defaultproject" project in your Org
  • Select Infrastructure -> Addons
  • You should see an addon called "nodepool-addon"

Addon


Step 3: Create Blueprint

In this step, you will create a custom cluster blueprint with the Karpenter addon and the Nodepool addon. The "blueprint.yaml" file contains the declarative specification.

  • Open Terminal (on macOS/Linux) or Command Prompt (Windows) and navigate to the folder where you forked the Git repository
  • Navigate to the folder "/getstarted/karpenter/blueprint"

The following items may need to be updated/customized if you made changes to these or used alternate names.

  • project: "defaultproject"
apiVersion: infra.k8smgmt.io/v3
kind: Blueprint
metadata:
  name: karpenter-blueprint
  project: defaultproject
spec:
  base:
    name: minimal
    version: 2.2.0
  type: custom
  customAddons:
  - name: karpenter-addon
    version: v1
  - name: nodepool-addon
    version: v1
  version: v1
  • Type the command below
rctl apply -f blueprint.yaml

If you did not encounter any errors, you can optionally verify if everything was created correctly on the controller.

  • Navigate to the "defaultproject" project in your Org
  • Select Infrastructure -> Blueprint
  • You should see a blueprint called "karpenter-blueprint"
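
As with the earlier resources, the CLI offers a quick cross-check; assuming rctl is pointed at "defaultproject", something like the following should show the blueprint (exact syntax may differ across rctl versions):

rctl get blueprint karpenter-blueprint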

Karpenter Blueprint


Step 4: Update Cluster Blueprint

In this step, you will update the cluster to use the previously created custom blueprint with the Karpenter addon and the NodePool addon.

  • Open Terminal (on macOS/Linux) or Command Prompt (Windows)
  • Type the command below
rctl update cluster karpenter-cluster --blueprint karpenter-blueprint --blueprint-version v1

If you did not encounter any errors, you can optionally verify if everything was updated correctly on the controller.

  • Navigate to the "defaultproject" project in your Org
  • Select Infrastructure -> Clusters
  • You should see the cluster is now using the "karpenter-blueprint" blueprint

Karpenter Blueprint

  • Navigate to Infrastructure -> Clusters
  • Click on "KUBECTL" in the "karpenter-cluster" cluster card
  • Type the command below
kubectl get pods --namespace karpenter
  • You should see a result like the following.
NAME                              READY   STATUS    RESTARTS   AGE
karpenter-addon-b6cb889dd-vfqtj   1/1     Running   0          6m39s
karpenter-addon-b6cb889dd-x47n9   1/1     Running   0          6m39s
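
Once the blueprint finishes syncing, the NodePool and EC2NodeClass from the second addon should be present as well. Both are cluster-scoped Karpenter custom resources, so a quick check (resource names follow the v1beta1 API used in the specs above) is:

kubectl get nodepools,ec2nodeclasses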

Recap

As of this step, you have created a "cluster blueprint" with the Karpenter controller and the Karpenter NodePool resources as addons, and applied this blueprint to the existing cluster. You are now ready to move on to the next step, where you will deploy a test workload to scale the cluster with Karpenter.