Part 2: Scale

This is Part 2 of a multi-part, self-paced quick start exercise. In this part, you will scale the number of nodes in the AKS cluster using either the web console or the RCTL CLI; both methods are covered below.


What Will You Do

In this guide, you will:

  • Scale the number of nodes in a node pool
  • Add a "User" node pool to the cluster
  • Remove a "User" node pool from the cluster

Web Console

Step 1: Scale Nodes

In this step, we will scale the number of nodes in the cluster. You can scale up or down depending on your needs; in this example, we will scale the node pool up to 2 nodes.

  • Navigate to the previously created project in your Org
  • Select Infrastructure -> Clusters
  • Click on the cluster name of the previously created cluster
  • Click the "Node Pools" tab
  • Click the edit button on the existing node pool
  • Increase the "Min Count" and "Max Count" to "2"
  • Click "Save & Provision"


You will see the node pool begin to scale.


After a few minutes, from the web console, we can see that the number of nodes in the node pool has scaled to 2.

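If you have downloaded the cluster's kubeconfig, you can also confirm the new count from the command line. A minimal check, assuming kubectl is pointed at this cluster:

kubectl get nodes

The output should list two Ready nodes from the scaled node pool.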


Step 2: Add Node Pool

In this step, we will add a spot instance node pool to the cluster.

  • Navigate to the previously created project in your Org
  • Select Infrastructure -> Clusters
  • Click on the cluster name of the previously created cluster
  • Click the "Node Pools" tab
  • Click "Add Node Pool"
  • Enter a "Name" for the node pool
  • Select "User" for the "Mode"
  • Select the "K8s Version" to match the existing node pool
  • Select "Enable Spot Price"
  • Enter a "Spot Max Price"
  • Click "Save & Provision"


From the web console, we can see that the new node pool is being created. This could take up to 15 minutes to complete.


Monitor the web console until the node pool has been created.



Step 3: Remove Node Pool

In this step, we will remove the spot instance node pool from the cluster.

  • Navigate to the previously created project in your Org
  • Select Infrastructure -> Clusters
  • Click on the cluster name of the previously created cluster
  • Click the "Node Pools" tab
  • Click the delete button on the newly created node pool
  • Click "YES to confirm the node pool deletion

From the web console, we can see that the new node pool is being removed.


Monitor the web console until the node pool has been removed. You will only see one node pool remaining.



RCTL CLI

Step 1: Scale Nodes

In this step, we will scale the number of nodes in the cluster. You can scale up or down depending on your needs; in this example, we will scale the node pool up to 2 nodes.

Download the cluster config from the existing cluster

  • Go to Infrastructure -> Clusters. Click on the settings icon of the cluster and select "Download Cluster Config"
  • Update the "count", "maxCount" and "minCount" fields to "2" in the downloaded specification file
properties:
  count: 1
  enableAutoScaling: true
  maxCount: 1
  maxPods: 40
  minCount: 1
  mode: System
  orchestratorVersion: 1.23.8
  osType: Linux
  type: VirtualMachineScaleSets
  vmSize: Standard_DS2_v2
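
If you prefer to script this edit rather than changing the file by hand, here is a sketch using yq v4 (assuming yq is installed; the paths mirror the structure of the full spec shown below):

yq -i '
  .spec.clusterConfig.spec.nodePools[0].properties.count = 2 |
  .spec.clusterConfig.spec.nodePools[0].properties.minCount = 2 |
  .spec.clusterConfig.spec.nodePools[0].properties.maxCount = 2
' aks-get-started-cluster-config.yaml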

The updated YAML file will look like this:

apiVersion: rafay.io/v1alpha1
kind: Cluster
metadata:
  name: aks-get-started-cluster
  project: aks
spec:
  blueprint: default-aks
  cloudprovider: Azure-CC
  clusterConfig:
    apiVersion: rafay.io/v1alpha1
    kind: aksClusterConfig
    metadata:
      name: aks-get-started-cluster
    spec:
      managedCluster:
        apiVersion: "2022-07-01"
        identity:
          type: SystemAssigned
        location: centralindia
        properties:
          apiServerAccessProfile:
            enablePrivateCluster: true
          dnsPrefix: aks-get-started-cluster-dns
          kubernetesVersion: 1.23.8
          networkProfile:
            loadBalancerSku: standard
            networkPlugin: kubenet
        sku:
          name: Basic
          tier: Free
        type: Microsoft.ContainerService/managedClusters
      nodePools:
      - apiVersion: "2022-07-01"
        location: centralindia
        name: primary
        properties:
          count: 2
          enableAutoScaling: true
          maxCount: 2
          maxPods: 40
          minCount: 2
          mode: System
          orchestratorVersion: 1.23.8
          osType: Linux
          type: VirtualMachineScaleSets
          vmSize: Standard_DS2_v2
        type: Microsoft.ContainerService/managedClusters/agentPools
      resourceGroupName: Rafay-ResourceGroup
  proxyconfig: {}
  type: aks

  • Execute the following command to scale the number of nodes in the cluster node pool. Note: replace the file name in the command below with the name of your updated specification file.
./rctl apply -f aks-get-started-cluster-config.yaml

Expected output (with a task id):

{
  "taskset_id": "dkgy47k",
  "operations": [
    {
      "operation": "NodepoolEdit",
      "resource_name": "primary",
      "status": "PROVISION_TASK_STATUS_PENDING"
    }
  ],
  "comments": "The status of the operations can be fetched using taskset_id",
  "status": "PROVISION_TASKSET_STATUS_PENDING"
}
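
As the comments field notes, the returned taskset_id can be used to poll progress. A sketch, assuming your rctl build provides the status apply subcommand (the task id is the one returned above):

./rctl status apply dkgy47k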

After a few minutes, from the web console, we can see that the number of nodes in the node pool has scaled to 2.



Step 2: Add Node Pool

In this step, we will add a spot instance node pool to the cluster. We will modify the specification file that was applied in step 1.

  • Add the following node pool configuration code to the previously applied cluster specification file. Note: update the "location" field to match your environment (a note on spotMaxPrice follows the snippet).
- apiVersion: "2022-07-01"
  location: centralindia
  name: pool2
  properties:
    count: 1
    enableAutoScaling: true
    maxCount: 1
    maxPods: 40
    minCount: 1
    mode: User
    orchestratorVersion: 1.23.8
    osType: Linux
    scaleSetPriority: Spot
    spotMaxPrice: 0.03
    type: VirtualMachineScaleSets
    vmSize: Standard_DS2_v2
  type: Microsoft.ContainerService/managedClusters/agentPools
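
The spotMaxPrice field is the maximum hourly price, in US dollars, you are willing to pay per spot instance; in the Azure API, a value of -1 instead caps the price at the current on-demand rate. To sanity-check a cap such as 0.03, you can query the public Azure Retail Prices API; a sketch using curl (the OData filter fields are part of that API, and the SKU and region match this spec):

curl -G "https://prices.azure.com/api/retail/prices" \
  --data-urlencode "\$filter=armSkuName eq 'Standard_DS2_v2' and armRegionName eq 'centralindia' and priceType eq 'Consumption'"

Spot rates appear in the response as meters with "Spot" in their skuName.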

The fully updated cluster specification file including the newly added spot instance node pool code will look like this:

apiVersion: rafay.io/v1alpha1
kind: Cluster
metadata:
  name: aks-get-started-cluster
  project: aks
spec:
  blueprint: default-aks
  cloudprovider: Azure-CC
  clusterConfig:
    apiVersion: rafay.io/v1alpha1
    kind: aksClusterConfig
    metadata:
      name: aks-get-started-cluster
    spec:
      managedCluster:
        apiVersion: "2022-07-01"
        identity:
          type: SystemAssigned
        location: centralindia
        properties:
          apiServerAccessProfile:
            enablePrivateCluster: true
          dnsPrefix: aks-get-started-cluster-dns
          kubernetesVersion: 1.23.8
          networkProfile:
            loadBalancerSku: standard
            networkPlugin: kubenet
        sku:
          name: Basic
          tier: Free
        type: Microsoft.ContainerService/managedClusters
      nodePools:
      - apiVersion: "2022-07-01"
        location: centralindia
        name: primary
        properties:
          count: 2
          enableAutoScaling: true
          maxCount: 2
          maxPods: 40
          minCount: 2
          mode: System
          orchestratorVersion: 1.23.8
          osType: Linux
          type: VirtualMachineScaleSets
          vmSize: Standard_DS2_v2
        type: Microsoft.ContainerService/managedClusters/agentPools
      - apiVersion: "2022-07-01"
        location: centralindia
        name: pool2
        properties:
          count: 1
          enableAutoScaling: true
          maxCount: 1
          maxPods: 40
          minCount: 1
          mode: User
          orchestratorVersion: 1.23.8
          osType: Linux
          scaleSetPriority: Spot
          spotMaxPrice: 0.03
          type: VirtualMachineScaleSets
          vmSize: Standard_DS2_v2
        type: Microsoft.ContainerService/managedClusters/agentPools
      resourceGroupName: Rafay-ResourceGroup
  proxyconfig: {}
  type: aks

  • Execute the following command to create the spot instance node pool. Note: replace the file name in the command below with the name of your updated specification file.

./rctl apply -f aks-get-started-cluster-config.yaml

Expected output (with a task id):

{
  "taskset_id": "3mxzoo2",
  "operations": [
    {
      "operation": "NodegroupCreation",
      "resource_name": "pool2",
      "status": "PROVISION_TASK_STATUS_PENDING"
    },
    {
      "operation": "NodepoolEdit",
      "resource_name": "primary",
      "status": "PROVISION_TASK_STATUS_PENDING"
    }
  ],
  "comments": "The status of the operations can be fetched using taskset_id",
  "status": "PROVISION_TASKSET_STATUS_PENDING"
}

From the web console, we can see that the new node pool is being created. This could take up to 15 minutes to complete.


Monitor the web console until the node pool has been created.



Step 3: Remove Node Pool

In this step, we will remove the spot instance node pool from the cluster. We will modify the specification file that was applied in step 2 by simply deleting the node pool section that was added there.

  • Remove the following node pool configuration code from the previously applied cluster specification file (a scripted alternative follows the snippet):
- apiVersion: "2022-07-01"
  location: centralindia
  name: pool2
  properties:
    count: 1
    enableAutoScaling: true
    maxCount: 1
    maxPods: 40
    minCount: 1
    mode: User
    orchestratorVersion: 1.23.8
    osType: Linux
    scaleSetPriority: Spot
    spotMaxPrice: 0.03
    type: VirtualMachineScaleSets
    vmSize: Standard_DS2_v2
  type: Microsoft.ContainerService/managedClusters/agentPools
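
Equivalently, the section can be deleted with yq v4 (same assumption as in step 1 that yq is installed):

yq -i 'del(.spec.clusterConfig.spec.nodePools[] | select(.name == "pool2"))' aks-get-started-cluster-config.yaml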

The updated cluster specification file with the removed spot instance node pool code will look like this:

apiVersion: rafay.io/v1alpha1
kind: Cluster
metadata:
  name: aks-get-started-cluster
  project: aks
spec:
  blueprint: default-aks
  cloudprovider: Azure-CC
  clusterConfig:
    apiVersion: rafay.io/v1alpha1
    kind: aksClusterConfig
    metadata:
      name: aks-get-started-cluster
    spec:
      managedCluster:
        apiVersion: "2022-07-01"
        identity:
          type: SystemAssigned
        location: centralindia
        properties:
          apiServerAccessProfile:
            enablePrivateCluster: true
          dnsPrefix: aks-get-started-cluster-dns
          kubernetesVersion: 1.23.8
          networkProfile:
            loadBalancerSku: standard
            networkPlugin: kubenet
        sku:
          name: Basic
          tier: Free
        type: Microsoft.ContainerService/managedClusters
      nodePools:
      - apiVersion: "2022-07-01"
        location: centralindia
        name: primary
        properties:
          count: 1
          enableAutoScaling: true
          maxCount: 1
          maxPods: 40
          minCount: 1
          mode: System
          orchestratorVersion: 1.23.8
          osType: Linux
          type: VirtualMachineScaleSets
          vmSize: Standard_DS2_v2
        type: Microsoft.ContainerService/managedClusters/agentPools
      resourceGroupName: Rafay-ResourceGroup
  proxyconfig: {}
  type: aks

  • Execute the following command to remove the spot instance node pool. Note: replace the file name in the command below with the name of your updated specification file.

./rctl apply -f aks-get-started-cluster-config.yaml

Expected output (with a task id):

{
  "taskset_id": "dk6z70m",
  "operations": [
    {
      "operation": "NodegroupDeletion",
      "resource_name": "pool2",
      "status": "PROVISION_TASK_STATUS_PENDING"
    }
  ],
  "comments": "The status of the operations can be fetched using taskset_id",
  "status": "PROVISION_TASKSET_STATUS_PENDING"
}

From the web console, we can see that the new node pool is being removed.


Monitor the web console until the node pool has been removed. You will only see one node pool remaining.
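
You can also verify the removal from the command line; assuming kubectl access, the spot node label selector should now return no nodes:

kubectl get nodes -l kubernetes.azure.com/scalesetpriority=spot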


Recap

Congratulations! At this point, you have:

  • Successfully scaled a node pool to include the desired number of nodes
  • Successfully added a spot instance node pool to the cluster to take advantage of discounted compute resources
  • Successfully removed a spot instance node pool from the cluster