Examples

Infra GitOps using RCTL

Here are some examples of provisioning and managing AKS clusters through Infra GitOps using RCTL.

Below is a sample cluster config YAML file that uses a system-assigned managed identity, which is the default. With this identity type, Azure automatically creates an identity for the AKS cluster and assigns it to the underlying Azure resources.

System-Assigned Managed Identity Example

apiVersion: rafay.io/v1alpha1
kind: Cluster
metadata:
  name: demo-akscluster
  project: defaultproject
spec:
  blueprint: default-aks
  blueprintversion: latest
  cloudprovider: akscredentials
  clusterConfig:
    apiVersion: rafay.io/v1alpha1
    kind: aksClusterConfig
    metadata:
      name: demo-akscluster
    spec:
      managedCluster:
        additionalMetadata:
          acrProfile:
            acrName: demo_registry
            resourceGroupName: demoresoursegroup
        apiVersion: "2021-05-01"
        identity:
          type: SystemAssigned
        location: eastus2
        properties:
          apiServerAccessProfile:
            enablePrivateCluster: false
          dnsPrefix: demo-akscluster-dns
          kubernetesVersion: 1.21.2
          networkProfile:
            loadBalancerSku: standard
            networkPlugin: kubenet
            networkPolicy: calico
        sku:
          name: Basic
          tier: Free
        tags:
          role: demo
        type: Microsoft.ContainerService/managedClusters
      nodePools:
      - apiVersion: "2021-05-01"
        location: eastus2
        name: primary
        properties:
          count: 1
          enableAutoScaling: true
          maxCount: 2
          maxPods: 40
          minCount: 1
          mode: System
          nodeLabels:
            testdemo: demoworker
          orchestratorVersion: 1.21.2
          osType: Linux
          type: VirtualMachineScaleSets
          vmSize: Standard_DS2_v2
        type: Microsoft.ContainerService/managedClusters/agentPools
      resourceGroupName: demoresoursegroup
  proxyconfig: {}
  type: aks

Command to create a cluster using the config file:

./rctl apply -f <config-file.yaml>

Example:

./rctl apply -f demo-akscluster.yaml

Expected output (with a task id):

{
  "taskset_id": "dk3lekn",
  "operations": [
    {
      "operation": "NodegroupCreation",
      "resource_name": "primary",
      "status": "PROVISION_TASK_STATUS_PENDING"
    },
    {
      "operation": "ClusterCreation",
      "resource_name": "demo-akscluster",
      "status": "PROVISION_TASK_STATUS_PENDING"
    }
  ],
  "comments": "The status of the operations can be fetched using taskset_id",
  "status": "PROVISION_TASKSET_STATUS_PENDING"
}

To check the status of the cluster creation operation, run the command below with the generated task ID:

./rctl status apply dk3lekn

Expected output

{
  "taskset_id": "dk3lekn",
  "operations": [
    {
      "operation": "NodegroupCreation",
      "resource_name": "primary",
      "status": "PROVISION_TASK_STATUS_SUCCESS"
    },
    {
      "operation": "ClusterCreation",
      "resource_name": "demo-akscluster",
      "status": "PROVISION_TASK_STATUS_SUCCESS"
    }
  ],
  "comments": "Configuration is applied to the cluster successfully",
  "status": "PROVISION_TASKSET_STATUS_COMPLETE"
}

User-Assigned Managed Identity Example

Below is an example YAML configuration file for a reference AKS cluster that uses a user-assigned managed identity. With user-assigned managed identities, you can create and manage identities independently of the AKS cluster. These identities can then be associated with one or more AKS clusters, enabling seamless identity reuse across multiple AKS clusters or other Azure resources.

apiVersion: rafay.io/v1alpha1
kind: Cluster
metadata:
  name: azure-aks-demo
  project: demo
spec:
  blueprint: minimal
  cloudprovider: azure
  clusterConfig:
    apiVersion: rafay.io/v1alpha1
    kind: aksClusterConfig
    metadata:
      name: azure-aks-demo
    spec:
      managedCluster:
        apiVersion: "2022-07-01"
        identity:
          type: UserAssigned
          userAssignedIdentities:
            ? /subscriptions/a2252eb2-7a25-432b-a5ec-e18eba6f26b1/resourceGroups/demo/providers/Microsoft.ManagedIdentity/userAssignedIdentities/demo-mgi-cli
            : {}
        location: centralindia
        properties:
          apiServerAccessProfile:
            enablePrivateCluster: true
          dnsPrefix: azure-aks-demo-dns
          kubernetesVersion: 1.26.0
          networkProfile:
            dnsServiceIP: 10.1.0.10
            dockerBridgeCidr: 172.17.0.1/16
            loadBalancerSku: standard
            networkPlugin: azure
            serviceCidr: 10.1.0.0/16
        sku:
          name: Basic
          tier: Free
        tags:
          email: demo@rafay.co
          env: qa
        type: Microsoft.ContainerService/managedClusters
      nodePools:
      - apiVersion: "2022-07-01"
        location: centralindia
        name: primary
        properties:
          count: 1
          enableAutoScaling: true
          maxCount: 1
          maxPods: 110
          minCount: 1
          mode: System
          orchestratorVersion: 1.26.0
          osType: Linux
          type: VirtualMachineScaleSets
          vmSize: Standard_DS2_v2
          vnetSubnetID: /subscriptions/a2252eb2-7a25-432b-a5ec-e18eba6f26b1/resourceGroups/demo/providers/Microsoft.Network/virtualNetworks/demo-vnet1/subnets/default
        type: Microsoft.ContainerService/managedClusters/agentPools
      resourceGroupName: demo
  proxyconfig: {}
  type: aks

In this example, we have configured the AKS cluster to use a user-assigned managed identity.

  • type: UserAssigned: This line indicates that a user-assigned managed identity is being used for the AKS cluster.

  • userAssignedIdentities: This specifies the resource ID of the user-assigned managed identity to associate with the AKS cluster, mapped to an empty object ({}). In this case, the identity is /subscriptions/a2252eb2-7a25-432b-a5ec-e18eba6f26b1/resourceGroups/demo/providers/Microsoft.ManagedIdentity/userAssignedIdentities/demo-mgi-cli

To create a cluster using this config file, you can use the same rctl apply command.
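For example, assuming the config above is saved as azure-aks-demo.yaml:

./rctl apply -f azure-aks-demo.yaml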


Azure CNI Overlay Example

apiVersion: infra.k8smgmt.io/v3
kind: Cluster
metadata:
  modifiedAt: "2024-05-14T05:34:52.871381Z"
  name: demo-overlay-aks
  project: default
spec:
  blueprintConfig:
    name: minimal
  cloudCredentials: azure
  config:
    kind: aksClusterConfig
    metadata:
      name: demo-overlay-aks
    spec:
      managedCluster:
        apiVersion: "2024-01-01"
        identity:
          type: SystemAssigned
        location: centralindia
        properties:
          apiServerAccessProfile:
            enablePrivateCluster: true
          dnsPrefix: demo-overlay-aks-dns
          enableRBAC: true
          kubernetesVersion: 1.29.4
          networkProfile:
            dnsServiceIP: 10.0.0.10
            loadBalancerSku: standard
            networkPlugin: azure
            networkPluginMode: overlay
            podCidr: 10.244.0.0/16
            serviceCidr: 10.0.0.0/16
          powerState:
            code: Running
        sku:
          name: Base
          tier: Free
        type: Microsoft.ContainerService/managedClusters
      nodePools:
      - apiVersion: "2024-01-01"
        name: primary
        properties:
          count: 1
          enableAutoScaling: true
          maxCount: 1
          maxPods: 110
          minCount: 1
          mode: System
          orchestratorVersion: 1.29.4
          osType: Linux
          type: VirtualMachineScaleSets
          vmSize: Standard_B4ms
        type: Microsoft.ContainerService/managedClusters/agentPools
      resourceGroupName: demo-rg
  type: aks

In this example, we have configured Azure CNI in overlay mode.

  • apiVersion: "2024-01-01": This parameter should be set to a date on or after January 1, 2024, when configuring Azure CNI in overlay mode
  • networkPluginMode: overlay: This line indicates that the network plugin mode for the Azure CNI is set to "overlay"
  • podCidr: 10.244.0.0/16: podCidr is mandatory when configuring Azure CNI in overlay mode. This parameter specifies the CIDR block used for assigning IP addresses to pods within the overlay network (see the fragment below)
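For reference, this is the networkProfile fragment from the config above that enables overlay mode:

networkProfile:
  dnsServiceIP: 10.0.0.10
  loadBalancerSku: standard
  networkPlugin: azure
  networkPluginMode: overlay
  podCidr: 10.244.0.0/16
  serviceCidr: 10.0.0.0/16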

Important

  • After the release of apiVersion 2023-02-01, the Paid SKU tier is deprecated in favor of Premium. For older clusters, the previously configured tier will remain visible in the configuration specifications.
  • After the release of apiVersion 2023-02-01, the dockerBridgeCidr setting in the network configuration of managedClusters is deprecated. For older clusters, the previously configured dockerBridgeCidr will remain visible in the configuration specifications.
  • Ensure that the apiVersion in the managedCluster configuration matches the apiVersion in the nodepool configuration to prevent potential failures during nodepool operations.

Auto Upgrade for Cluster and Nodepools

Auto-upgrades can be performed as both Day-0 and Day-2 operations via RCTL, Terraform, and GitOps. Below is an example AKS cluster configuration with managed auto-upgrade enabled.

apiVersion: infra.k8smgmt.io/v3
kind: Cluster
metadata:
  modifiedAt: "2024-06-23T09:32:23.454579Z"
  name: demo-aks-cluster
  project: defaultproject
spec:
  blueprintConfig:
    name: default-aks
  cloudCredentials: aks1
  config:
    kind: aksClusterConfig
    metadata:
      name: demo-aks-cluster
    spec:
      maintenanceConfigurations:
      - apiVersion: "2024-01-01"
        name: aksManagedAutoUpgradeSchedule
        properties:
          maintenanceWindow:
            durationHours: 6
            schedule:
              weekly:
                dayOfWeek: Monday
                intervalWeeks: 1
            startDate: "2024-06-23"
            startTime: "12:00"
        type: Microsoft.ContainerService/managedClusters/maintenanceConfigurations
      managedCluster:
        apiVersion: "2024-01-01"
        identity:
          type: SystemAssigned
        location: centralindia
        properties:
          apiServerAccessProfile:
            enablePrivateCluster: false
          autoUpgradeProfile:
            nodeOsUpgradeChannel: None
            upgradeChannel: patch
          dnsPrefix: demo-aks-cluster-dns
          enableRBAC: true
          kubernetesVersion: 1.27.9
          networkProfile:
            loadBalancerSku: standard
            networkPlugin: kubenet
            networkPolicy: calico
          powerState:
            code: Running
        sku:
          name: Base
          tier: Free
        type: Microsoft.ContainerService/managedClusters
      nodePools:
      - apiVersion: "2024-01-01"
        name: primary
        properties:
          count: 2
          enableAutoScaling: true
          maxCount: 2
          maxPods: 110
          minCount: 2
          mode: System
          orchestratorVersion: 1.27.9
          osType: Linux
          type: VirtualMachineScaleSets
          vmSize: Standard_B4ms
        type: Microsoft.ContainerService/managedClusters/agentPools
      resourceGroupName: demo-rg
  type: aks

In this example, we have configured the maintenanceConfigurations and autoUpgradeProfile (nodeOsUpgradeChannel and upgradeChannel). Users can specify the maintenance configuration name as aksManagedAutoUpgradeSchedule, aksManagedNodeOSUpgradeSchedule, or default, as per the requirement; a sketch of a second schedule is shown below.
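For instance, a node OS upgrade schedule can be declared alongside the auto-upgrade schedule. This is a minimal sketch that reuses the weekly maintenanceWindow schema from the config above; the window values are illustrative:

maintenanceConfigurations:
- apiVersion: "2024-01-01"
  name: aksManagedNodeOSUpgradeSchedule
  properties:
    maintenanceWindow:
      durationHours: 4
      schedule:
        weekly:
          dayOfWeek: Sunday
          intervalWeeks: 1
      startDate: "2024-06-23"
      startTime: "00:00"
  type: Microsoft.ContainerService/managedClusters/maintenanceConfigurations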

Planned Maintenance Configuration

AKS supports scheduled auto-upgrades through planned maintenance configurations. This feature allows for automatic execution of both AKS-initiated and user-initiated maintenance operations according to a chosen cadence. While scheduled maintenance can be used to time automatic upgrades, enabling or disabling planned maintenance does not affect the availability of automatic upgrades.

Node OS Images and K8s Version

AKS offers multiple auto-upgrade channels for timely node-level OS security updates. These channels provide flexibility and a customized strategy for managing node-level OS security.

  • nodeOsUpgradeChannel: Controls the auto-upgrade channel for the node OS image (NodeImageVersion). The available channels are:

    • None: No automatic upgrades are performed
    • Unmanaged: Allows manual control over upgrades
    • NodeImage: Offers updates to the entire node image
    • SecurityPatch: Focuses on timely security updates

  • upgradeChannel: Enables simultaneous auto-upgrade of the Kubernetes version for both the control plane and attached node pools. The supported values are rapid, stable, patch, node-image, and none. This ensures clusters are consistently updated with the latest features and patches from AKS. A fragment combining both parameters is shown below.
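As an illustration, to apply full node-image updates to the node OS while tracking Kubernetes patch releases, the autoUpgradeProfile in the cluster spec above could be set as follows (channel values are taken from the lists above):

autoUpgradeProfile:
  nodeOsUpgradeChannel: NodeImage
  upgradeChannel: patch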

Important

  • To configure the Maintenance Configuration, the API version must be equal to or greater than 2023-05-01
  • The SecurityPatch channel is not supported on Windows OS node pools
  • Setting the upgradeChannel to 'node-image' automatically sets the nodeOsUpgradeChannel to 'NodeImage' if the apiVersion is 2023-11-02-preview or later
  • nodeOsUpgradeChannel is supported starting from API version 2023-06-01 and later
  • To perform Day 2 operations on the autoUpgradeProfile, please specify both upgradeChannel and nodeOsUpgradeChannel.

Cluster Sharing

For cluster sharing, add a sharing block to the cluster config (Rafay spec), as shown in the config file below.

apiVersion: rafay.io/v1alpha1
kind: Cluster
metadata:
  name: aks-democluster
  project: defaultproject
spec:
  blueprint: bp-aks
  blueprintversion: v5
  cloudprovider: cp_aks
  sharing:
    enabled: true
    projects:
    - name: "demoproject1"
    - name: "demoproject2"
  clusterConfig:
    apiVersion: rafay.io/v1alpha1
    kind: aksClusterConfig
    metadata:
      name: aks-democluster
    spec:
      managedCluster:
        apiVersion: "2021-05-01"
        identity:
          type: SystemAssigned
        location: centralindia
        properties:
          apiServerAccessProfile:
            enablePrivateCluster: false
          dnsPrefix: aks-dns-demo
          kubernetesVersion: 1.22.11
          networkProfile:
            loadBalancerSku: standard
            networkPlugin: kubenet
        sku:
          name: Basic
          tier: Free
        type: Microsoft.ContainerService/managedClusters
      nodePools:
      - apiVersion: "2021-05-01"
        location: centralindia
        name: primary
        properties:
          count: 1
          enableAutoScaling: true
          maxCount: 1
          maxPods: 40
          minCount: 1
          mode: System
          orchestratorVersion: 1.22.11
          osType: Linux
          type: VirtualMachineScaleSets
          vmSize: Standard_DS2_v2
        type: Microsoft.ContainerService/managedClusters/agentPools
      resourceGroupName: demo_group
  proxyconfig: {}
  type: aks

You can also use the wildcard operator "*" to share the cluster across all projects:

sharing:
  enabled: true
  projects:
  - name: "*"

Note: When passing the wildcard operator, users cannot pass other project names.

To remove cluster sharing from one or more projects, remove those specific project names from the sharing block and run the apply command, as shown in the example below.
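For example, to stop sharing with demoproject2 from the earlier config, the sharing block would become:

sharing:
  enabled: true
  projects:
  - name: "demoproject1"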


Update Blueprint

Make the required blueprint change in the cluster config file and apply it.
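As an illustration, a hypothetical update pinning the cluster to a specific blueprint version would change only these fields in the Rafay spec (v2 is an assumed version name):

spec:
  blueprint: default-aks
  blueprintversion: v2   # hypothetical version; previously "latest"

Then apply the updated config file: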

./rctl apply -f demo-akscluster.yaml

Expected output (with a task id):

{
  "taskset_id": "g29wek0",
  "operations": [
    {
      "operation": "BlueprintUpdation",
      "resource_name": "demo-akscluster",
      "status": "PROVISION_TASK_STATUS_PENDING"
    }
  ],
  "comments": "The status of the operations can be fetched using taskset_id",
  "status": "PROVISION_TASKSET_STATUS_PENDING"
}

To check the status of the Blueprint apply operation, run the command below with the generated task ID:

./rctl status apply g29wek0

Expected output

{
  "taskset_id": "g29wek0",
  "operations": [
    {
      "operation": "BlueprintUpdation",
      "resource_name": "demo-akscluster",
      "status": "PROVISION_TASK_STATUS_SUCCESS"
    }
  ],
  "comments": "Configuration is applied to the cluster successfully",
  "status": "PROVISION_TASKSET_STATUS_COMPLETE"

Update Cloud Credential

Make the required cloud credential change in the cluster config file and apply it.
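As an illustration, a hypothetical switch to a different cloud credential would change only this field in the Rafay spec (akscredentials2 is an assumed credential name):

spec:
  cloudprovider: akscredentials2   # hypothetical; previously "akscredentials"

Then apply the updated config file: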

./rctl apply -f demo-akscluster.yaml

Expected output (with a task id):

{
  "taskset_id": "j2q9jm9",
  "operations": [
    {
      "operation": "CloudProviderUpdation",
      "resource_name": "demo-akscluster",
      "status": "PROVISION_TASK_STATUS_PENDING"
    }
  ],
  "comments": "The status of the operations can be fetched using taskset_id",
  "status": "PROVISION_TASKSET_STATUS_PENDING"
}

To check the status of the Cloud Credential apply operation, run the command below with the generated task ID:

./rctl status apply j2q9jm9

Expected output

{
  "taskset_id": "j2q9jm9",
  "operations": [
    {
      "operation": "CloudProviderUpdation",
      "resource_name": "demo-akscluster",
      "status": "PROVISION_TASK_STATUS_SUCCESS"
    }
  ],
  "comments": "Configuration is applied to the cluster successfully",
  "status": "PROVISION_TASKSET_STATUS_COMPLETE"

Cluster Labels

Users can update cluster labels via RCTL using the cluster configuration YAML file below.

apiVersion: rafay.io/v1alpha1
kind: Cluster
metadata:
  labels:
    newrole: cluslab1
    roles: worker1
  name: demo-akscluster
  project: defaultproject
spec:
  blueprint: default-aks
  blueprintversion: latest
  cloudprovider: aks-cloudcred
  clusterConfig:
    apiVersion: rafay.io/v1alpha1
    kind: aksClusterConfig
    metadata:
      name: demo-akscluster
...

Command to apply the labels to the cluster:

./rctl apply -f <cluster-config.yaml>

Example:

./rctl apply -f demo-akscluster.yaml

Expected output (with a task id):

{
  "taskset_id": "lk5dwme",
  "operations": [
    {
      "operation": "ClusterLabelsUpdation",
      "resource_name": "demo-akscluster",
      "status": "PROVISION_TASK_STATUS_PENDING"
    }
  ],
  "comments": "The status of the operations can be fetched using taskset_id",
  "status": "PROVISION_TASKSET_STATUS_PENDING"
}

To check the status of the Cluster Label apply operation, run the command below with the generated task ID:

./rctl status apply lk5dwme

Expected output:

{
  "taskset_id": "lk5dwme",
  "operations": [
    {
      "operation": "ClusterLabelsUpdation",
      "resource_name": "demo-akscluster",
      "status": "PROVISION_TASK_STATUS_SUCCESS"
    }
  ],
  "comments": "Configuration is applied to the cluster successfully",
  "status": "PROVISION_TASKSET_STATUS_COMPLETE"