
Clusters

Clusters and workloads are deployed in the Customer's Org in the context of a Project. Users can use RCTL to fully automate the lifecycle management of clusters. Specifically, the operations listed in the table below can be fully automated using RCTL.

+----------+--------+-----+--------+--------+
| Resource | Create | Get | Update | Delete |
+----------+--------+-----+--------+--------+
| Cluster  | YES    | YES | YES    | YES    |
+----------+--------+-----+--------+--------+

Dry Run

To preview the operations that cluster provisioning will perform, without actually applying them, use the dry run command. Dry run is supported for the EKS, AKS, GKE, and MKS cluster types.

./rctl apply -f <cluster_filename.yaml> --dry-run

Example

./rctl apply -f cluster_demo.yaml --dry-run
{
  "operations": [
    {
      "operationName": "ClusterCreation",
      "resourceName": "aksresource-2"
    },
    {
      "operationName": "NodegroupCreation",
      "resourceName": "primary"
    },
    {
      "operationName": "BlueprintSync",
      "resourceName": "aksresource-2"
    }
  ]
}
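
In automation pipelines, the dry-run JSON can be inspected programmatically before applying. A minimal Python sketch, assuming the output shape shown in the example above (`summarize_operations` is an illustrative helper, not part of RCTL):

```python
import json

# Sample dry-run output, shaped like the example above.
dry_run_output = """
{
  "operations": [
    {"operationName": "ClusterCreation", "resourceName": "aksresource-2"},
    {"operationName": "NodegroupCreation", "resourceName": "primary"},
    {"operationName": "BlueprintSync", "resourceName": "aksresource-2"}
  ]
}
"""

def summarize_operations(raw: str) -> dict:
    """Group planned operations by name so a pipeline can gate on them."""
    summary = {}
    for op in json.loads(raw)["operations"]:
        summary.setdefault(op["operationName"], []).append(op["resourceName"])
    return summary

summary = summarize_operations(dry_run_output)
print(summary)
# Example gate: surface it loudly if the plan would create a new cluster.
if "ClusterCreation" in summary:
    print("plan creates cluster(s):", summary["ClusterCreation"])
```

A CI job could feed the captured `rctl apply --dry-run` output into a check like this and fail when unexpected operations appear.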

Create/Update Cluster

Declarative

Use the command below to create or update a cluster in your project. You can also import a cluster into the project from a version-controlled cluster spec stored in a Git repository, which lets users build automation for reproducible infrastructure.

./rctl apply -f clusterspec.yaml

Below are illustrative examples of cluster specification YAML files that you can use to import a cluster.

EKS

apiVersion: rafay.io/v1alpha1
kind: Cluster
metadata:
  name: imported-eks-cluster-apply-01
  project: default
spec:
  blueprint: minimal
  blueprintversion: Latest
  clusterConfig:
    kubernetesProvider: EKS
    provisionEnvironment: CLOUD
    clusterlocation: "aws/us-west-2"
  type: imported

AKS

apiVersion: rafay.io/v1alpha1
kind: Cluster
metadata:
  name: imported-aks-cluster-apply-01
  project: default
spec:
  blueprint: minimal
  blueprintversion: Latest
  clusterConfig:
    kubernetesProvider: AKS
    provisionEnvironment: CLOUD
    clusterlocation: azure/centralindia
  type: imported

Other

apiVersion: rafay.io/v1alpha1
kind: Cluster
metadata:
  name: <CLUSTER>
  project: <PROJECT>
spec:
  type: imported
  blueprint: minimal
  blueprintversion: Latest
  location: sanjose-us

Note: Explicitly specify the location when using the CLI method, as there is no default location available via the CLI or Terraform.
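
Because of this, a pre-apply check on the spec can catch a missing location before provisioning fails. A hedged Python sketch (`validate_imported_spec` is a hypothetical helper, not part of RCTL; the field names follow the spec examples above):

```python
def validate_imported_spec(spec: dict) -> list:
    """Return a list of problems found in an imported-cluster spec dict.

    Checks only what this page calls out: imported clusters provisioned
    via the CLI must set a location explicitly.
    """
    problems = []
    s = spec.get("spec", {})
    if s.get("type") == "imported":
        # Location may appear as spec.location or spec.clusterConfig.clusterlocation,
        # matching the two spec shapes shown above.
        has_location = bool(
            s.get("location") or s.get("clusterConfig", {}).get("clusterlocation")
        )
        if not has_location:
            problems.append("imported cluster spec is missing a location")
    return problems

# Spec mirroring the "Other" example, with the location removed:
bad = {
    "kind": "Cluster",
    "metadata": {"name": "demo", "project": "default"},
    "spec": {"type": "imported", "blueprint": "minimal"},
}
print(validate_imported_spec(bad))  # flags the missing location
```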

After creating a cluster with one of the above cluster specifications for imported clusters, retrieve the "bootstrap" YAML file using the command below.

./rctl get clusterbootstrap <imported cluster name> >> cluster_bootstrap.yaml

Use "kubectl" to apply the bootstrap YAML file on your existing cluster to import it into the controller.

Note: Every cluster requires a different bootstrap YAML file. Reusing the same file across clusters is not possible.

  kubectl apply -f cluster_bootstrap.yaml

Here is another example of a unified cluster YAML specification that you can use to create an EKS cluster through the controller with the RCTL apply command.

./rctl apply -f clusterspec.yaml

A sample clusterspec.yaml is shown below.

apiVersion: infra.k8smgmt.io/v3
kind: Cluster
metadata:
  name: demo-cluster
  project: default
spec:
  blueprintConfig:
    name: demo-bp
    version: v1
  cloudCredentials: demo_aws
  config:
    managedNodeGroups:
    - amiFamily: AmazonLinux2
      desiredCapacity: 1
      iam:
        withAddonPolicies:
          autoScaler: true
      instanceType: t3.xlarge
      maxSize: 2
      minSize: 0
      name: managed-ng-1
      version: "1.22"
      volumeSize: 80
      volumeType: gp3
    metadata:
      name: demo-cluster
      region: us-west-2
      version: "1.22"
    network:
      cni:
        name: aws-cni
        params:
          customCniCrdSpec:
            us-west-2a:
            - securityGroups:
              - sg-09706d2348936a2b1
              subnet: subnet-0f854d90d85509df9
            us-west-2b:
            - securityGroups:
              - sg-09706d2348936a2b1
              subnet: subnet-0301d84c8b9f82fd1
    vpc:
      clusterEndpoints:
        privateAccess: false
        publicAccess: true
      nat:
        gateway: Single
      subnets:
        private:
          subnet-06e99eb57fcf4f117:
            id: subnet-06e99eb57fcf4f117
          subnet-0509b963a387f7fc7:
            id: subnet-0509b963a387f7fc7
        public:
          subnet-056b49f76124e37ec:
            id: subnet-056b49f76124e37ec
          subnet-0e8e6d17f6cb05b29:
            id: subnet-0e8e6d17f6cb05b29
  proxyConfig: {}
  type: aws-eks
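
One constraint worth checking before applying a spec like this: each managed node group's desiredCapacity must fall within [minSize, maxSize]. A small Python sketch of such a pre-flight check (`check_nodegroup_sizes` is illustrative, not part of RCTL):

```python
def check_nodegroup_sizes(cluster_spec: dict) -> list:
    """Flag managed node groups whose desiredCapacity is outside [minSize, maxSize]."""
    errors = []
    groups = cluster_spec.get("spec", {}).get("config", {}).get("managedNodeGroups", [])
    for ng in groups:
        lo = ng.get("minSize", 0)
        hi = ng.get("maxSize", 0)
        want = ng.get("desiredCapacity", 0)
        if not (lo <= want <= hi):
            errors.append(f"{ng.get('name')}: desiredCapacity {want} not in [{lo}, {hi}]")
    return errors

# Node group sizing mirroring the sample spec above:
spec = {"spec": {"config": {"managedNodeGroups": [
    {"name": "managed-ng-1", "minSize": 0, "maxSize": 2, "desiredCapacity": 1},
]}}}
print(check_nodegroup_sizes(spec))  # -> []
```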

List Clusters

Use this command to retrieve the list of clusters available in the configured project. In the example shown below, there are four clusters in this project.

./rctl get cluster
+------------------------+------------+-----------+---------------------------+
| NAME                   | TYPE       | OWNERSHIP | PROVISION STATUS          |
+------------------------+------------+-----------+---------------------------+
| demo_1                 | azure-aks  | self      | INFRA_CREATION_INPROGRESS |
+------------------------+------------+-----------+---------------------------+
| demo_2                 | azure-aks  | self      | INFRA_CREATION_INPROGRESS |
+------------------------+------------+-----------+---------------------------+
| demo_3                 | imported   | self      |                           |
+------------------------+------------+-----------+---------------------------+
| demo_4                 | amazon-eks | self      | INFRA_CREATION_INPROGRESS |
+------------------------+------------+-----------+---------------------------+

Use the command below to retrieve the v3 cluster details for the clusters in the configured project. In the example shown below, there are four clusters in this project.

./rctl get cluster --v3
+------------------------+-------------------------------+-----------+----------+-----------+---------------------------+
| NAME                   | CREATED AT                    | OWNERSHIP | TYPE     | BLUEPRINT | PROVISION STATUS          |
+------------------------+-------------------------------+-----------+----------+-----------+---------------------------+
| demo_1                 | 2023-06-05 10:54:08 +0000 UTC | self      | aks      | minimal   | INFRA_CREATION_INPROGRESS |
+------------------------+-------------------------------+-----------+----------+-----------+---------------------------+
| demo_2                 | 2023-06-05 10:57:59 +0000 UTC | self      | aks      | minimal   | INFRA_CREATION_INPROGRESS |
+------------------------+-------------------------------+-----------+----------+-----------+---------------------------+
| demo_3                 | 2023-06-02 11:10:25 +0000 UTC | self      | imported |           |                           |
+------------------------+-------------------------------+-----------+----------+-----------+---------------------------+
| demo_4                 | 2023-06-02 10:40:52 +0000 UTC | self      | aks      | minimal   |                           |
+------------------------+-------------------------------+-----------+----------+-----------+---------------------------+

Get Cluster Info

Use this command to retrieve the details of a specific cluster in the configured project.

./rctl get cluster <cluster-name>

Below is an illustrative example showing the "demo-spot-eks" cluster information for the current project:

./rctl get cluster demo-spot-eks

+---------------+-----------------------------+-----------------------------+---------+--------+---------------+
|     NAME      |         CREATED AT          |         MODIFIED AT         |  TYPE   | STATUS |   BLUEPRINT   |
+---------------+-----------------------------+-----------------------------+---------+--------+---------------+
| demo-spot-eks | 2020-08-11T16:54:25.750659Z | 2020-09-23T04:05:00.720032Z | aws-eks | READY  | eks-blueprint |
+---------------+-----------------------------+-----------------------------+---------+--------+---------------+
Alternatively, use the commands below to get more detailed cluster information in JSON or YAML format.

./rctl get cluster <cluster-name> -o json
./rctl get cluster <cluster-name> -o yaml
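
The JSON output is easier to script against than the table. A sketch, assuming the payload carries the same fields shown in the table above (the exact field names may vary between RCTL versions):

```python
import json

# Hypothetical payload shaped after the table columns above; the real
# structure returned by `rctl get cluster <name> -o json` may differ.
raw = '{"name": "demo-spot-eks", "type": "aws-eks", "status": "READY", "blueprint": "eks-blueprint"}'

cluster = json.loads(raw)
# Gate a downstream step on the cluster having finished provisioning.
if cluster["status"] != "READY":
    raise SystemExit(f"cluster {cluster['name']} is not ready: {cluster['status']}")
print(f"{cluster['name']} is READY on blueprint {cluster['blueprint']}")
```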


Delete Cluster

Authorized users can automate the deletion of an existing cluster in the configured project using RCTL. This command checks for any cloud resources associated with the cluster, deletes those resources, and then deletes the cluster object.

./rctl delete cluster <cluster-name>

Force Delete Cluster

Use the force flag below only to remove the cluster object from the controller database.

./rctl delete cluster <cluster-name> --force

Important

Use the force delete option only when no resources are associated with the cluster in the cloud but the controller entry still exists.


Download Cluster Spec

Users can download the declarative specification (config) for their cluster from the controller using the command below.

./rctl get cluster config <cluster name> -o yaml

Download Kubeconfig

Users can use RCTL to download the Kubeconfig for clusters in the configured project. All access will be performed via the Controller's Zero Trust Kubectl access proxy.

./rctl download kubeconfig [flags]

By default, a unified Kubeconfig for all clusters in the project is downloaded. If required, users can download the Kubeconfig for a selected cluster.

./rctl download kubeconfig --cluster <cluster-name>

Refer here for the deprecated RCTL commands.