Clusters
This is a legacy version
For the latest Clusters CLI information, see the Clusters CLI topic.
Clusters and workloads are deployed in the Customer's Org in the context of a Project. Users can use RCTL to fully automate the lifecycle management of clusters. Specifically, the operations listed in the table below can be fully automated using RCTL.
Resource | Create | Get | Update | Delete |
---|---|---|---|---|
Cluster | YES | YES | YES | YES |
Create Cluster¶
Declarative¶
You can import a cluster into the Project based on a version-controlled cluster spec stored in a Git repository. This enables users to develop automation for reproducible infrastructure.
./rctl create cluster -f cluster-spec.yml
An illustrative example of the cluster spec YAML file is shown below:
kind: Cluster
metadata:
  # set the name of the cluster
  name: demo-imported-cluster-01
  # specify the project in which to create the cluster
  project: defaultproject
  # cluster labels
  labels:
    env: dev
    type: ml-workloads
spec:
  # type can be "imported"
  type: imported
  # location can be custom or predefined
  location: aws/eu-central-1
  # blueprint is optional; if not specified, the default value is "default"
  blueprint: default
  # blueprintversion is optional; if not specified, the latest version of the blueprint is used
  blueprintversion: v1
Unified/Split YAML¶
Both unified and split YAML specs are supported for creating clusters via RCTL.
- Unified YAML
Below is an example of a unified cluster YAML spec:
kind: Cluster
metadata:
  name: rctl-amd-arm-managed-prod
  project: prod-test
spec:
  blueprint: default
  cloudprovider: aws-secret
  cniprovider: aws-cni
  proxyconfig: {}
  type: eks
---
apiVersion: rafay.io/v1alpha5
kind: ClusterConfig
metadata:
  name: rctl-amd-arm-managed-prod
  region: us-west-2
  version: "1.21"
managedNodeGroups:
- amiFamily: AmazonLinux2
  desiredCapacity: 1
  iam:
    withAddonPolicies:
      autoScaler: true
      imageBuilder: true
  instanceType: t4g.xlarge
  maxSize: 1
  minSize: 1
  name: managed-arm64-rctl-prod
  version: "1.21"
  volumeSize: 80
  volumeType: gp3
- amiFamily: AmazonLinux2
  desiredCapacity: 1
  iam:
    withAddonPolicies:
      autoScaler: true
      imageBuilder: true
  instanceType: t3.xlarge
  maxSize: 1
  minSize: 1
  name: managed-amd64-rctl-prod
  version: "1.21"
  volumeSize: 80
  volumeType: gp3
vpc:
  cidr: 192.168.0.0/16
  clusterEndpoints:
    privateAccess: true
    publicAccess: false
  nat:
    gateway: Single
- Split YAML
Below is an example of a split cluster YAML spec:
kind: Cluster
metadata:
  name: demo-cluster
  project: defaultproject
spec:
  blueprint: default
  cloudprovider: demo-aws
  cniprovider: aws-cni
  proxyconfig: {}
  type: eks
---
apiVersion: rafay.io/v1alpha5
kind: ClusterConfig
metadata:
  name: demo-cluster
  region: us-west-2
  version: "1.22"
managedNodeGroups:
- amiFamily: AmazonLinux2
  desiredCapacity: 3
  iam:
    withAddonPolicies:
      autoScaler: true
  instanceType: t3.xlarge
  labels:
    app: infra
    dedicated: "true"
  maxSize: 3
  minSize: 0
  name: ng-f813b069
  version: "1.22"
  volumeSize: 80
  volumeType: gp3
vpc:
  cidr: 192.168.0.0/16
  clusterEndpoints:
    privateAccess: true
    publicAccess: true
  nat:
    gateway: Single
Imperative¶
Use this command to create a cluster object in the configured project in your Organization. You can optionally also specify the cluster blueprint during this step.
./rctl create cluster imported qa-cluster -l sanjose
./rctl create cluster imported prod-cluster2 -l sanjose -b prodblueprint
List Clusters¶
Use this command to retrieve the list of clusters available in the configured project. In the example shown below, there are four clusters in this project.
./rctl get cluster
+--------------------------------+----------+
| NAME | TYPE |
+--------------------------------+----------+
| rafaypoc-eks-existing-vpc-cicd | aws-eks |
| demo-spot-eks | aws-eks |
| demo-vmware-sjc | manual |
| demo-aks-east | imported |
+--------------------------------+----------+
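Because the output is plain text, it can be combined with standard shell tools. For example, a simple sketch that filters the listing to only the EKS clusters (grep pattern matches the TYPE column shown above):
# list only the clusters of type "aws-eks"
./rctl get cluster | grep aws-eks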
Get Cluster Info¶
Use this command to retrieve details for a specific cluster in the configured project.
./rctl get cluster <cluster-name>
Below is an illustrative example showing information for the "demo-spot-eks" cluster in the current project:
./rctl get cluster demo-spot-eks
+---------------+-----------------------------+-----------------------------+---------+--------+---------------+
| NAME | CREATED AT | MODIFIED AT | TYPE | STATUS | BLUEPRINT |
+---------------+-----------------------------+-----------------------------+---------+--------+---------------+
| demo-spot-eks | 2020-08-11T16:54:25.750659Z | 2020-09-23T04:05:00.720032Z | aws-eks | READY | eks-blueprint |
+---------------+-----------------------------+-----------------------------+---------+--------+---------------+
The cluster details can also be retrieved in JSON or YAML format:
./rctl get cluster <cluster-name> -o json
./rctl get cluster <cluster-name> -o yaml
Delete Cluster¶
Authorized users can automate the deletion of an existing cluster in the configured project using RCTL.
./rctl delete cluster <cluster-name>
Update Cluster Blueprint¶
Use this RCTL command to update the cluster blueprint associated with a given cluster.
./rctl update cluster <cluster-name> -blueprint <blueprint-name>
Download Cluster Spec¶
Users can download the declarative specification (config) for their cluster from the controller using the command below.
./rctl get cluster config <cluster-name> -o yaml
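The downloaded spec can be stored in a Git repository and reused with the declarative create command shown earlier. An illustrative sketch, assuming a cluster named demo-cluster and a file name of your choosing:
# save the cluster config to a version-controlled file
./rctl get cluster config demo-cluster -o yaml > demo-cluster-spec.yml
# the saved spec can later be used to recreate the cluster declaratively
./rctl create cluster -f demo-cluster-spec.yml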
Download Kubeconfig¶
Users can use RCTL to download the Kubeconfig for clusters in the configured project. All access will be performed via the Controller's Zero Trust Kubectl access proxy.
./rctl download kubeconfig [flags]
By default, a unified Kubeconfig for all clusters in the project is downloaded. If required, users can download the Kubeconfig for a selected cluster.
./rctl download kubeconfig --cluster <cluster-name>
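Once downloaded, the Kubeconfig can be used with kubectl like any other. A sketch, assuming a cluster named demo-cluster (the path below is illustrative; check the command output for the actual file location):
# download the kubeconfig for a single cluster
./rctl download kubeconfig --cluster demo-cluster
# point kubectl at the downloaded file
kubectl --kubeconfig <path-to-downloaded-kubeconfig> get nodes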
Wait Flag¶
RCTL provides a --wait flag that lets users block on long-running operations. Instead of writing custom polling logic in an automation pipeline, enabling the --wait flag blocks the command and keeps polling the cluster status until it is ready.
Supported Operations¶
Resource | Create | Upgrade | Delete |
---|---|---|---|
Cluster (AKS, EKS and MKS) | YES | YES | YES |
Nodegroup (AKS and EKS) | YES | YES | YES |
Below is an example using the --wait flag to block until the EKS cluster reaches the ready status:
./rctl create cluster eks eks-cluster demo-credential --region us-west-2 --node-ami-family AmazonLinux2 --wait
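In a pipeline, the --wait flag composes naturally with subsequent steps, since the create command only returns once the cluster is ready. A sketch using only the commands shown above:
# block until the cluster is ready, then verify its status
./rctl create cluster eks eks-cluster demo-credential --region us-west-2 --node-ami-family AmazonLinux2 --wait && \
./rctl get cluster eks-cluster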