CLI
For purposes of automation, it is strongly recommended that users create "version controlled" declarative cluster specification files to provision and manage the lifecycle of Kubernetes clusters.
Important
Users need only a single command (rctl apply -f cluster_spec.yaml) for both provisioning and ongoing lifecycle operations. The controller automatically determines the required changes and seamlessly maps them to the associated actions (e.g. add nodes, remove nodes, upgrade Kubernetes, update blueprint, etc.).
Create Cluster¶
Declarative¶
You can create an Upstream k8s cluster based on a version controlled cluster spec that you can manage in a Git repository. This enables users to develop automation for reproducible infrastructure.
./rctl apply -f <cluster file name.yaml>
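As a sketch of how this supports reproducible automation, the spec files can be kept in a Git repository and applied from a script or CI job. The snippet below is illustrative only: the clusters/ directory name is an assumption, and it relies solely on the rctl apply command shown above.

# Illustrative automation sketch: apply every version-controlled cluster spec.
# Assumes rctl is installed and initialized, and clusters/ is a hypothetical
# directory in the Git repository holding one spec file per cluster.
for spec in clusters/*.yaml; do
  echo "Applying ${spec}"
  ./rctl apply -f "${spec}"
done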
An illustrative example of the cluster spec YAML file for MKS is shown below
apiVersion: infra.k8smgmt.io/v3
kind: Cluster
metadata:
  name: test-mks
  project: defaultproject
  labels:
    check1: value1
    check2: value2
spec:
  blueprint:
    name: default
    version: latest
  config:
    autoApproveNodes: true
    dedicatedMastersEnabled: false
    highAvailability: false
    kubernetesVersion: v1.25.2
    location: sanjose-us
    network:
      cni:
        name: Calico
        version: 3.19.1
      podSubnet: 10.244.0.0/16
      serviceSubnet: 10.96.0.0/12
    nodes:
    - arch: amd64
      hostname: ip-172-31-61-40
      operatingSystem: Ubuntu20.04
      privateip: 172.31.61.40
      roles:
      - Master
      - Worker
      - Storage
      ssh:
        ipAddress: 35.86.208.181
        port: "22"
        privateKeyPath: mks-test.pem
        username: ubuntu
  type: mks
Important
Illustrative examples of "cluster specifications" are available for use in this Public Git Repository.
Once the rctl apply command is executed successfully, the following actions are performed:
- Create cluster on the controller
- Download conjurer & credentials
- SCP conjurer & credentials to node
- Run conjurer
- Configure role, interface
- Start provision
Note
At this time, only SSH key-based authentication is supported to scp into the nodes
Provision Status¶
During cluster provisioning, the status can be monitored as shown below.
./rctl get cluster <cluster-name> -o json | jq .status
The above command will return READY when the provision is complete.
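For unattended automation, the same query can be polled until the cluster reports READY. The loop below is a minimal sketch; the 60-second interval is illustrative and jq is assumed to be installed.

# Illustrative sketch: poll the cluster status until provisioning completes.
while true; do
  status=$(./rctl get cluster <cluster-name> -o json | jq -r .status)
  echo "Cluster status: ${status}"
  if [ "${status}" = "READY" ]; then
    break
  fi
  sleep 60
done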
Add Nodes¶
Users can add nodes to the cluster by updating the config YAML file and applying it with the command below
./rctl apply -f <cluster-filename.yaml>
Example:
Add the node details shown below to the YAML file under the nodes key
- hostname: rctl-mks-1
  operatingSystem: "Ubuntu18.04"
  arch: amd64
  privateIP: 10.109.23.6
  roles:
  - Worker
  - Storage
  labels:
    key1: value1
    key2: value2
  taints:
  - effect: NoSchedule
    key: app
    value: infra
  ssh:
    privateKeyPath: "ssh-key-2020-11-11.key"
    ipAddress: 10.109.23.6
    userName: ubuntu
    port: 22
Use the command below to apply the updated YAML file and add the nodes to the cluster
./rctl apply -f <cluster_filename.yaml>
Once the rctl apply command is executed successfully, the following actions are performed:
- Download conjurer & credentials
- SCP conjurer & credentials to node
- Run conjurer
- Configure role, interface
- Start provision
For more examples of the MKS cluster spec, refer here
Node Provision Status¶
Once the node is added, provisioning is triggered automatically and its status can be monitored as shown below.
rctl get cluster <cluster-name> -o json | jq -r -c '.nodes[] | select(.hostname=="<hostname of the node>") | .status'
The above command will return READY when the provision is complete.
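To watch every node at once rather than a single hostname, the same JSON output can be reshaped with jq. The expression below is a sketch that assumes each entry under .nodes exposes the hostname and status fields used in the command above.

# Illustrative sketch: print "<hostname>: <status>" for every node in the cluster.
./rctl get cluster <cluster-name> -o json | jq -r '.nodes[] | "\(.hostname): \(.status)"'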
K8s Upgrade Strategy¶
To upgrade the nodes, incorporate the strategy parameters into the specification, choosing either a concurrent or a sequential approach. Below is an illustrative configuration file with the corresponding parameters integrated.
apiVersion: infra.k8smgmt.io/v3
kind: Cluster
metadata:
  name: qc-mks-cluster-1
  project: test-project
spec:
  blueprint:
    name: default
  config:
    autoApproveNodes: true
    kubernetesVersion: v1.26.5
    kubernetesUpgrade:
      strategy: concurrent
      params:
        workerConcurrency: "80%"
    network:
      cni:
        name: Calico
        version: 3.24.5
      ipv6:
        podSubnet: 2001:db8:42:0::/56
        serviceSubnet: 2001:db8:42:1::/112
      podSubnet: 10.244.0.0/16
      serviceSubnet: 10.96.0.0/12
    nodes:
    - arch: amd64
      hostname: mks-node-1
      operatingSystem: Ubuntu20.04
      privateip: 10.0.0.81
      roles:
      - Worker
      - Master
      ssh: {}
    - arch: amd64
      hostname: mks-node-2
      operatingSystem: Ubuntu20.04
      privateip: 10.0.0.155
      roles:
      - Worker
      ssh: {}
    - arch: amd64
      hostname: mks-node-3
      operatingSystem: Ubuntu20.04
      privateip: 10.0.0.169
      roles:
      - Worker
      ssh: {}
    - arch: amd64
      hostname: mks-node-4
      operatingSystem: Ubuntu20.04
      privateip: 10.0.0.196
      roles:
      - Worker
      ssh: {}
    - arch: amd64
      hostname: mks-node-5
      operatingSystem: Ubuntu20.04
      privateip: 10.0.0.115
      roles:
      - Worker
      ssh: {}
    - arch: amd64
      hostname: mks-node-6
      operatingSystem: Ubuntu20.04
      privateip: 10.0.0.159
      roles:
      - Worker
      ssh: {}
  proxy: {}
  type: mks
For the Concurrent strategy, assign a value to workerConcurrency; for the Sequential strategy, workerConcurrency is not required. Refer to the K8s Upgrade page for more information.
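For reference, a minimal sketch of the corresponding block for the Sequential strategy is shown below; the lowercase value is an assumption that mirrors the concurrent example above, and workerConcurrency is omitted per the note above.

kubernetesUpgrade:
  # Sequential upgrade: nodes are upgraded one after another, so no
  # workerConcurrency value is set (value assumed lowercase, matching
  # the concurrent example above).
  strategy: sequential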
Delete Cluster¶
Users can delete one or more clusters with a single command
./rctl delete cluster <mkscluster-name>
(or)
./rctl delete cluster <mkscluster1-name> <mkscluster2-name>