CLI
Google Kubernetes Engine (GKE) is a fully managed Kubernetes service provided by Google Cloud. The GKE integration lets users provision GKE clusters in any region and Google Cloud project using the CLI (RCTL).
Create Cluster Via RCTL¶
Step 1: Cloud Credentials¶
Use the command below to create a GCP cloud credential via RCTL:
./rctl create credential gcp <credential-name> <path-to-credentials-JSON-file>
On successful creation, reference this credential in the cluster config file to create a GKE cluster.
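For example, a credential named gke-cred (matching the cloudCredentials field used in the spec below) could be created from a downloaded service account key file; the key file path shown here is purely illustrative:
./rctl create credential gcp gke-cred ./gke-sa-key.json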
Step 2: Create Cluster¶
Users can create the cluster from a version-controlled cluster spec stored in a Git repository, which makes it possible to develop automation for reproducible infrastructure.
./rctl apply -f cluster-spec.yml
An illustrative example of a GKE cluster spec YAML file with a regional location type and two node pools is shown below:
apiVersion: infra.k8smgmt.io/v2
kind: Cluster
metadata:
  name: gke-cluster
  project: default-project
spec:
  blueprint:
    name: default
    version: latest
  cloudCredentials: gke-cred
  config:
    controlPlaneVersion: "1.22"
    location:
      region:
        region: us-east1
        zone: us-east1-b
      type: regional
    name: gke-cluster
    network:
      enableVPCNativeTraffic: true
      maxPodsPerNode: 75
      name: default
      networkAccess:
        privacy: public
      nodeSubnetName: default
    nodePools:
    - machineConfig:
        bootDiskSize: 100
        bootDiskType: pd-standard
        imageType: COS_CONTAINERD
        machineType: e2-medium
      name: default-nodepool
      nodeMetadata:
        gceInstanceMetadata:
        - key: org-team
          value: qe-cloud
        kubernetesLabels:
        - key: nodepool-type
          value: default-np
      nodeVersion: "1.22"
      size: 2
    - machineConfig:
        bootDiskSize: 60
        bootDiskType: pd-standard
        imageType: COS_CONTAINERD
        machineType: e2-medium
      name: pool2
      nodeMetadata:
        gceInstanceMetadata:
        - key: org-team
          value: qe-cloud
        kubernetesLabels:
        - key: nodepool-type
          value: nodepool2
      nodeVersion: "1.22"
      size: 2
    project: project1
    security:
      enableLegacyAuthorization: true
      enableWorkloadIdentity: true
  type: Gke
On successful provisioning, you can view the cluster details in the console or via RCTL.
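For example, the provisioning status and cluster details can be queried with RCTL (the exact output columns may differ across RCTL versions):
./rctl get cluster gke-cluster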
For more GKE cluster spec examples, refer here
Cluster Sharing¶
For cluster sharing, add a sharing block to the cluster config (Rafay spec), as shown in the config file below:
apiVersion: infra.k8smgmt.io/v2
kind: Cluster
metadata:
  labels:
    rafay.dev/clusterName: demo-gke-cluster
    rafay.dev/clusterType: gke
  name: demo-gke-cluster
  project: defaultproject
spec:
  blueprint:
    name: minimal
    version: latest
  cloudCredentials: demo-cred
  config:
    controlPlaneVersion: "1.24"
    location:
      type: zonal
      zone: us-west1-c
    name: demo-gke-cluster
    network:
      enableVPCNativeTraffic: true
      maxPodsPerNode: 110
      name: default
      networkAccess:
        privacy: public
      nodeSubnetName: default
    nodePools:
    - machineConfig:
        bootDiskSize: 100
        bootDiskType: pd-standard
        imageType: COS_CONTAINERD
        machineType: e2-standard-4
      name: default-nodepool
      nodeMetadata:
        nodeTaints:
        - effect: NoSchedule
          key: k1
      nodeVersion: "1.24"
      size: 3
    - machineConfig:
        bootDiskSize: 100
        bootDiskType: pd-standard
        imageType: COS_CONTAINERD
        machineType: e2-standard-4
      name: pool2
      nodeVersion: "1.24"
      size: 3
    project: dev-382813
  sharing:
    enabled: true
    projects:
    - name: "demoproject1"
    - name: "demoproject2"
  type: Gke
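After adding the sharing block, re-apply the updated spec with the same command used for provisioning:
./rctl apply -f cluster-spec.yml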
You can also use the wildcard operator "*" to share the cluster across all projects:
sharing:
  enabled: true
  projects:
  - name: "*"
Note: When using the wildcard operator, you cannot specify any other project names.
To remove cluster sharing from one or more projects, remove those specific project names from the sharing block and run the apply command, as shown below.
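For example, to stop sharing with demoproject2 while keeping demoproject1, trim the sharing block to the following and re-apply the spec:
sharing:
  enabled: true
  projects:
  - name: "demoproject1"

./rctl apply -f cluster-spec.yml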
Delete Cluster¶
Deleting the cluster also cleans up the corresponding resources in Google Cloud.
./rctl delete cluster <cluster_name>
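For example, to delete the cluster provisioned above and then list the remaining clusters in the project (the list command output may vary by RCTL version):
./rctl delete cluster gke-cluster
./rctl get cluster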