Provisioning
Cloud Credentials¶
The controller needs to be configured with GKE credentials in order to programmatically create and configure the required GCP infrastructure. These credentials are securely managed as part of a cloud credential in the Controller.
The creation of a cloud credential is a one-time task. It can then be reused to create clusters whenever required. Refer to GKE Credentials for instructions on how to configure this.
Important
To guarantee complete isolation across Projects (e.g. business units, teams, environments), cloud credentials are associated with a specific project. They can be shared with other projects if necessary.
Prerequisites¶
Users must have the following set up in the GCP Console
-
Create a Service Account with the following roles:
- Compute Admin
- Kubernetes Engine Admin
- Service Account User
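The service account setup above can be sketched with the gcloud CLI; the project ID and service account name below are placeholders, and the JSON key is what gets registered as a cloud credential in the Controller:

```shell
# Hypothetical project ID and service account name -- replace with your own.
PROJECT_ID="my-gcp-project"
SA_NAME="gke-provisioner"

# Create the service account.
gcloud iam service-accounts create "$SA_NAME" \
    --project "$PROJECT_ID" \
    --display-name "GKE provisioning service account"

# Grant the three required roles.
for ROLE in roles/compute.admin roles/container.admin roles/iam.serviceAccountUser; do
  gcloud projects add-iam-policy-binding "$PROJECT_ID" \
      --member "serviceAccount:${SA_NAME}@${PROJECT_ID}.iam.gserviceaccount.com" \
      --role "$ROLE"
done

# Download a JSON key to register as a cloud credential in the Controller.
gcloud iam service-accounts keys create key.json \
    --iam-account "${SA_NAME}@${PROJECT_ID}.iam.gserviceaccount.com"
```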
-
APIs on Google Cloud Platform
Enable the following APIs on your Google Cloud Platform project to provision a GKE cluster
- Cloud Resource Manager API: Used to validate the user's GCP project
- Compute Engine API: Used to validate and access resources such as zones and regions on GCP that are used by the GKE cluster
- Kubernetes Engine API
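The three APIs above can be enabled in one command; this assumes gcloud is already authenticated, and the project ID is a placeholder:

```shell
PROJECT_ID="my-gcp-project"   # hypothetical -- replace with your own

gcloud services enable \
    cloudresourcemanager.googleapis.com \
    compute.googleapis.com \
    container.googleapis.com \
    --project "$PROJECT_ID"
```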
-
Cluster in a VPC network
- Ensure the firewall allows HTTP and HTTPS traffic
- Create the subnet that you want to use before you create the cluster
- A GCP VPC is global, but the subnet must be in the same region as your target cluster
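As a sketch of the network prerequisites above, the following creates a custom-mode VPC, a subnet in the cluster's region, and a firewall rule allowing HTTP/HTTPS; all names, the region, and the CIDR are illustrative placeholders:

```shell
PROJECT_ID="my-gcp-project"   # hypothetical
REGION="us-west1"             # must match the target cluster's region

# Custom-mode VPC and a subnet created ahead of the cluster.
gcloud compute networks create gke-vpc \
    --project "$PROJECT_ID" --subnet-mode custom
gcloud compute networks subnets create gke-subnet \
    --project "$PROJECT_ID" --network gke-vpc \
    --region "$REGION" --range 10.10.0.0/20

# Allow HTTP and HTTPS traffic.
gcloud compute firewall-rules create allow-http-https \
    --project "$PROJECT_ID" --network gke-vpc \
    --allow tcp:80,tcp:443
```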
High Level Steps¶
The image below describes the high-level steps to provision and manage GKE clusters using the controller.
sequenceDiagram
autonumber
participant user as User/Pipeline
participant rafay as Controller
participant boot as Bootstrap Node
participant gke as GKE Cluster
user->>rafay: Provision GKE Cluster (UI, CLI)
note over boot, gke: GCP Project
rect rgb(191, 223, 255)
note right of rafay: For Every New GKE Cluster
rafay->>boot: Provision Bootstrap VM in GCP Project
rafay->>boot: Apply GKE cluster spec
boot->>gke: Provision GKE Cluster
boot->>gke: Pivot CAPI mgmt resources
boot->>gke: Apply Cluster Blueprint
gke->>rafay: Establish Control Channel with Controller
rafay->>boot: Deprovision Bootstrap Node
gke->>rafay: GKE Cluster Ready
end
rafay->>user: GKE Cluster Provisioned
Self Service UI¶
The controller provides users with a "UI Wizard" type experience to configure, provision and manage GKE clusters. The wizard prompts the user to provide critical cluster configuration details organized into logical sections:
- General
- Network Settings
- NodePools
- Security
- Feature
- Advanced
Create Cluster¶
- Click Clusters on the left panel to open the Clusters page
- Click New Cluster
- Select Create a New Cluster and click Continue
- Select the Environment Public Cloud
- Select the Cloud Provider GCP and Kubernetes Distribution GCP GKE
- Provide a cluster name and click Continue
Constraints
- a. The cluster name must not exceed 40 characters
- b. The name must always begin with a letter; it cannot start with a number or any other character
- c. The cluster name must not end with a hyphen ("-")
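The three naming constraints above can be checked locally before submitting the wizard; this helper function is a sketch, not part of the product:

```shell
# Checks a candidate cluster name against the three constraints:
# at most 40 characters, begins with a letter, does not end with a hyphen.
valid_cluster_name() {
  name="$1"
  # Constraint a: at most 40 characters.
  [ "${#name}" -le 40 ] || return 1
  # Constraint b: must begin with a letter.
  case "$name" in
    [a-zA-Z]*) ;;
    *) return 1 ;;
  esac
  # Constraint c: must not end with a hyphen.
  case "$name" in
    *-) return 1 ;;
  esac
  return 0
}
```

For example, `valid_cluster_name demo-gke-cluster` succeeds, while `valid_cluster_name 9cluster` and `valid_cluster_name demo-` fail.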
General (Mandatory)¶
The General section is mandatory for cluster creation
- Select the Cloud Credential created with GCP credentials from the drop-down
- Enter the required GCP Project ID
- Select a Location Type, either Zonal or Regional.
- On selecting Zonal, select a zone
- On selecting Regional, select a Region and Zone
- Select a K8s version
- Select a Blueprint Type and version.
Important
Use the GCP Project ID and not the Project Name.
Network (Mandatory)¶
This section allows users to customize the network settings
- Provide a Network Name and Node Subnet
Note: Use the names of the network and node subnet. Do not use CIDRs.
- Select a Cluster Privacy option, Private or Public, and provide the relevant details
Important
On selecting the Private cluster privacy option, at least one (1) Cloud NAT must exist in the project where the GKE cluster is being created
-
Optionally, enter the Pod Address Range and Service Address Range.
- If no value is provided for Pod Address Range, each node in GKE receives a /24 alias IP range of 256 addresses for hosting the Pods that run on it.
- If no value is provided for Service Address Range, service (cluster IP) addresses are taken from the subnet's secondary IP address range for Services. This range must be large enough to provide an address for all the Kubernetes Services you host in your cluster
-
Enter the count for Max Pods Per Node
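The /24-per-node default above follows from how GKE sizes per-node alias ranges: the range must hold at least twice the max Pods per node, rounded up to a power of two, so the default of 110 Pods yields 256 addresses. A small sketch of that sizing rule:

```shell
# Smallest per-node alias IP block that holds at least twice the
# max-pods-per-node count, rounded up to a power of two.
pod_range_size() {
  max_pods="$1"
  needed=$(( max_pods * 2 ))
  size=1
  while [ "$size" -lt "$needed" ]; do
    size=$(( size * 2 ))
  done
  echo "$size"
}
```

For the default of 110 Pods per node, `pod_range_size 110` yields 256, i.e. a /24; lowering Max Pods Per Node to 32 would need only 64 addresses per node.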
NodePools¶
By default, a new cluster is created with at least one node pool
- To add more node pools, click Add Node Pool
- Provide the required details and click Save
Security (Optional)¶
This section allows users to customize the security settings
- Enable Workload Identity to connect securely to Google APIs from Kubernetes Engine workloads
- Enable Google Groups for RBAC to grant roles to all members of a Google Workspace group. On enabling this option, enter the required group name
- Enable Legacy Authorization to support in-cluster permissions for existing clusters or workflows. Note that this prevents full RBAC support
- Provide Client Certificate to authenticate to the cluster endpoint
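With Google Groups for RBAC enabled, a Workspace group can be bound to a Kubernetes role like any other subject. A minimal sketch, assuming a hypothetical group name and a cluster kubectl is already pointed at:

```shell
# Hypothetical group -- must be a member of the gke-security-groups
# Workspace group configured for the cluster.
cat <<'EOF' | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: gke-viewers-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: view
subjects:
- kind: Group
  name: gke-viewers@example.com
  apiGroup: rbac.authorization.k8s.io
EOF
```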
Feature Setting (Optional)¶
Enable the required features
Advanced Settings (Optional)¶
Proxy Configuration
Optionally, users can provide Proxy Configuration details.
- Select Enable Proxy if the cluster is behind a forward proxy.
- Configure the HTTP proxy with the proxy information (ex: http://proxy.example.com:8080)
- Configure the HTTPS proxy with the proxy information (ex: http://proxy.example.com:8080)
- Configure No Proxy with a comma-separated list of hosts that need connectivity without the proxy (ex: 10.108.10.0/24)
- Configure the Root CA certificate of the proxy if the proxy is terminating non-mTLS traffic
- Enable TLS Termination Proxy if the proxy is terminating non-mTLS traffic and it is not possible to provide the proxy's Root CA certificate.
Once all the required configuration details are provided, perform the below steps
- Click Save Changes and proceed to cluster provisioning
- The cluster is now ready to be provisioned. Click Provision
Provision Progress¶
Once the user clicks Provision, the system works through the list of conditions required for successful provisioning, as shown below
Successful Provisioning¶
Once all the steps are complete, the cluster is provisioned as per the specified configuration. Users can then view and manage the GKE cluster in the specified Project in the Controller. On successful provisioning, the user can view the cluster dashboards
Download Config¶
Administrators can download the GKE Cluster's configuration either from the console or using the RCTL CLI
Failed Provisioning¶
Cluster provisioning can fail if the user misconfigured the cluster (e.g. wrong cloud credentials) or encountered soft limits on resources in their GCP account. When this occurs, the user is presented with an intuitive error message. Users can then edit the configuration and retry provisioning
Refer to Troubleshooting to learn about potential failure scenarios.
Important
The ability to pause/resume provisioning is only available in Preview
Pause/Resume Provisioning¶
If an error occurs during cluster provisioning or provisioning fails due to a configuration issue, users can pause provisioning, rectify the issue, and resume provisioning
- On receiving any error as shown below, click Pause Provision
- Once the configuration details are rectified, click Resume Provision as shown below
Note: This process cleans up resources that are no longer required