
Requirements

Here are the prerequisites for installing the self-hosted controller in a GKE cluster.


Management VM for Installation

Set up a VM in GCP that will be used to perform the administration work needed to install the controller in GKE.

VM Prerequisites

Ubuntu/CentOS VM with a 100 GB disk


System Requirements

Execute the below commands on the node where you run radm commands

sudo tee -a /etc/yum.repos.d/google-cloud-sdk.repo << EOM
[google-cloud-sdk]
name=Google Cloud SDK
baseurl=https://packages.cloud.google.com/yum/repos/cloud-sdk-el8-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=0
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
       https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOM

sudo yum -y install google-cloud-sdk
#Run the below command both as the root user and as a normal user.
gcloud init --console-only
#For versions 379.0.0-1 and above. For older versions, use gcloud init.
#Open the link generated by the above command, authenticate with your email ID, then copy the code shown in the browser and paste it into the terminal.
sudo su
gcloud init --console-only
#For versions 379.0.0-1 and above. For older versions, use gcloud init.
#Open the link generated by the above command, authenticate with your email ID, then copy the code shown in the browser and paste it into the terminal.
sudo yum -y install google-cloud-sdk-app-engine-go
sudo yum -y install kubectl
gcloud services enable file.googleapis.com
gcloud services enable sqladmin.googleapis.com

FS=<nfs-fileserver name>
PROJECT=<project name>
ZONE=us-west2-a
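
If the placeholders above need illustration, the snippet below uses hypothetical values and adds two quick checks; the fileserver and project names are examples only, not values from this guide.

#Hypothetical example values - substitute your own names
FS=controller-nfs
PROJECT=my-gcp-project
ZONE=us-west2-a
#Confirm the CLIs installed above are available
gcloud version
kubectl version --client
#Confirm the Filestore and Cloud SQL Admin APIs are enabled for the project
gcloud services list --enabled --project ${PROJECT} | grep -E "file.googleapis.com|sqladmin.googleapis.com"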

Note: If Helm is not installed, execute the below commands
curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
chmod 700 get_helm.sh
./get_helm.sh
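
Assuming the script above completes without errors, the Helm installation can be verified with:

helm version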

DNS Record Creation

Installation of the self-hosted controller requires wildcard DNS records as described below. In the examples below, replace company.example.com with the desired domain. DNS records for the wildcard FQDN should point to the controller nodes’ IP addresses.

*.company.example.com

If wildcard DNS is not available, the individual records below are needed (a Cloud DNS example follows the list).

  1. api.
  2. console.
  3. fluentd-aggr.
  4. ops-console.
  5. rcr.
  6. regauth.
  7. *.core.
  8. *.core-connector.
  9. *.kubeapi-proxy.
  10. *.user.
  11. *.cdrelay.
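
If the domain is hosted in Google Cloud DNS, the wildcard record can be created as sketched below. This is an illustrative example only; the managed zone name (company-example-zone) is a hypothetical placeholder, and other DNS providers have equivalent steps.

gcloud dns record-sets create "*.company.example.com." --zone=company-example-zone --type=A --ttl=300 --rrdatas=<controller node IP>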

Logo (Optional)

A company logo smaller than 200 KB in PNG format for white labeling and branding purposes.


X.509 Certificates (Optional)

The controller uses TLS for secure communication. As a result, X.509 certificates are required to secure all endpoints. Customers are expected to provide a trusted CA-signed wildcard certificate for the target DNS domain (e.g. *.company.example.com).

For non-production or internal scenarios where signed certificates are not available, the controller can generate self-signed certificates automatically. This is achieved by setting the “generate-self-signed-certs” key to “True” in config.yaml during installation.
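
For reference, the relevant entry in config.yaml would look like the line below; the exact placement within the file depends on the release and is not shown here.

generate-self-signed-certs: "True"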


Email Addresses

The installation also requires the following email addresses.

  • An email address for super user authentication to the controller’s admin
  • An email address for receiving support emails from the controller
  • An email address for receiving alerts and notifications (Optional)

Creation of GKE Cluster

  • Step 1: Log in to Google Cloud, open the project selector, and click on New Project


  • Step 2: Enter the project details and click on Create


  • Step 3: Search for Kubernetes Engine and click on Create button


  • Step 4: On the cluster creation page, configure the cluster as GKE Standard


  • Step 5: Enter the Cluster basic details and click on Node Pool


  • Step 6: Edit the Node pool name if necessary and click on Nodes


  • Step 7: Select the image as Container-Optimized OS with Docker (cos), the machine type as e2-standard-8, and the boot disk size as 500 GB, then click on Security


  • Step 8: Select Allow full access to all Cloud APIs and disable the Enable integrity monitoring option


  • Step 9: Click on Networking. Check the options Enable VPC-native traffic routing and Enable Kubernetes Network Policy


  • Step 10: Click on Security. Check the options Enable Shielded GKE Nodes and Enable Workload Identity


  • Step 11: Click on Features.
  • Uncheck Enable Cloud Logging and Enable Cloud Monitoring option
  • Select the Enable Compute Engine Persistent Disk CSI Driver option

  • Step 12: Click on the Create button to create the GKE cluster (an approximate gcloud equivalent is sketched below)

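For reference, the console selections above correspond approximately to the gcloud command sketched below. This is a hedged sketch rather than the documented installation path; flag names and defaults can differ across gcloud versions, and the cluster name is a placeholder.

#Approximate CLI equivalent of the console steps above
gcloud container clusters create <cluster name> \
  --project ${PROJECT} \
  --zone ${ZONE} \
  --machine-type e2-standard-8 \
  --image-type COS \
  --disk-size 500 \
  --scopes cloud-platform \
  --enable-ip-alias \
  --enable-network-policy \
  --enable-shielded-nodes \
  --workload-pool=${PROJECT}.svc.id.goog \
  --logging=NONE \
  --monitoring=NONE \
  --addons GcePersistentDiskCsiDriver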

Configure gcloud

Execute the below commands to configure gcloud

gcloud config set project ${PROJECT}
gcloud container clusters get-credentials <cluster name> --region <region> --project ${PROJECT}
cp ~/.kube/config gke-config.yaml
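
For example, with a hypothetical cluster named controller-gke in the us-west2 region:

gcloud config set project ${PROJECT}
gcloud container clusters get-credentials controller-gke --region us-west2 --project ${PROJECT}
cp ~/.kube/config gke-config.yaml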

Create Postgres Database

  • Step 1: Search for SQL and click on Create Instance button


  • Step 2: Click on Choose PostgreSQL


  • Step 3: Enter the instance information and update the settings based on your requirements. Selecting ‘Multiple zones’ for HA is recommended


  • Step 4:
  • Click on “SHOW CONFIGURATION OPTIONS”.
  • Enable Private IP under CONNECTIONS and click Set Up Connection
  • Select Use an automatically selected IP range under the Allocate IP range and click Continue


  • Step 5:
  • Click New network from the Public IP section under Authorized networks
  • Enter a name and the public IP of the node where you will run radm commands (the Name field does not need to match the exact node name)
  • Click Done and Create Database Instance (a gcloud equivalent is sketched after these steps)

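For reference, a rough gcloud equivalent of the console steps above is sketched below. The instance name, PostgreSQL version, machine tier, and region are illustrative placeholders; private IP setup via --network may additionally require the VPC peering that the console's Set Up Connection flow performs.

#Approximate CLI equivalent (values are placeholders; adjust to your requirements)
gcloud sql instances create <instance name> --project ${PROJECT} --database-version POSTGRES_13 --tier db-custom-4-16384 --region us-west2 --availability-type REGIONAL --network default --authorized-networks <radm node public IP>/32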

Configure Postgres Database

Execute the below commands on the node where you run radm commands

gcloud filestore instances create ${FS} --project=${PROJECT}   --zone=${ZONE} --tier=STANDARD --file-share=name="volumes",capacity=1TB   --network=name="default"

FSADDR=$(gcloud filestore instances describe ${FS} \
  --project=${PROJECT} \
  --zone=${ZONE} \
  --format="value(networks.ipAddresses[0])")

ACCOUNT=$(gcloud config get-value core/account)

kubectl create clusterrolebinding core-cluster-admin-binding --user ${ACCOUNT} --clusterrole cluster-admin

#Install the nfs-client helm chart
helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/

helm install nfs-cp nfs-subdir-external-provisioner/nfs-subdir-external-provisioner --set nfs.server=${FSADDR} --set nfs.path=/volumes --set storageClass.accessModes=ReadWriteMany -n nfs-client-provisioner --create-namespace --kubeconfig <config file from gke cluster>

Example: helm install nfs-cp nfs-subdir-external-provisioner/nfs-subdir-external-provisioner --set nfs.server=${FSADDR} --set nfs.path=/volumes --set storageClass.accessModes=ReadWriteMany -n nfs-client-provisioner --create-namespace --kubeconfig gke-config.yaml
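
Assuming the chart installs cleanly, the provisioner pod and its storage class (nfs-client by default for this chart) can be verified with:

kubectl get pods -n nfs-client-provisioner --kubeconfig gke-config.yaml
kubectl get storageclass nfs-client --kubeconfig gke-config.yaml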

Configure GKE Firewall

Configure the GKE firewall to allow the Istio admission webhook

  • Execute the below command, replacing <clustername> with the actual name of the cluster you created in GKE.

gcloud compute firewall-rules list --filter="name~gke-<clustername>-[0-9a-z]*-master"
  • The above command returns the name of the firewall rule to update. Execute the below command, replacing <firewall rule name> with that output, to update the firewall rules.
gcloud compute firewall-rules update <firewall rule name> --allow tcp:10250,tcp:443,tcp:15017
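
For example, with a hypothetical cluster named controller-gke whose generated rule name is gke-controller-gke-a1b2c3d4-master:

#Hypothetical names for illustration only
gcloud compute firewall-rules list --filter="name~gke-controller-gke-[0-9a-z]*-master"
gcloud compute firewall-rules update gke-controller-gke-a1b2c3d4-master --allow tcp:10250,tcp:443,tcp:15017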