Requirements
Note
See the Self Hosted Controller v1.24 documentation for the latest process.
The Self Hosted Controller can be installed in a Google Cloud environment. This allows users to host and manage the controller in their own cloud environment.
The prerequisites for the self hosted controller are:
- A machine or tool for creating the GKE cluster. For example, a Google Cloud virtual machine.
- A GKE Kubernetes cluster.
- A database. For example, a Google Cloud SQL for PostgreSQL instance.
- A network attached storage system. For example, a Google Filestore instance.
- A DNS domain for the controller.
Management VM for Installation
Set up a virtual machine in Google Cloud that will be used for administration work to set up the controller in Google GKE.
VM Prerequisites
- Operating System: CentOS 7
- CPU: 4 cores
- RAM: 8 GB
- Storage: 500 GB
VM Setup
Create a Repo file
Copy and paste the following command on the node where you run radm commands. The tee command creates the google-cloud-sdk.repo file and displays its contents.
- In the Google Cloud console, search for and click on VM Instances.
- Open the SSH window for the virtual machine.
- Copy and paste the following command into the terminal.
- Press Enter.
sudo tee -a /etc/yum.repos.d/google-cloud-sdk.repo << EOM
[google-cloud-sdk]
name=Google Cloud SDK
baseurl=https://packages.cloud.google.com/yum/repos/cloud-sdk-el8-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=0
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOM
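Optionally, confirm that the repo was registered before installing (a quick check, not part of the original procedure):
sudo yum repolist
The google-cloud-sdk repo should appear in the output.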
Install Google Cloud SDK
Run the following command to install the Google Cloud SDK.
Note
The installation may take some time.
sudo yum -y install google-cloud-sdk
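To verify the installation, check the installed version (an optional check):
gcloud version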
Initialize GCloud
Run the following command as a normal user. The --console-only flag prevents the command from launching a web browser on the VM. Instead, a URL is created that you use to authorize initializing GCloud.
- Run the init command.
gcloud init --console-only
Note
The above command is for gcloud versions 379.0.0-1 and above. For earlier versions, use gcloud init.
- Select Log in with a new account. It should be option 2.
- When asked if you want to continue, type y and press Enter.
- Copy the URL and paste it into a web browser.
- You will be asked to log in to a Google account. Use the Google account used for Google Cloud Platform.
- You will be asked to give Google Cloud SDK access to your Google account. Accept the conditions.
- Copy the Authorization Code, paste it into the terminal for the VM, then press Enter.
- Select the cloud project to use. This is the project with the Kubernetes cluster. Type in the project number, press Enter, then confirm the action.
- Optionally, select which Google Compute Engine zone to use. This is the zone with the Kubernetes cluster. Type in the zone number, then press Enter.
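To confirm the account, project, and default zone that were configured, an optional check:
gcloud config list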
Initialize GCloud as Root
Run the following commands as the root user.
- Switch to the root user.
sudo su
- Run the init command in the VM SSH window.
gcloud init --console-only
Note
The above command is for gcloud versions 379.0.0-1 and above. For earlier versions, use gcloud init.
- Select Log in with a new account. It should be option 2.
- When asked if you want to continue, type y and press Enter.
- Copy the URL and paste it into a web browser.
- Log in to the Google account used for Google Cloud Platform.
- Accept the conditions.
- Copy the Authorization Code and paste it into the terminal for the VM.
- Select the cloud project to use. This is the project with the Kubernetes cluster. Type in the project number, press Enter, then confirm the action.
- Optionally, select which Google Compute Engine zone to use. Type in the zone number, then press Enter.
- Run the following command to exit the root user.
exit
Install Services
Run the following commands to install the required packages and enable the GCloud services.
sudo yum -y install google-cloud-sdk-app-engine-go
sudo yum -y install kubectl
gcloud services enable file.googleapis.com
gcloud services enable sqladmin.googleapis.com
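To confirm that the Filestore and Cloud SQL Admin APIs were enabled, an optional check:
gcloud services list --enabled | grep -E 'file|sqladmin'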
Add NFS Information
Run the following commands to set the file server information as shell variables on the VM. This file server will be created during the installation process.
FS=<nfs-fileserver name>
PROJECT=<project name>
ZONE=<zone>
Example:
FS=projectnfs
PROJECT=controller-358320
ZONE=us-central1-c
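These are plain shell variables, so they are lost when the SSH session ends. If you will work across multiple sessions, one option is to persist them in your shell profile. A sketch using the example values above (substitute your own):
tee -a ~/.bashrc << EOM
export FS=projectnfs
export PROJECT=controller-358320
export ZONE=us-central1-c
EOM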
Install Helm
If helm is not installed, execute the following commands.
Note
OpenSSL is required to run ./get_helm.sh. Run sudo yum install openssl to install OpenSSL.
curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
chmod 700 get_helm.sh
./get_helm.sh
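To verify the Helm installation, an optional check:
helm version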
DNS Record Creation
Installation of the self hosted controller requires the DNS records listed below. In these examples, replace company.example.com with the desired domain. The DNS records should point to the controller nodes' IP addresses.
The following is an example of a wildcard record:
*.company.example.com
The following individual records should be created. For Google Cloud DNS, add these as Record Sets.
api.<company.example.com>
console.<company.example.com>
fluentd-aggr.<company.example.com>
ops-console.<company.example.com>
rcr.<company.example.com>
peering.<company.example.com>
regauth.<company.example.com>
*.core.<company.example.com>
*.core-connector.<company.example.com>
*.kubeapi-proxy.<company.example.com>
*.user.<company.example.com>
*.cdrelay.<company.example.com>
ui.<company.example.com>
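If the domain is managed in Google Cloud DNS, the record sets can also be created from the command line instead of the console. A sketch for one record, assuming a hypothetical managed zone named controller-zone and a placeholder controller IP of 203.0.113.10:
gcloud dns record-sets create api.company.example.com. --zone=controller-zone --type=A --ttl=300 --rrdatas=203.0.113.10
Repeat for each record listed above, quoting wildcard names such as "*.core.company.example.com.".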
Logo (Optional)
A company logo, in PNG format and less than 600 KB in size, for white labeling and branding purposes.
X.509 Certificates (Optional)
The controller uses TLS for secure communication. As a result, X.509 certificates are required to secure all endpoints. Customers are expected to provide a trusted CA signed wildcard certificate for the target DNS (e.g. *.company.example.com).
For non-production or internal-to-org scenarios where signed certificates are not available, the controller can generate self-signed certificates automatically. This can be achieved by setting the "generate-self-signed-certs" key to "True" in config.yaml during installation.
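For reference, a minimal sketch of the relevant config.yaml fragment; the exact placement of the key within the file depends on your controller version:
# config.yaml (fragment; placement depends on your controller version)
generate-self-signed-certs: "True"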
Email Addresses
The installation also requires the following email addresses.
- An email address for super user authentication to the controller's admin console
- An email address for receiving support emails from the controller
- An email address for receiving alerts and notifications (Optional)
Creation of GKE Cluster
Create New Project
- In Google Cloud, click on the project name in the top menu bar. The Select a project window opens.
- Click New Project.
- Enter the project details.
- Click Create.
- Enable the Compute Engine API.
Create K8s cluster
- Search for and select the Kubernetes Engine.
- Click Create.
- Note: If the Kubernetes Engine API is not enabled, click Enable.
Select GKE Standard
- On the Create Cluster page, for GKE Standard, click Configure.
Enter cluster details
- Enter the basic details for the cluster.
Update Node Pool
- Under Node Pool, click on a node pool name. The Node Pool details display.
- Edit the Node Pool name, if necessary.
Update Node
- Under the node pool name, click Nodes.
- For the Image Type, select Container Optimized OS with containerd (cos_containerd). If necessary, confirm the selection.
- For the Machine Type, select e2-standard-16.
- For Boot Disk, set the size to 500 GB.
Update Security
- Under Node Pools, click on Security.
- Select Allow full access to all Cloud APIs.
- Disable Enable integrity monitoring (uncheck the box).
Update Network
- Under Cluster, click on Networking.
- Select Enable VPC-native traffic routing (uses alias IP) and Enable Kubernetes Network Policy.
- Disable Enable HTTP load balancing.
Update Cluster Security
- Under Cluster, click on Security.
- Check the options Enable Shielded GKE Nodes and Enable Workload Identity.
Update Cluster Features
- Under Cluster, click on Features.
- Deselect Enable Cloud Logging and Enable Cloud Monitoring (uncheck the boxes).
- Select Enable Compute Engine Persistent Disk CSI Driver.
Finalize and Create
- Click the Create button to create the GKE cluster.
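If you prefer to script cluster creation, the following gcloud command approximates the console settings above. This is a sketch only: the cluster name and zone are placeholders, passing --addons overrides the default addon list (which leaves HTTP load balancing disabled), and flag availability varies by gcloud version, so verify it against the console configuration before relying on it.
gcloud container clusters create cluster-1 \
    --zone us-central1-c \
    --machine-type e2-standard-16 \
    --disk-size 500 \
    --image-type COS_CONTAINERD \
    --scopes cloud-platform \
    --enable-ip-alias \
    --enable-network-policy \
    --enable-shielded-nodes \
    --no-shielded-integrity-monitoring \
    --workload-pool=${PROJECT}.svc.id.goog \
    --addons GcePersistentDiskCsiDriver \
    --logging=NONE \
    --monitoring=NONE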
Configure gcloud
Execute the below commands on the Google Cloud VM to configure gcloud. This is the virtual machine you created at the beginning of this exercise.
sudo yum install google-cloud-sdk-gke-gcloud-auth-plugin
gcloud config set project ${PROJECT}
gcloud container clusters get-credentials <cluster name> --region <region> --project ${PROJECT}
cp ~/.kube/config gke-config.yaml
Example:
sudo yum install google-cloud-sdk-gke-gcloud-auth-plugin
gcloud config set project ${PROJECT}
gcloud container clusters get-credentials cluster-1 --region us-central1-c --project ${PROJECT}
cp ~/.kube/config gke-config.yaml
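To confirm that kubectl can reach the new cluster through the copied kubeconfig, an optional check:
kubectl --kubeconfig gke-config.yaml get nodes
The cluster nodes should be listed in the Ready state.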
Create Postgres Database
Create Instance
- In Google Cloud, search for and select SQL.
- Click Create Instance.
Choose PostgreSQL
- Click Choose PostgreSQL.
Enter instance information
- Enter instance information. Update the settings based on your requirements.
- (Optional) Selecting 'Multiple zones' for HA is recommended.
Network Connection
- Click on SHOW CONFIGURATION OPTIONS.
- Enable Private IP under CONNECTIONS.
- Select default under Network.
- Click Set Up Connection. If necessary, enable the Networking API.
- Select Use an automatically selected IP range under the Allocate IP range and click Continue.
- Click Create Connection. This can take a few minutes.
Authorized Network
- Click Add Network under Authorized networks.
- Enter a name and the public IP of the node where you run radm commands (the GCP VM). The Name field does not need to match the exact node name.
- Click Done and Create Instance.
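The console steps above can also be approximated from the command line. A sketch, where the instance name controller-db is hypothetical and the database version, tier, and authorized network are placeholders to adjust for your environment:
gcloud sql instances create controller-db \
    --database-version=POSTGRES_13 \
    --tier=db-custom-4-16384 \
    --region=us-central1 \
    --availability-type=REGIONAL \
    --network=default \
    --authorized-networks=<vm-public-ip>/32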
Create a Filestore Instance
Execute the below command on the node where you run radm commands.
gcloud filestore instances create ${FS} --project=${PROJECT} --zone=${ZONE} --tier=STANDARD --file-share=name="volumes",capacity=1TB --network=name="default"
Show Filestore Instance
Run the following command to show metadata for a Filestore instance.
gcloud filestore instances describe ${FS} --location=${ZONE}
Set Filestore IP Address
Run the following command to set the Filestore instance IP address to FSADDR. FSADDR is used in another command.
FSADDR=$(gcloud filestore instances describe ${FS} \
--project=${PROJECT} \
--zone=${ZONE} \
--format="value(networks.ipAddresses[0])")
To check the FSADDR, run echo $FSADDR. The IP address for the Filestore instance displays.
Configure Get-Value
Run the following command to set the ACCOUNT variable.
ACCOUNT=$(gcloud config get-value core/account)
Cluster Role
Run the following command to bind your account to the cluster-admin role on the cluster.
kubectl create clusterrolebinding core-cluster-admin-binding --user ${ACCOUNT} --clusterrole cluster-admin
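To confirm the binding was created, an optional check:
kubectl get clusterrolebinding core-cluster-admin-binding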
Install the NFS-Client Helm chart
Run the following commands to install the NFS-client helm chart.
-
Add the Helm repo.
helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/
-
Install the Helm chart.
helm install nfs-cp nfs-subdir-external-provisioner/nfs-subdir-external-provisioner --set nfs.server=${FSADDR} --set nfs.path=/volumes --set storageClass.accessModes=ReadWriteMany -n nfs-client-provisioner --create-namespace --kubeconfig <config file from the gke cluster>
Example:
helm install nfs-cp nfs-subdir-external-provisioner/nfs-subdir-external-provisioner --set nfs.server=${FSADDR} --set nfs.path=/volumes --set storageClass.accessModes=ReadWriteMany -n nfs-client-provisioner --create-namespace --kubeconfig gke-config.yaml
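To confirm the provisioner is running and its storage class exists, an optional check:
kubectl --kubeconfig gke-config.yaml get pods -n nfs-client-provisioner
kubectl --kubeconfig gke-config.yaml get storageclass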
Create Service Account for External DNS
Create a service account for the external DNS and bind it to the external DNS service account created by RADM in the Kubernetes cluster.
Note
After running the echo command, note the address. This will be used when installing the controller.
sa_name="test-external-dns-sa"
sa_display_name="test external dns sa"
gcloud iam service-accounts create $sa_name --display-name="$sa_display_name"
sa_email=$(gcloud iam service-accounts list --format='value(email)' --filter="displayName:$sa_display_name")
echo $sa_email
PROJECT_ID=$(gcloud config get-value project)
gcloud projects add-iam-policy-binding $PROJECT_ID --member="serviceAccount:$sa_email" --role=roles/dns.admin
gcloud iam service-accounts add-iam-policy-binding "$sa_email" --member="serviceAccount:$PROJECT_ID.svc.id.goog[kube-system/rafay-external-dns-sa]" --role=roles/iam.workloadIdentityUser
Example:
sa_name=test-sa-account
sa_display_name="test external dns sa"
gcloud iam service-accounts create $sa_name --display-name="$sa_display_name"
sa_email=$(gcloud iam service-accounts list --format='value(email)' --filter="displayName:$sa_display_name")
echo $sa_email
PROJECT_ID=$(gcloud config get-value project)
gcloud projects add-iam-policy-binding $PROJECT_ID --member="serviceAccount:$sa_email" --role=roles/dns.admin
gcloud iam service-accounts add-iam-policy-binding "$sa_email" --member="serviceAccount:$PROJECT_ID.svc.id.goog[kube-system/rafay-external-dns-sa]" --role=roles/iam.workloadIdentityUser
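To confirm the IAM bindings took effect, you can inspect the service account policy (an optional check):
gcloud iam service-accounts get-iam-policy "$sa_email"
The output should include the roles/iam.workloadIdentityUser binding for the kube-system/rafay-external-dns-sa Kubernetes service account.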