Install a single-node Self-Hosted Controller in Bare Metal/VM server environments.
- Create an instance/node with the specifications described in Infrastructure Requirements
- Create wildcard DNS entries for the Controller domains mentioned in DNS Record Creation above, and point their A records to the node/LB IP addresses
- (Optional) Generate a wildcard certificate for the FQDN which is signed by a CA. Alternatively, configure the controller to use self-signed certificates
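For illustration, assuming a wildcard domain of *.example.com and a controller node IP of 10.0.0.10 (both hypothetical placeholders; substitute your own domain and IP), the wildcard A record in a zone file might look like:

```
*.example.com.   300   IN   A   10.0.0.10
```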
Watch a video showcasing installation of the self-hosted controller in an air-gapped environment.
Click here to download the controller installation package to the Bare Metal/VM server.
From your home directory, untar the package using the command below.
tar -xf rafay-controller-v*.tar.gz
For example:
tar -xf rafay-controller-v1.6-21.tar.gz
- Move the radm binary to /usr/bin.
sudo mv ./radm /usr/bin/
Customize the config.yaml¶
- Copy the config.yaml file.
cp -rp config.yaml-tmpl config.yaml
- Edit the config.yaml file.
When modifying the config.yaml file, it is recommended to update the following settings:
- metadata.name: Name of the controller
- spec.networking.interface: Interface for controller traffic (optional)
- spec.deployment.ha: True if this is an HA controller
- spec.repo.*.path: Path of the tar location (there are multiple paths to update)
- spec.app-config.generate-self-signed-certs: Generate and use self-signed certificates for incoming core traffic
- spec.star-domain: Wildcard FQDN (*.example.com)
- spec.override-config.global.enable_hosted_dns_server: True if no DNS server is available
- spec.app-config.logo: Logo displayed in the UI
- spec.override-config.localprovisioner.basePath: Path for PVC volumes
- spec.override-config.core-registry-path: Path for registry images
- spec.override-config.etcd-path: Path where etcd data is saved
Note: The settings above use dotted paths instead of nested keys to keep the example compact. For example, metadata.name looks like the following in the config.yaml file.
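As a sketch, the dotted path metadata.name corresponds to nested YAML keys (the name value below is a placeholder; use your own controller name):

```yaml
metadata:
  name: my-controller   # placeholder controller name
```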
Start the controller¶
- Start initializing the Controller using the command shown below
sudo radm init --config config.yaml
- Once initialization is complete, copy the admin config file to the home directory to access the kube controller API from CLI.
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) -R $HOME/.kube
- Install the dependencies which are required for the controller.
sudo radm dependency --config config.yaml
- Install the controller application.
sudo radm application --config config.yaml
This will bring up all the controller services.
Note: It will take approximately 20-30 minutes for all pods to be up and ready.
- Before proceeding further, confirm that all pods are in the Running state using kubectl.
kubectl get pods -A
Try accessing the Controller UI at https://console.<star-domain> (where <star-domain> is the wildcard FQDN configured in config.yaml) to verify that the installation was successful.
You should see a screen similar to the image below when you access the console.
Note: To access the controller UI from your local machine, add an /etc/hosts entry pointing the console FQDN to your controller IP. The IP address is shown after running the sudo radm init --config config.yaml command.
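For illustration, assuming a controller IP of 10.0.0.10 and a console FQDN of console.example.com (both hypothetical placeholders; substitute your own values), the /etc/hosts entry might look like:

```
10.0.0.10   console.example.com
```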
- Click the "Sign Up" link to create the first organization on the self-hosted controller
- Register a new account for the organization, as shown in the screenshot below
- Log in to this organization with the newly registered account on the login screen
Upload Cluster Dependencies¶
Run the following command to upload dependencies for Kubernetes cluster provisioning to the controller.
sudo radm cluster --config config.yaml
Multiple Interface Support¶
The controller supports multiple interfaces; the interface can be set in the config.yaml file during initialization. The selected interface is used for all connections related to the controller application and Kubernetes. By default, the primary interface is used.
spec:
  networking:
    interface: ens3
In cases where complete interface isolation is needed, note that a few pods that use host networking, such as the monitoring/metrics pods, do not adhere to the interface selection at the Kubernetes layer and still use the default interface. If complete traffic isolation on the interface is required, we recommend adding the routing rules below on your controller and clusters.
ip route add 10.96.0.0/12 dev <secondary-interface>
ip route add 10.224.0.0/16 dev <secondary-interface>
Hosted DNS support¶
In the absence of DNS servers in the infrastructure and cluster environment, the managed clusters may not have a way to communicate with the self hosted controller. In this case, the self hosted controller can also host its own DNS server and propagate the records to the cluster.
Hosted DNS can be enabled in the config.yaml using the flag below.
override-config:
  global.enable_hosted_dns_server: true
To access the controller UI from your local machine, add an /etc/hosts entry pointing the console FQDN to your controller IP. The IP address is shown after running the sudo radm init --config config.yaml command.
While provisioning clusters, add the -dns-server <controller-IP> flag to the conjurer command.
tar -xjf conjurer-linux-amd64.tar.bz2 && sudo ./conjurer -edge-name="test" \
  -passphrase-file="passphrase.txt" -creds-file="credentials.pem" -dns-server controller-IP