
Installation

Install a single node Self Hosted Controller in Bare Metal/VM server environments.

Preparation

  • Create an instance/node with the specifications described in Infrastructure Requirements
  • Create wildcard DNS entries for the Controller domains mentioned in DNS Record Creation above, and point their A records to the node/LB IP addresses
  • (Optional) Generate a wildcard certificate for the FQDN, signed by a CA. Alternatively, configure the controller to use self-signed certificates
  • If the host operating system is Ubuntu 22.04, the following steps are mandatory (a verification check is shown after this list).
     sudo sed -i 's/^GRUB_CMDLINE_LINUX=""$/GRUB_CMDLINE_LINUX="systemd.unified_cgroup_hierarchy=0"/' /etc/default/grub
     sudo update-grub
     sudo reboot
    
    Watch a video showcasing installation of the self-hosted controller in an air-gapped environment.
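
After the reboot on Ubuntu 22.04, you can optionally confirm that the node booted with cgroup v1 (the GRUB change above disables the unified cgroup v2 hierarchy). This is a generic check, not part of the controller package:

stat -fc %T /sys/fs/cgroup/
# prints "tmpfs" on cgroup v1 and "cgroup2fs" on cgroup v2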

Install RADM

  • Click here to download the controller installation package to the Bare Metal/VM server.

  • From your home directory, untar the package using the command below.

tar -xf rafay-controller-v*.tar.gz

Example: tar -xf rafay-controller-v1.24-v2.tar.gz

  • Move the radm binary to /usr/bin.
sudo mv ./radm /usr/bin/
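
To confirm the binary is now on the PATH (assuming /usr/bin is on your PATH), run:

command -v radm
# expected output: /usr/bin/radm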

Customize the config.yaml

  • Copy the config.yaml template to create config.yaml.
cp -rp config.yaml-tmpl config.yaml
  • Edit the config.yaml file.
vi config.yaml

When modifying the config.yaml file, it is recommended to update the following settings. For spec.repo.*.path, there are multiple paths to update. Example: change /home/centos/ to /home/folder_name/.

See a full example of the Config YAML file.

spec.deployment.ha: Set to true for an HA controller.
spec.repo.*.path: Path to the directory where the controller package tar was extracted
spec.app-config.generate-self-signed-certs: If true, generates and uses self-signed certificates for incoming core traffic. If false, provide the base64-encoded values of the full-chain certificate and key in the two settings below
spec.console-certificates.certificate: Base64-encoded full-chain certificate
spec.console-certificates.key: Base64-encoded private key
spec.app-config.partner.star-domain: Wildcard FQDN (*.example.com)
spec.app-config.super-user.user: Email address used as the super-user username
spec.app-config.super-user.password: Password for the super-user
spec.app-config.partner.help-desk-email: Help desk email address
spec.app-config.partner.notifications-email: Notifications email address

Note

The list above uses dot notation instead of nested lines to keep it compact.
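
For example, metadata.name and a few of the spec keys expand into nested YAML as shown below; the values are illustrative placeholders, not defaults:

metadata:
  name: <controller-name>
spec:
  deployment:
    ha: false
  app-config:
    partner:
      star-domain: "*.example.com"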


Multiple Interface Support

The controller supports multiple network interfaces; the interface to use can be set in the config.yaml file during initialization. The selected interface is used for all connections related to applications and Kubernetes. By default, the primary interface is used.

spec:
  networking:
    interface: ens3
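
To see which interface names exist on the node before setting this value, they can be listed with iproute2:

ip -br addr show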

In cases where complete interface isolation is needed, note that a few pods that use host networking, such as the monitoring/metrics pods, do not adhere to the interface selection at the Kubernetes layer and still use the default interface. If complete traffic isolation is required, add the following routing rules on your controller and clusters.

ip route add 10.96.0.0/12 dev <secondary-interface>
ip route add 10.224.0.0/16 dev <secondary-interface>

Hosted DNS Support

In the absence of DNS servers in the controller and cluster environments, the cluster has no way to communicate with the controller. In this case, the controller can host its own DNS server and propagate the records to the cluster.

Hosted DNS support can be enabled using the following flag in the config.yaml file.

override-config:
  global.enable_hosted_dns_server: true

To access the controller user interface from your local machine, add an /etc/hosts entry pointing the console FQDN to your controller IP address.

123.45.67.89 console.<company.example.com>

Before provisioning the clusters, add an /etc/hosts entry on the cluster nodes pointing the console FQDN to your controller IP address so that the nodes can download the conjurer binary.

123.45.67.89 console.<company.example.com>
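
For example, the entry can be appended from a shell on each node; replace the IP address and FQDN with your own values:

echo "123.45.67.89 console.company.example.com" | sudo tee -a /etc/hosts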

While provisioning clusters, add the -d <Controller-IP> flag to the conjurer command.

tar -xjf conjurer-linux-amd64.tar.bz2 && sudo ./conjurer -edge-name="test"     \
-passphrase-file="passphrase.txt" -creds-file="credentials.pem" -d <Controller-IP>

Start the controller

  • Start initializing the Controller using the command shown below. If the controller is HA, run the command from any one controller instance.

    Note

    • For an HA controller, make sure HA is enabled in the config.yaml file.
    • Copy the updated config.yaml file to the other controller instances (an example is shown below).
    • Keep the same folder path in all controller instances to store the extracted controller packages.
    sudo radm init --config config.yaml
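
    For an HA controller, one way to distribute the edited config.yaml to the other controller instances is scp; the user and host below are placeholders:

    scp config.yaml <user>@<other-controller-ip>:~/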
    
  • Once initialization is complete, copy the admin kubeconfig file to the home directory to access the controller's Kubernetes API from the CLI. This applies to both single-node and HA controllers.

    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) -R $HOME/.kube
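
    To verify CLI access to the controller's Kubernetes API at this point, you can run, for example:

    kubectl get nodes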
    

    Note

    For HA, see For HA Controller (Optional) to join the other control plane and worker nodes.

  • Install the dependencies which are required for the controller. For HA, run the command only once on any control plane node.

    sudo radm dependency --config config.yaml
    
  • Install the controller application.

    sudo radm application --config config.yaml
    

    This will bring up all the controller services.

    Note

    It takes approximately 20-30 minutes for all pods to be up and ready.

  • Before proceeding further, confirm that all pods are in the Running state using kubectl.

    kubectl get pods -A
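
    To list only pods that are not yet in the Running phase, kubectl's field selector can be used, for example:

    kubectl get pods -A --field-selector=status.phase!=Running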
    

For HA Controller (Optional)

The HA controller requires a minimum of three control plane nodes to maintain high availability.

After initialization is complete, the output of the init command contains the radm join command which has to be executed on the control plane nodes only. Below is an example.

radm join 123.45.67.89:6443 --token abcpfq.itrb2oi9c4l13123 \
--discovery-token-ca-cert-hash  sha256:453fa8d4624dfab0cd5xxxxxxxxxxxxxxx2b5f1490 \
--control-plane --certificate-key e216b87f10325315xxxxxxxxxxx6bb815586dd3db --config config.yaml

After initialization is complete, the output of the init command contains the radm join command which has to be executed on the worker nodes only. Below is an example.

radm join 123.45.67.89:6443 --token abcpfq.itrb2oi9c4l13123 \
--discovery-token-ca-cert-hash  sha256:453fa8d4624dfab0cd5xxxxxxxxxxxxxxx2b5f1490

After running the radm join command on all nodes, the controller quorum is formed and can be confirmed by listing the nodes.

kubectl get nodes
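
To confirm that the control plane nodes specifically are present and Ready, they can also be filtered by role label (on older Kubernetes versions the label may be node-role.kubernetes.io/master instead):

kubectl get nodes -l node-role.kubernetes.io/control-plane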

Access Console

  • Access the Controller UI at https://console.<company.example.com> to verify that the installation was successful.

  • You should see the controller sign-in screen when you access the console.

Note

To access the controller UI from your local machine, add an /etc/hosts entry pointing the console FQDN to your controller IP address. The IP address is displayed after running the sudo radm init --config config.yaml command. Example: 123.45.67.89 console.<company.example.com>.


  • Click the Sign Up link to create the first organization of the self-hosted controller
  • Register a new account for the organization


  • Log in to this organization with the newly registered account on the login screen

Upload Cluster Dependencies

Run the following command once, on any control plane node, to enable Kubernetes cluster provisioning from the Air Gap Controller and to upload the dependencies required for Kubernetes cluster provisioning to the controller.

sudo radm cluster --config config.yaml