
Installation

Here are the detailed instructions for installing the Self Hosted Controller in EKS environments.


Before Installation

  • Create an instance/node with the specifications described in the requirements.
  • Create wildcard DNS entries for the controller domains mentioned in the requirements, and point their A records to the node or load balancer IP addresses.
  • (Optional) Generate a wildcard certificate for the FQDN which is signed by a certificate authority. Alternatively, configure the controller to use self-signed certificates.
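
For illustration, a wildcard A record in zone-file form (the domain and IP are placeholders; the IP should be your node or load balancer address, as used in examples later in this guide):

*.example.com.    IN    A    123.45.67.89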

Install RADM Services

Download and install the controller installation package.

  • Log in and download the controller installation package to the instance.
  • Optionally, verify the package with MD5SUM or SHA256SUM.

    https://rafay-airgap-controller.s3.us-west-2.amazonaws.com/1.5.w2/MD5SUM 
    https://rafay-airgap-controller.s3.us-west-2.amazonaws.com/1.5.w2/SHA256SUM
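
    For example, a quick SHA256 verification with GNU coreutils (a sketch; --ignore-missing skips checksum entries for files not present locally):

    curl -LO https://rafay-airgap-controller.s3.us-west-2.amazonaws.com/1.5.w2/SHA256SUM
    sha256sum -c SHA256SUM --ignore-missing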
    
  • From your home directory, untar the package using the command below

    tar -xf rafay-controller-*
    

Example:

tar -xf rafay-controller-1.13-19-eks.tar.gz
  • Copy and edit the config.yaml file.

    sudo mv ./radm /usr/bin/
    cp -rp config.yaml-eks-tmpl config.yaml
    vi config.yaml
    
  • Customize the config.yaml. The following settings should be updated.

    metadata.name: Name of the controller.
    spec.networking.interface: Interface for controller traffic [optional]
    spec.deployment.ha: True if this is an HA controller.
    spec.repo.*.path: Path of the tar location.
    spec.app-config.generate-self-signed-certs: Generates and uses self-signed certs for incoming core traffic.
    spec.star-domain: Wildcard FQDN (*.example.com)
    spec.override-config.global.enable_hosted_dns_server: True if DNS is not available.
    spec.app-config.logo: Display logo in UI.
    spec.override-config.localprovisioner.basePath: Path for PVC volumes.
    spec.override-config.core-registry-path: Path for registry images.
    spec.override-config.etcd-path: Path where etcd data is saved.
    spec.override-config.global.external_lb: Set to True to use an external LB.
    spec.override-config.global.use_instance_role: Set to True to provision EKS clusters using the controller IAM role. See "Provision EKS Clusters Using Controller Instance IAM" below.
    If the instance role is not used, the following parameters can be set instead to add a cross-account ID and credentials:
    spec.override-config.global.secrets.aws_account_id
    spec.override-config.global.secrets.aws_access_key_id
    spec.override-config.global.secrets.aws_secret_access_key
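
    For illustration only, a hypothetical excerpt of an edited config.yaml; the exact keys follow the template copied above, and all values here are placeholders:

    metadata:
      name: rafay-controller
    spec:
      deployment:
        ha: false
      star-domain: "*.example.com"
      app-config:
        generate-self-signed-certs: true
      override-config:
        global.use_instance_role: true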
    
  • Initialize the controller using the following command.

    Note

    For an HA controller, make sure HA is enabled in the config.yaml file.

    sudo radm init --config config.yaml
    
  • After initialization is complete, copy the admin config file to the home directory to access the controller's Kubernetes API and CLI.

    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) -R $HOME/.kube
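
    To confirm that kubectl can now reach the controller's API server:

    kubectl cluster-info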
    

For HA Controller (Optional)

The HA controller requires a minimum of three masters to maintain high availability.

After initialization is complete, the output of the init command contains the radm join command, which must be executed on the additional master and worker nodes. Below is an example.

radm join 123.45.67.89:6443 --token abcpfq.itrb2oi9c4l13123 \
--discovery-token-ca-cert-hash  sha256:453fa8d4624dfab0cd5xxxxxxxxxxxxxxx2b5f1490 \
--control-plane --certificate-key e216b87f10325315xxxxxxxxxxx6bb815586dd3db --config config.yaml

After running the radm join command on all nodes, the controller quorum is formed and can be confirmed by listing the nodes.

kubectl get nodes
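
As a convenience check (not part of the packaged tooling), kubectl can block until every node reports Ready:

kubectl wait --for=condition=Ready node --all --timeout=10m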

Install Controller Dependencies

Install the controller dependencies.

sudo radm dependency --config config.yaml

Install Controller Apps

Install the controller application.

sudo radm application --config config.yaml

This will bring up all of the controller services. This process can take 20-30 minutes for all pods to be up and ready.

To confirm that all pods are in a running state, use the following command.

kubectl get pods -A
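
To surface only the pods that are not yet Running (note that pods from completed Jobs also match this filter):

kubectl get pods -A --field-selector=status.phase!=Running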

Accessing the Web Console

Try accessing the self hosted controller at https://console.<company.example.com> to verify that the installation was successful.

  • A sign-in screen appears when you access the UI.

  • Click the Sign Up link to create the first Organization of the self hosted controller

  • Register a new account for the organization.

  • Log in to this Organization with the newly registered account on the login screen.

Upload Cluster Dependencies

Run the following command to enable support for Kubernetes cluster provisioning from the self hosted controller and to upload the required provisioning dependencies to the controller.

sudo ./radm cluster --config config.yaml --kubeconfig <eks cluster config file>

Example:

sudo ./radm cluster --config config.yaml --kubeconfig eks-config.yaml

Setup EKS Cluster Provisioning

With the self hosted controller accessible through the console URL, use the following sections to set up the controller for EKS cluster provisioning.

Provision EKS Clusters Using Controller Instance IAM

Use the controller instance IAM role to provision EKS clusters in the same AWS account as the controller.

  • The following parameter must be enabled in the config.yaml file to use this feature.

    global.use_instance_role: true

  • If this feature is enabled after deploying the controller, rerun the RADM application command.

    sudo radm application --config config.yaml

  • Create the following IAM policy for the controller EC2 instance to allow STS, PassRole, and CloudFormation access.

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Sid": "VisualEditor0",
          "Effect": "Allow",
          "Action": "sts:*",
          "Resource": "*"
        },
        {
          "Sid": "cloudformation",
          "Effect": "Allow",
          "Action": [
            "cloudformation:*"
          ],
          "Resource": "*"
        },
        {
          "Sid": "iam",
          "Effect": "Allow",
          "Action": [
            "iam:PassRole"
          ],
          "Resource": "*"
        }
      ]
    }
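
    If you prefer the AWS CLI to the console, an equivalent sketch (the policy name and file path are placeholders):

    aws iam create-policy --policy-name rafay-controller-sts-cf --policy-document file://controller-policy.json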
    
  • Create an IAM policy to use for provisioning EKS clusters. See IAM Policy for more information about the IAM policy.

  • After creating the two policies, create a new IAM role for the controller EC2 instance to use for EKS cluster provisioning. Choose the EC2 use case.


  • Under Policies, choose the STS and EKS policies created above, and attach them to the role.


  • Provide the role name and create the IAM role.


  • After the role is created, edit the trust relationship of the IAM role so that it trusts the controller EC2 instance.


  • Edit the trust relationship of the IAM role by replacing the Principal with the controller EC2 instance's account ID and instance ID.

    Example: arn:aws:sts::<accountid>:assumed-role/aws:ec2-instance/<instance id>
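
    The resulting trust policy would look roughly like this sketch (standard IAM trust-policy form; the account ID and instance ID are placeholders):

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Principal": {
            "AWS": "arn:aws:sts::<accountid>:assumed-role/aws:ec2-instance/<instance id>"
          },
          "Action": "sts:AssumeRole"
        }
      ]
    }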


  • Attach the IAM role to the EC2 instance of the controller. The controller is now trusted through the IAM instance role to provision EKS clusters in the same AWS account as the controller EC2 instance.
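
    As a console-free alternative, the attachment can be sketched with the AWS CLI (the profile name, role name, and instance ID are placeholders):

    aws iam create-instance-profile --instance-profile-name rafay-controller-profile
    aws iam add-role-to-instance-profile --instance-profile-name rafay-controller-profile --role-name <role name>
    aws ec2 associate-iam-instance-profile --instance-id <instance id> --iam-instance-profile Name=rafay-controller-profile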

  • See Create Cloud Credentials for creating the cloud credentials in the console to use for EKS cluster provisioning.

  • See Create Cluster for EKS cluster provisioning in the same AWS account as the controller, using the above cloud credentials.

Use AWS Account ID, Access Key, and Secret to Provision EKS Clusters

Use the controller AWS Account ID, Access Key, and Secret to provision EKS clusters in the same AWS account as the controller EC2 instances or in different AWS accounts.

  • Create the STS policy and name it rafay-sts-policy.

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Sid": "VisualEditor0",
          "Effect": "Allow",
          "Action": "sts:*",
          "Resource": "*"
        }
      ]
    }
    
  • Assign the following policies to the IAM user used for delegate accounts:

      • rafay-sts-policy

      • AmazonEC2ReadOnlyAccess (AWS Managed Policy)

  • The following parameter must be disabled in the config.yaml file to use this feature.

    global.use_instance_role: false

  • Optionally, you can provide details for the IAM user account, AWS secret, and key in the config.yaml file. If you do not want to provide these keys, see Adding AWS Account, Secret, and Key using Curl.

    global.secrets.aws_account_id: "<AWS Account ID where controller is running>"
    global.secrets.aws_access_key_id: <Base64 encoded string of access key>
    global.secrets.aws_secret_access_key: <Base64 encoded string of aws secret>
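
    The Base64 strings can be generated with coreutils; the -n flag keeps a trailing newline out of the encoded value (the key values shown are placeholders):

    echo -n '<aws access key id>' | base64
    echo -n '<aws secret access key>' | base64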
    
  • If this feature is disabled after deploying the controller, rerun the RADM application command.

    sudo radm application --config config.yaml

Adding AWS Account, Secret, and Key Using Curl

  • Access the operations console for the controller using the following URL. Use the super-user credentials to log in. The super-user credentials were set in the config.yaml file.

    https://ops-console.<company.example.com>

  • After login, run the following command from your local system, using curl to update the AWS key and secret to the controller.

    Note

    CSRF tokens and RSA ID can be obtained from the browser's inspect screen after login. In the following example, replace the token, cookie, and credential values with your own.

    curl -X PUT 'https://ops-console.<rafay.example.com>/edge/v1/providers/rx28oml/?partner_id=rx28oml&organization_id=rx28oml' \
    -H 'authority: ops-console.<rafay.example.com>' \
    -H 'x-rafay-partner: rx28oml' \
    -H 'accept: application/json, text/plain, */*' \
    -H 'x-csrftoken: BuSAE3rVCGCwO45N8ne2nKyXiiR53ZL2xPNi6qk2MuVvKHytdH4nKGCtkZZHajN3' \
    -H 'user-agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/88.0.4324.192 Safari/537.36' \
    -H 'content-type: application/json;charset=UTF-8' \
    -H 'origin: https://ops-console.<rafay.example.com>' \
    -H 'sec-fetch-site: same-origin' \
    -H 'sec-fetch-mode: cors' \
    -H 'sec-fetch-dest: empty' \
    -H 'referer: https://ops-console.<rafay.example.com>/' \
    -H 'accept-language: en' \
    -H 'cookie: logo_link=/logo.png; support_email=support@rafay.co; csrftoken=BuSAE3rVCGCwO45N8ne2nKyXiiR53ZL2xPNi6qk2MuVvKHytdH4nKGCtkZZHajN3; rsid=9zmsei147cok9qjqq2mkyxwqpps4ixr0' \
    --data-raw '{"id":"rx28oml","partner_id":"rx28oml","credentials":"{\"account_id\":\"790000000230\",\"access_id\":\"YOUR_AWS_ACCESS_KEY_ID\",\"secret_key\":\"YOUR_AWS_ACCESS_SECRET_KEY\"}","provider":1,"credential_type":0,"name": "default-partner-credentials-1","delegate_account":true, "created_at": "2021-09-25T15:08:03.356164Z"}' \
    --compressed \
    --insecure
    
  • See Credentials for EKS and follow the steps to create credentials to provision EKS clusters through the console user-interface.

  • See Create Cluster and follow the steps for EKS cluster provisioning.

Multiple Interface Support

The controller supports multiple network interfaces; the interface to use can be set in the config.yaml file during initialization. The selected interface is used for all connections related to applications and Kubernetes. By default, the primary interface is used.

spec:
  networking:
    interface: ens3

Note that a few pods that use host networking, such as the monitoring/metrics pods, do not adhere to the interface selection at the Kubernetes layer and still use the default interface. If complete traffic isolation is needed, add the following routing rules on your controller and clusters.

ip route add 10.96.0.0/12 dev <secondary-interface>
ip route add 10.224.0.0/16 dev <secondary-interface>

Hosted DNS Support

In the absence of DNS servers in the controller and cluster environments, clusters have no way to communicate with the controller. In this case, the controller can host its own DNS server and propagate the records to the clusters.

Hosted DNS support can be enabled using the following flag in the config.yaml file.

override-config:
  global.enable_hosted_dns_server: true

To access the controller user interface from your local machine, add an /etc/hosts entry mapping the console FQDN to your controller IP address.

123.45.67.89 console.<company.example.com>

While provisioning clusters, add the "-d <Controller-IP>" flag to the Conjurer command.

tar -xjf conjurer-linux-amd64.tar.bz2 && sudo ./conjurer -edge-name="test"     \
-passphrase-file="passphrase.txt" -creds-file="credentials.pem" -d <Controller-IP>