
Part 1: Import Cluster

This is Part 1 of a multi-part, self-paced quick start exercise.


What Will You Do

In Part 1, you will:

  • Create a new Project in your Org
  • Import your Kubernetes cluster running in Docker Desktop into this Project using a "minimal cluster blueprint"
  • Remotely access this cluster using the integrated, browser-based Zero Trust Kubectl

Assumptions

  • You have access to a laptop/desktop with Docker Desktop (v3.3 or higher) installed
  • You have enabled Kubernetes in Docker Desktop
  • You have access to an Org on the controller with admin privileges
  • You have a remote colleague

Docker Desktop with Kubernetes

  • Ensure you can run kubectl against the Kubernetes cluster running in Docker Desktop. Here is an example of what you should see.
kubectl get node -o wide

NAME             STATUS   ROLES    AGE   VERSION   INTERNAL-IP    EXTERNAL-IP   OS-IMAGE         KERNEL-VERSION     CONTAINER-RUNTIME
docker-desktop   Ready    master   37m   v1.19.3   192.168.65.4   <none>        Docker Desktop   5.10.25-linuxkit   docker://20.10.5
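If you prefer to script this readiness check, the STATUS column can be extracted directly. The sketch below runs against a sample line mirroring the listing above so it works without a cluster; on a live cluster, substitute `kubectl get node --no-headers` for the `printf`.

```shell
# Extract the STATUS column from sample `kubectl get node --no-headers` output.
# The sample line mirrors the listing above; on a live cluster, replace the
# printf with the real command.
node_status=$(printf '%s\n' \
  'docker-desktop   Ready    master   37m   v1.19.3   192.168.65.4   <none>' \
  | awk '{print $2}')
echo "$node_status"   # → Ready
```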

Step 1: Create Project

In this step, you will create a new project that will serve as a logically isolated "operating environment" (sub-tenant) for developers.

Note

Creating a project requires "Org Admin" privileges.

  • Log in to your Org as an Org Admin
  • Create a new project called "desktop"

New Project

  • Switch context to this project.

View New Project


Step 2: Import Cluster

In this step, you will import your Kubernetes cluster in Docker Desktop into this project. You will use the "minimal" blueprint which comes with just the Kubernetes Management Operator components so that only minimal resources are deployed to your cluster.


Step 2a: Create Cluster

  • Click on New Cluster and Select "Import Existing Kubernetes Cluster"
  • Select "Datacenter/Edge" for Type
  • Select "Other" for Kubernetes Distribution
  • Provide a name such as "desktop" and Continue

Select Environment


Step 2b: Configure Cluster

In this step, you will configure the cluster details.

  • Optionally, select a location for the cluster
  • Select "Minimal" for cluster blueprint and Continue

Cluster Specification

At this point, the controller has everything it needs and will provide you with a cryptographically unique "cluster bootstrap" YAML file.

  • Download the bootstrap YAML file

Cluster Specification


Step 3: Import Cluster

Use kubectl to apply the "cluster bootstrap" file on your Kubernetes cluster.

kubectl apply -f desktop-bootstrap.yaml

This deploys a number of Kubernetes resources on the cluster: it creates a namespace for the Kubernetes management operator, downloads the container images, and brings all resources to an operational state. This process can take 1-2 minutes. On the console, you will notice that the imported cluster has registered itself and started receiving instructions from the controller.

namespace/rafay-system created
podsecuritypolicy.policy/rafay-privileged-psp created
clusterrole.rbac.authorization.k8s.io/rafay:manager created
clusterrolebinding.rbac.authorization.k8s.io/rafay:rafay-system:manager-rolebinding created
clusterrole.rbac.authorization.k8s.io/rafay:proxy-role created
clusterrolebinding.rbac.authorization.k8s.io/rafay:rafay-system:proxy-rolebinding created
priorityclass.scheduling.k8s.io/rafay-cluster-critical created
role.rbac.authorization.k8s.io/rafay:leader-election-role created
rolebinding.rbac.authorization.k8s.io/rafay:leader-election-rolebinding created
customresourcedefinition.apiextensions.k8s.io/namespaces.cluster.rafay.dev created
customresourcedefinition.apiextensions.k8s.io/tasklets.cluster.rafay.dev created
customresourcedefinition.apiextensions.k8s.io/tasks.cluster.rafay.dev created
service/controller-manager-metrics-service created
deployment.apps/controller-manager created
configmap/connector-config created
configmap/proxy-config created
deployment.apps/rafay-connector created
service/rafay-drift created
validatingwebhookconfiguration.admissionregistration.k8s.io/rafay-drift-validate created

Step 4: Cluster Status

You can check the status of the Kubernetes management operator pods on your cluster using kubectl:

kubectl get po -n rafay-system

NAME                                 READY   STATUS    RESTARTS   AGE
controller-manager-bf685d59f-kddqp   1/1     Running   0          59s
debug-client-7cb778456f-7x2nl        1/1     Running   0          59s
edge-client-56cbf89999-gh99s         1/1     Running   0          62s
rafay-connector-699d8dc5f8-6dqmt     1/1     Running   0          59s
relay-agent-84bc56d4dc-tm2kq         1/1     Running   0          60s
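The same health check can be scripted by counting pods whose STATUS is anything other than `Running`; zero means the operator is up. The sketch below runs against two sample lines from the listing above; on a live cluster, substitute `kubectl get po -n rafay-system --no-headers` for the here-doc.

```shell
# Count pods not yet in the Running state; 0 means the operator is healthy.
# Sample lines stand in for `kubectl get po -n rafay-system --no-headers`.
not_running=$(awk '$3 != "Running"' <<'EOF' | wc -l
controller-manager-bf685d59f-kddqp   1/1     Running   0          59s
rafay-connector-699d8dc5f8-6dqmt     1/1     Running   0          59s
EOF
)
echo "$not_running"   # → 0
```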

Once the k8s operator is operational, it will "maintain a heartbeat" with the controller, and this status will be reported on the console as "Healthy".

Successful Import


Step 5: Zero Trust Kubectl

Although you can access the Kubernetes cluster on your desktop locally using a terminal, your remote colleagues have no way to reach it.

  • Ask a remote colleague with a valid account in your Org/Project to access the cluster via the web console
  • Navigate to the cluster and click on "Kubectl"
  • This will launch a web-based, zero trust kubectl shell, allowing your colleague to securely interact with the k8s API server

ZTKA to k8s


Recap

Congratulations! At this point, you have

  • Successfully imported an existing Kubernetes cluster on Docker Desktop to your project
  • Asked a remote user to securely access your k8s cluster behind a firewall using zero trust kubectl access (ZTKA)