Part 1: Create Project
This is Part 1 of a multi-part, self-paced quick start exercise.
What Will You Do
In Part 1, you will:
- Create a new Project in your Org
- Import a Kubernetes cluster into this Project using a "cluster blueprint"
- Remotely access this cluster using the integrated, browser-based Zero Trust Kubectl
Estimated Time
This part should take approximately 15 minutes.
Step 1: Create Project
In this step, we will create a new project that will serve as a logically isolated "operating environment" (i.e., a sub-tenant).
Note
Creating a project requires "Org Admin" privileges.
- Log in to your Org as an Org Admin
- Create a new project called "desktop"
- Switch context to this project by clicking on it.
Step 2: Import Cluster
In this step, you will import your Kubernetes cluster into this project. We will use the "minimal" blueprint which comes with just the Kubernetes Management Operator components so that only minimal resources are deployed to the Kubernetes cluster.
Create
- Click on New Cluster and select "Import Existing Kubernetes Cluster"
- Select "Datacenter/Edge" for Type
- Select "Other" for Kubernetes Distribution
- Provide a name such as "desktop" and Continue
Configure
In this step, you will provide the cluster's configuration.
- Ensure that the "minimal" cluster blueprint is selected and click on Continue.
You will be provided with a cryptographically unique "cluster bootstrap" YAML file.
- Download the bootstrap YAML file
Step 3: Apply Bootstrap File
Use kubectl to apply the "cluster bootstrap" file on your Kubernetes cluster.
kubectl apply -f desktop-bootstrap.yaml
This will create a namespace for the k8s management operator, download the container images, and register the cluster with the controller. This one-time import process typically takes about two minutes, depending on the speed of your Internet connection for downloading the required images.
namespace/rafay-system created
serviceaccount/system-sa created
Warning: policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
podsecuritypolicy.policy/rafay-privileged-psp created
clusterrole.rbac.authorization.k8s.io/rafay:manager created
clusterrolebinding.rbac.authorization.k8s.io/rafay:rafay-system:manager-rolebinding created
clusterrole.rbac.authorization.k8s.io/rafay:proxy-role created
clusterrolebinding.rbac.authorization.k8s.io/rafay:rafay-system:proxy-rolebinding created
priorityclass.scheduling.k8s.io/rafay-cluster-critical created
role.rbac.authorization.k8s.io/rafay:leader-election-role created
rolebinding.rbac.authorization.k8s.io/rafay:leader-election-rolebinding created
customresourcedefinition.apiextensions.k8s.io/namespaces.cluster.rafay.dev created
customresourcedefinition.apiextensions.k8s.io/tasklets.cluster.rafay.dev created
customresourcedefinition.apiextensions.k8s.io/tasks.cluster.rafay.dev created
service/controller-manager-metrics-service-v3 created
deployment.apps/controller-manager-v3 created
configmap/connector-config-v3 created
configmap/proxy-config-v3 created
deployment.apps/rafay-connector-v3 created
service/rafay-drift-v3 created
validatingwebhookconfiguration.admissionregistration.k8s.io/rafay-drift-validate-v3 created
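Rather than polling for pod status manually, you can wait for the deployments created above to finish rolling out. The deployment names below are taken from the apply output in this guide; adjust them if your bootstrap file differs:

```shell
# Wait for the management operator deployments created by the bootstrap file.
# Each command blocks until the rollout completes or the timeout is reached.
kubectl rollout status deployment/controller-manager-v3 -n rafay-system --timeout=180s
kubectl rollout status deployment/rafay-connector-v3 -n rafay-system --timeout=180s
```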
Step 4: Check Cluster Status
On the console, you will notice that the imported cluster has registered itself and will start receiving instructions from the controller. You can also check the status of the management operator pods on your cluster using kubectl.
kubectl get po -n rafay-system
You should see something like
NAME READY STATUS RESTARTS AGE
controller-manager-v3-6ccfddbb76-7xgxw 1/1 Running 0 117s
rafay-connector-v3-d9b5646dd-z2pvl 1/1 Running 0 117s
edge-client-6d9b49585-p9f29 1/1 Running 0 34s
relay-agent-f495ddcbc-zsmtd 1/1 Running 0 33s
Once the k8s operator is operational, it will establish and maintain a heartbeat with the controller.
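If you prefer a single command over re-running kubectl get, you can block until every operator pod reports Ready. This is a convenience check, not a required step:

```shell
# Block until all pods in the rafay-system namespace are Ready,
# failing after five minutes if the operator never comes up.
kubectl wait --for=condition=Ready pod --all -n rafay-system --timeout=300s
```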
Troubleshooting
Here are some common conditions that can cause issues with the import process.
Blocking Firewall
The k8s operator pods installed in your cluster need to connect out on port 443 and establish a long-running, mTLS-based control channel to the SaaS Controller. If the following pods remain in a Pending state for several minutes, a network firewall is most likely blocking outbound connections, and installation will not proceed.
kubectl get po -n rafay-system
NAME READY STATUS RESTARTS AGE
controller-manager-54db66978c-kp856 0/1 Pending 0 6m48s
rafay-connector-75649c86f-l876q 0/1 Pending 0 6m48s
To confirm this, you can use "kubectl logs"
kubectl logs rafay-connector-<pod id> -n rafay-system
If you do not see a "connected to core" message, it is most likely a firewall or a DNS issue.
{"level":"info","ts":"2021-10-05T14:37:11.807Z","caller":"connector/connector.go:116","msg":"registering connector"}
{"level":"info","ts":"2021-10-05T14:37:11.818Z","caller":"connector/connector.go:123","msg":"registered connector"}
{"level":"info","ts":"2021-10-05T14:37:11.818Z","caller":"connector/connector.go:124","msg":"connecting to core"}
{"level":"info","ts":"2021-10-05T14:37:11.828Z","caller":"connector/connect.go:48","msg":"connecting","to":"control.rafay.dev:443"}
{"level":"info","ts":"2021-10-05T14:37:11.954Z","caller":"connector/connector.go:131","msg":"connected to core"}
Solution: Allowlist the Controller's IPs in your firewall and retry the import.
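To check outbound connectivity independently of the operator, you can attempt a TLS handshake to the controller endpoint from a machine on the same network as your cluster nodes. control.rafay.dev:443 is the endpoint shown in the connector logs above; substitute the endpoint from your own logs if it differs:

```shell
# Attempt a TLS handshake with the SaaS Controller on port 443.
# A successful handshake prints the negotiated protocol and cipher;
# a firewall block typically manifests as a connection timeout.
openssl s_client -brief -connect control.rafay.dev:443 </dev/null
```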
No DNS
Ensure your cluster has DNS configured and enabled. The pods require DNS to resolve the SaaS Controller's address on the Internet in order to connect to it.
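You can verify in-cluster DNS resolution with a short-lived pod. The busybox:1.36 image is just an example; any image that ships nslookup will work:

```shell
# Launch a one-off pod, resolve the controller endpoint, then clean up.
# --rm deletes the pod once the command exits.
kubectl run dns-check --rm -it --restart=Never --image=busybox:1.36 \
  -- nslookup control.rafay.dev
```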
Resources
Ensure your cluster has sufficient resources available for the pods to become operational.
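One way to spot a resource shortfall is to compare each node's allocatable capacity against what is already requested:

```shell
# Show the "Allocated resources" summary for every node;
# values near 100% suggest new pods may fail to schedule.
kubectl describe nodes | grep -A 8 "Allocated resources"
```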
Network Bandwidth
Ensure you have a reasonable and stable connection to the Internet.
Recap
Congratulations! At this point, you have successfully imported an existing Kubernetes cluster into your project. You are ready to progress to the next part.