Preflight Checks
A battery of automated preflight checks is run as part of the provisioning process to ensure that nodes and the environment are compatible and meet minimum requirements. The following preflight tests are currently performed:
| # | Preflight Check |
|---|---|
| 1 | Is the node running a compatible OS and version? |
| 2 | Does the node have the minimum CPU resources? |
| 3 | Does the node have the minimum memory resources? |
| 4 | Does the node have outbound Internet connectivity? |
| 5 | Is the node able to connect to the Controller? |
| 6 | Is the node able to perform a DNS lookup of the Controller? |
| 7 | Is the node's time synchronized with NTP? |
| 8 | Does the node have the minimum, compatible storage? |
| 9 | Is a firewall installed? |
| 10 | Is Kubernetes already installed on the node? |
| 11 | Does the hostname contain an underscore? (not compatible with the Kubernetes naming convention) |
| 12 | Is port 53 already in use? (Consul operates as a local DNS server for service discovery) |
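A few of these checks can be approximated by hand before running the installer. The snippet below is an illustrative sketch, not the product's actual implementation: it mirrors the hostname underscore check (#11), the minimum memory check (#3, using the 32 GB threshold shown in the sample output on this page), and the port 53 check (#12).

```shell
#!/bin/sh
# Hand-run approximations of three preflight checks (illustrative only).

# Check 11: Kubernetes node names may not contain underscores.
hostname_check() {
  case "$1" in
    *_*) echo "FAIL" ;;   # underscore breaks k8s naming convention
    *)   echo "PASS" ;;
  esac
}

# Check 3: minimum memory, argument in kB (32 GB = 33554432 kB).
memory_check() {
  [ "$1" -ge 33554432 ] && echo "PASS" || echo "WARN"
}

# Check 12: port 53 must be free for the embedded Consul DNS server.
port53_check() {
  if ss -lntu 2>/dev/null | grep -q ':53 '; then
    echo "FAIL"
  else
    echo "PASS"
  fi
}

hostname_check "$(hostname)"
memory_check "$({ awk '/MemTotal/ {print $2}' /proc/meminfo; } 2>/dev/null || echo 0)"
port53_check
```

Each function prints `PASS`, `WARN`, or `FAIL` so the results can be scripted or eyeballed before invoking the installer.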
An illustrative example is shown below in which the preflight checks detected an incompatible node during provisioning.
```
tar -xjf conjurer-linux-amd64.tar.bz2 && sudo ./conjurer -edge-name="onpremcluster" -passphrase-file="onpremcluster-passphrase.txt" -creds-file="onpremcluster-credentials.pem" -t
```

```
[+] Performing pre-tests
[+] Operating System check
[+] CPU check
[+] Memory check
[+] Internet connectivity check
[+] Connectivity check to controller registry
[+] DNS Lookup to the controller
[+] Connectivity check to the Controller
!INFO: Attempting mTLS connection to salt.core.stage.rafay-edge.net:443
[+] Multiple default routes check
[+] Time Sync check
[+] Storage check
!WARNING: No raw unformatted volume detected with more than 50GB. Cannot configure node as a master or storage node.
[+] Detected following errors during the above checks
!WARNING: System Memory 28GB is less than the required 32GB.
Checking for Fatal errors
[+] Detected following fatal errors during the above checks
!ERROR: Detected a previously installed version of Docker on this node. Please remove the prior Docker package and retry.
!ERROR: Detected a previously installed version of Kubernetes on this node. Please remove the prior Kubernetes packages (kubectl, kubeadm, kubelet, kubernetes-cni, etc.) and retry.
```
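Before retrying after the fatal errors above, it can help to confirm which leftover binaries are still on the node. The sketch below is a quick manual equivalent of those two checks, not the vendor's tooling; it looks for the package binaries named in the error messages.

```shell
#!/bin/sh
# Illustrative check for leftover Docker / Kubernetes binaries that would
# trigger the fatal preflight errors shown above.
leftover_check() {
  found=""
  for bin in docker kubectl kubeadm kubelet; do
    if command -v "$bin" >/dev/null 2>&1; then
      echo "remove: $bin"   # binary still on PATH; uninstall before retrying
      found=yes
    fi
  done
  if [ -z "$found" ]; then
    echo "clean"            # no leftover binaries detected
  fi
}

leftover_check
```

The output lists each binary that should be removed, or `clean` if none were found on the node's PATH.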
Latency Test on MKS Nodes
Latency testing is crucial for optimizing performance and reliability in MKS cluster provisioning. By measuring latency, users can proactively identify issues before they affect provisioning. To assess latency between the MKS node(s) and the SaaS Controller, download the Speedtest CLI and run the test against a SaaS Controller load balancer IP. The SaaS Controller LB IPs are:
- 52.10.6.79
- 52.42.211.235
- 35.167.70.143
Example 1: Speedtest on the IP 52.42.211.235
```
./speedtest 52.42.211.235

Speedtest by Ookla

Server: Next Level Infrastructure - Santa Clara, CA (id: 25606)
ISP: Comcast Cable
Idle Latency: 17.64 ms (jitter: 3.64ms, low: 15.55ms, high: 28.73ms)
Download: 567.39 Mbps (data used: 1.0 GB)
          182.45 ms (jitter: 53.76ms, low: 16.63ms, high: 572.30ms)
Upload: 35.72 Mbps [==========/ ] 52% - latency: 9.42 ms ^C
```
Example 2: Speedtest on the IP 52.10.6.79
```
./speedtest 52.10.6.79

Speedtest by Ookla

Server: Bharti Airtel Ltd - Bangalore (id: 52216)
ISP: Airtel
Idle Latency: 6.44 ms (jitter: 0.72ms, low: 5.25ms, high: 6.59ms)
Download: 273.13 Mbps (data used: 335.3 MB)
          215.59 ms (jitter: 62.33ms, low: 5.30ms, high: 381.79ms)
Upload: 307.48 Mbps (data used: 173.8 MB)
        213.45 ms (jitter: 53.69ms, low: 6.38ms, high: 437.54ms)
Packet Loss: 0.0%
```