Get Started
Overview
This self-paced guide helps you explore the platform’s capabilities for lifecycle management of MKS clusters on Nutanix infrastructure using system templates from the template catalog.
Why Use System Templates for MKS on Nutanix?
System templates streamline the creation and management of MKS clusters by offering pre-configured, customizable templates. These templates:
- Ensure consistency and reduce setup time
- Enable organization administrators to enforce standards while allowing teams the flexibility to customize configurations
- Simplify workflows, such as adding approval steps or modifying network and node pool settings
- Enhance collaboration and efficiency in managing Rafay Managed Kubernetes distribution on Nutanix infrastructure
Prerequisites
Before proceeding, ensure the following:
- Access to Nutanix infrastructure with appropriate permissions for provisioning VMs
- Sufficient privileges to create and manage Virtual Machines on Nutanix as part of the Rafay Managed Kubernetes (MKS) cluster lifecycle
- The Rafay Agent deployed within the Nutanix infrastructure to automate provisioning and cluster management. Refer to these instructions for deploying the agent. Existing agents can also be reused
- A valid Rafay API Key for authenticating with the Rafay controller. Follow these instructions to generate an API key
- Nutanix infrastructure details including Endpoint, Port, Username, Password, Cluster Name, and Subnet Name
- Nutanix-compatible images uploaded for both Control Plane and Worker nodes
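Before launching the template, it can be useful to confirm that the Nutanix Prism Element endpoint is reachable with the credentials gathered above. A minimal sketch, assuming the endpoint and port shown are placeholders for your environment (the Prism Element v2 REST path is the standard one, but verify it against your Prism version):

```shell
# Hypothetical connection details -- substitute your environment's values.
NUTANIX_ENDPOINT="prism.example.com"
NUTANIX_PORT="9440"

# Build the Prism Element v2 REST URL used for a quick reachability check.
build_prism_url() {
  echo "https://${1}:${2}/PrismGateway/services/rest/v2.0/clusters"
}

build_prism_url "$NUTANIX_ENDPOINT" "$NUTANIX_PORT"
# A reachability/credential check could then be run with curl, e.g.:
# curl -sk -u "<username>:<password>" "$(build_prism_url "$NUTANIX_ENDPOINT" "$NUTANIX_PORT")"
```

A 200 response with cluster details indicates the Endpoint, Port, Username, and Password are usable by the template.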
What This Template Will Do
By using this system template, the following actions will be automated:
- Provision Nutanix AHV Virtual Machines: Configured as Kubernetes nodes with user-defined specifications.
- Deploy Rafay Managed Kubernetes Cluster: A fully managed Kubernetes cluster running the Rafay distribution on the provisioned Nutanix VMs.
- Configure Networking and Storage: Setup of integrated CNI (e.g., Calico), CSI, and cluster add-ons as defined in the Cluster Blueprint.
- Output Kubeconfig: Upon successful deployment, a kubeconfig file will be provided to enable secure access to the cluster.
This template enables rapid provisioning and management of Kubernetes clusters on Nutanix infrastructure with minimal manual setup.
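Once the kubeconfig is output, cluster access can be sanity-checked with kubectl. A minimal sketch, assuming a hypothetical download path; the `count_ready` helper is illustrative, not part of the template:

```shell
# count_ready: counts nodes reporting Ready in `kubectl get nodes` output.
count_ready() {
  awk 'NR > 1 && $2 == "Ready" { n++ } END { print n + 0 }'
}

# Once the kubeconfig is downloaded (path below is illustrative):
#   export KUBECONFIG="$HOME/nutanix-mks-kubeconfig.yaml"
#   kubectl get nodes | count_ready
# Demonstration against captured sample output:
printf 'NAME     STATUS   ROLES           AGE   VERSION\nnode-1   Ready    control-plane   5m    v1.29.0\nnode-2   Ready    <none>          4m    v1.29.0\n' | count_ready
# -> 2
```

If the count matches the Controlplane VM Count plus Worker VM Count configured later in this guide, provisioning completed as expected.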
Part 1: Launch the Template to Create an MKS Cluster on Nutanix
This section guides you through selecting and sharing the Rafay K8s Distro on Nutanix system template with a project.
Step 1: Create a Project
- Navigate to the Home -> Your Projects section
- Click Create a New Project and name it nutanix-project for this guide
Step 2: Select and Share the Nutanix Template
- As an Org Admin, go to Settings -> Template Catalog
- Select the Rafay K8s Distro on Nutanix template and click Get Started
- Provide the following details:
  - A unique name for the shared template
  - A version name (e.g., 1.0)
  - The project to share the template with (e.g., nutanix-project)
- After sharing, the platform redirects you to the selected project (nutanix-project)
- Select the Agent from the drop-down
- Navigate back to General and Save the template as a draft or set it as an Active Version. Learn more about version management here.
Part 2: Launch the Template to Create an MKS Cluster on Nutanix
- Navigate to the Environments section within the nutanix-project project (or the shared project). All the shared templates are listed here, and the shared template will be ready for use.
- Click Launch.
- Select the Agent. The Agent can either be added to the environment template created earlier or configured during the template launch process. In this guide, the agent is configured at the environment template layer, so this step is skipped during the launch.
- Fill in the required configurations for the MKS cluster:
General Configuration
Provide the core configurations required to deploy the upstream Kubernetes cluster. This section includes key inputs such as Project Scope, Cluster Metadata, Kubernetes Version, and customization options for Blueprint Selection, Upgrade Strategy, and Control Plane Behavior. Values can be set explicitly or templated using environment variables for dynamic, reusable configurations.
Network, Control Plane and Worker Node Configuration
Provide the required specifications for the cluster's networking setup, control plane nodes, and worker nodes. This section includes the Network configuration such as CNI plugin, pod subnet, service subnet, and optional Proxy Config. It also defines the Controlplane VM Count and Controlplane VM Type, as well as the Worker VM Count and Worker VM Type, with details like image name and operating system to ensure consistent VM provisioning.
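Before launching, the form values above can be sanity-checked offline. A crude sketch with illustrative values (string comparison only, not true CIDR-overlap detection; the real values come from the template's fields):

```shell
# Illustrative values -- the real ones come from the template's form fields.
POD_SUBNET="10.244.0.0/16"
SERVICE_SUBNET="10.96.0.0/12"
CONTROLPLANE_VM_COUNT=1
WORKER_VM_COUNT=2

# Pod and service subnets must not be the same range, and at least one
# control plane VM is required.
[ "$POD_SUBNET" != "$SERVICE_SUBNET" ] || echo "WARNING: pod and service subnets are identical"
[ "$CONTROLPLANE_VM_COUNT" -ge 1 ]     || echo "WARNING: at least one control plane VM is required"
echo "total VMs to provision: $((CONTROLPLANE_VM_COUNT + WORKER_VM_COUNT))"
```

The total VM count is also a useful number to compare against the node list reported by the cluster after provisioning.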
📌 Note
- To create a single-node cluster, set the Worker VM Count to 0 and the Controlplane VM Count to 1.
- Ensure that both Cluster Dedicated Control Plane and Cluster HA are set to false in the General Configuration. This configuration results in a single-node cluster.
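The single-node combination described in the note can be expressed as a quick check. An illustrative helper, not part of the template:

```shell
# is_single_node: succeeds only for the single-node combination described in
# the note above (illustrative helper, not part of the template).
# args: worker_count controlplane_count dedicated_control_plane cluster_ha
is_single_node() {
  [ "$1" -eq 0 ] && [ "$2" -eq 1 ] && [ "$3" = "false" ] && [ "$4" = "false" ]
}

is_single_node 0 1 false false && echo "single-node cluster"
is_single_node 0 3 false false || echo "not a single-node cluster"
```

Any other combination of these four values produces a multi-node (or HA) topology.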
Nutanix Configuration
Specify the configuration details required to connect and deploy resources on a Nutanix Infrastructure. This section includes fields such as Nutanix Endpoint, Nutanix Port, Nutanix Username, and Nutanix Password to authenticate with the Nutanix Prism Element. Also, provide the Nutanix Cluster Name and Nutanix Subnet Name where the VMs will be provisioned. Optionally, enter the Private Key Path and Public Key Path if SSH keys are managed externally and referenced by file path.
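If SSH keys are not already managed externally, a keypair can be generated locally and referenced by path (or pasted as content in Other Configuration below). A sketch with an illustrative file path and comment:

```shell
# Generate a throwaway ed25519 keypair; the file path and comment are
# illustrative. The private key maps to the Private Key Path / SSH private-key
# field, and the .pub file to the Public Key Path / SSH authorized-key field.
ssh-keygen -t ed25519 -N "" -f /tmp/mks_nutanix_key -C "rafay-mks" -q
cat /tmp/mks_nutanix_key.pub   # public (authorized) key content
# cat /tmp/mks_nutanix_key     # private key -- handle as a secret
```

Whichever keypair is used, the private key must stay secret; only the public half is added to the remote machines' authorized_keys.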
Additional Configuration
The Additional Configuration section provides options to enable security and policy enforcement controls in the cluster. The Enable Kata Deployment and Enable Opa-gatekeeper Deployment parameters control the deployment of the corresponding admission controllers. When Gatekeeper is enabled, Opa Excluded Namespaces can be specified to exclude certain namespaces from policy enforcement. Policy templates can be defined using Opa Constraint Template YAML, and specific constraints can be applied through Opa Constraints YAML to enforce compliance and workload security.
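As an illustration of what could go into the Opa Constraint Template YAML field, here is the canonical K8sRequiredLabels example from the Gatekeeper documentation, emitted via a shell heredoc (the template itself is standard Gatekeeper; how it is supplied to the field is up to you):

```shell
# The canonical Gatekeeper K8sRequiredLabels ConstraintTemplate, suitable as a
# starting point for the "Opa Constraint Template YAML" field.
CONSTRAINT_TEMPLATE=$(cat <<'EOF'
apiVersion: templates.gatekeeper.sh/v1
kind: ConstraintTemplate
metadata:
  name: k8srequiredlabels
spec:
  crd:
    spec:
      names:
        kind: K8sRequiredLabels
      validation:
        openAPIV3Schema:
          type: object
          properties:
            labels:
              type: array
              items:
                type: string
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package k8srequiredlabels
        violation[{"msg": msg}] {
          provided := {label | input.review.object.metadata.labels[label]}
          required := {label | label := input.parameters.labels[_]}
          missing := required - provided
          count(missing) > 0
          msg := sprintf("you must provide labels: %v", [missing])
        }
EOF
)
echo "$CONSTRAINT_TEMPLATE"
```

A matching constraint (kind: K8sRequiredLabels) would then go into the Opa Constraints YAML field to enforce the policy on selected resources.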
Other Configuration
- Specify the Controller Endpoint to connect with the controller, and provide the API Key for authentication. Include the SSH private-key content to securely authenticate and access the remote machine, along with the SSH authorized-key that will be added to the remote machine’s authorized_keys file for access control.
- Once all the configuration is provided, click Save & Deploy.
- Monitor the status to complete the MKS cluster provisioning on Nutanix Infrastructure.
Refer to the Input Variables for more details on these configuration parameters.
Deleting/Destroying the MKS Cluster
- Navigate to the environment of the created MKS Cluster.
- Click Destroy and confirm the action by selecting Yes.
- This will delete the MKS Cluster on Nutanix along with all dependent resources created as part of the cluster.
Conclusion
By following these steps, you have successfully:
- Selected and shared the Nutanix system template.
- Used the template to perform lifecycle management of the MKS Cluster.
These system templates simplify the provisioning and management of MKS Clusters on Nutanix infrastructure, ensure compliance with organizational standards, and provide flexibility for specific workflows.