Custom Blueprint
Customers can create and manage custom cluster blueprints by adding add-ons to the default cluster blueprints. It is important to emphasize that a custom blueprint builds on and extends a default cluster blueprint; it does not replace it.
Important
You can manage the lifecycle of custom blueprints using the Web Console, the RCTL CLI, or the REST APIs. It is strongly recommended to automate this by integrating RCTL into your existing CI-based automation pipeline.
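For example, a CI pipeline stage can drive blueprint lifecycle operations through RCTL declaratively. A minimal sketch, assuming the pipeline already has an initialized RCTL config and a blueprint spec file checked into the repository (the file name is illustrative):

```shell
# Create or update the blueprint from a declarative spec file in the repo;
# "blueprint.yaml" is an illustrative file name
rctl apply -f blueprint.yaml
```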
Scoping¶
Unlike the default cluster blueprints, which are common across all projects, all custom blueprints are scoped to a project. This isolation boundary guarantees that there is no accidental spillover or leakage between projects. If required, blueprints can be shared with selected projects or with all projects.
RBAC¶
Users with an "Infrastructure Administrator" role in the Org manage the lifecycle of cluster blueprints.
Step 1: Create Custom Blueprint¶
As an Admin in the Web Console,
- Navigate to the Project
- Click on Blueprints under Infrastructure
- Click on New blueprint
- Provide a name and description
All custom cluster blueprints are version controlled so that their lifecycle can be properly managed. In this example, the admin has not yet configured anything, so no versions are available yet.
Step 2: New Version¶
- Click on New Version and use the wizard to provide details
- Provide a version number/name
- Select a Base Blueprint. The four (4) Base Blueprints are default, minimal, default-openshift (specific to OpenShift), and default-aks (specific to AKS). Each Base Blueprint has a different group of system add-ons with multiple versions. Users can pick any of these Base Blueprints and customize it to create a new blueprint version
To view the version changes (logs) performed on the base blueprint, click Version Change Log. This shows the list of add-ons associated with each version
- Select Pod Security Policies (PSPs) and scoping (cluster or namespace)
- Optionally,
- Select a drift detection policy
- Select Enable OPA Gatekeeper for this Blueprint and choose a policy from the list to enforce on the required clusters
- Select custom add-ons and identify the version of each add-on
- Optionally,
- disable add-ons from the default blueprint (e.g., the Ingress Controller)
- select the required Kube API Proxy Network from the Private KubeAPI Proxy drop-down
- Add the required Fleet Values and enable Auto Update Clusters to apply this blueprint to the fleet(s) of clusters
For more information, refer to Add-Ons
- If you plan to deploy Virtual Machine Workloads, check the VM Operator option under Managed System Add-ons. When selecting VM Operator, the user receives the message "At least one node with label k8smgmt.io/vm-operator should be present in the cluster". This ensures that any cluster using this blueprint has a node with the label k8smgmt.io/vm-operator (see the sketch after this list)
- Click Save Changes.
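The node label requirement can be satisfied with standard kubectl. A minimal sketch; the node name and label value are illustrative, since the documentation specifies only the label key:

```shell
# Label a worker node so that clusters using a VM Operator-enabled
# blueprint meet the k8smgmt.io/vm-operator requirement;
# "worker-node-1" and the value "enabled" are illustrative
kubectl label node worker-node-1 k8smgmt.io/vm-operator=enabled
```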
Below is an example of creating a custom blueprint called "demo-blueprint".
View Versions¶
The entire history of blueprint versions is maintained on the Controller. Admins can view details about the versions of cluster blueprints.
Use the toggle button to enable or disable the blueprint version(s) listed during cluster provisioning. This option helps prevent users from selecting blueprint versions that might contain vulnerable or deprecated add-on versions.
Disabling affects the clusters using this specific blueprint version. Click Yes to proceed.
Once a version is disabled, clusters deployed with that specific blueprint version are highlighted in red with an Update button. Use this button to update to the required blueprint and version
BP Version with Fleet Values¶
The history of blueprint versions with fleet values is maintained on the Controller.
Publish to Fleet¶
Blueprints with Fleet values show a publish icon.
- Click the publish icon to manually publish the selected blueprint version to the fleet(s). Once a blueprint version is published to a fleet, a job is initiated.
Important
If Auto Update Clusters is enabled while creating a new blueprint version, the new version is automatically published to the fleet(s). If the Auto Update Clusters option is disabled, users must manually publish the blueprint version to the fleets
- Click View Fleet Jobs to view the status of the job. The example below shows blueprint version 5 successfully published to the fleets and in Ready status
- Click the job ID to view the list of clusters under this fleet
Important
Users cannot publish disabled blueprint versions to the fleets
Deployment Status
Once the blueprint version is published to the fleets, the job status varies based on the blueprint sync state:
Status | Description
---|---
No Cluster | No clusters available in the fleet(s)
Ready | Blueprint published successfully on all clusters in the fleet(s)
Partially Ready | Blueprint published on only some of the clusters
In Progress | Blueprint publishing on the fleet(s) is in progress
Failed | Blueprint publish failed on all fleets
View¶
Click the eye icon to view the blueprint version details
Filter Clusters by Blueprint¶
Infrastructure admins can filter clusters by blueprint name using the Web Console to manage a fleet of clusters efficiently. An illustrative example is shown below.
View All Cluster Blueprints¶
Admins can view all custom cluster blueprints.
- Navigate to the Project
- Click on Blueprints under Infrastructure.
This will display both the "default blueprints" and any cluster blueprints that have been created. An illustrative example is shown below.
Apply Custom Blueprint¶
Once a custom cluster blueprint has been created and published, it can be used during the initial provisioning of clusters or applied to existing clusters.
New Clusters¶
While creating a new cluster, select the "custom blueprint" from the dropdown. An illustrative example is shown below.
Existing Clusters¶
- Click on options (gear icon on the far right) for an existing cluster
- Select "Update Blueprint" from the options
- Select the "blueprint" and "version" from the dropdown
- Click on Save and Publish
This will update the cluster blueprint on the target cluster. Once all the required resources are operational on the cluster, the blueprint update/sync will be complete.
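The same operation can be scripted instead of performed in the Web Console. A hypothetical sketch; the exact RCTL subcommand and flag names are assumptions and should be verified against the RCTL reference for your release:

```shell
# Hypothetical: point an existing cluster at a new blueprint version;
# cluster, blueprint, and version names are illustrative
rctl update cluster demo-cluster --blueprint demo-blueprint --blueprint-version v2
```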
Important
If the selected blueprint version has VM Operator enabled, at least one node with the label k8smgmt.io/vm-operator must be available in the selected cluster. This requirement is presented to the user when a blueprint version with VM Operator enabled is selected.
Status and Debug¶
In addition to using the Zero Trust KubeCTL channel for debugging and diagnostics, admins can also use the built-in status details if issues are encountered during a blueprint sync process with a cluster.
In the Blueprint Sync Status field on the cluster, click on the Status link. This will provide detailed status by component in the blueprint.
An illustrative example is shown below.
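When the status details point at a failing component, the Zero Trust KubeCTL channel can be used to inspect the add-on workloads directly. A minimal sketch; the namespace and pod name are illustrative and depend on how the add-on was configured:

```shell
# List the workloads deployed by a specific add-on
kubectl get pods -n cert-manager

# Inspect events and container state for a failing component
kubectl describe pod cert-manager-7d9f6b5c4-abcde -n cert-manager
```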
Dependency¶
In some scenarios, certain components (add-ons) must be installed before the rest of the components (add-ons) can be installed.
To achieve this, define the dependency when creating the blueprint version. In the example below, cert-manager is installed before Vault because Vault uses cert-manager to create the certificate for its Ingress.
- Navigate to the Project
- Click on Blueprints under Infrastructure
- Click on New blueprint
- Provide a name and description
- Click Create
- Click on New Version and use the wizard to provide details
- Provide a version number/name
- Select PSPs and scoping (cluster or namespace)
- Under add-ons, select cert-manager and its version
- Select "ADD MORE", then select the Vault add-on and its version from the dropdown
- Select "ADD DEPENDENCY" under Vault add-on and select "cert-manager"
- Click "Save Changes"
Note
If an add-on depends on multiple add-ons, all of them can be added as dependencies
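Declaratively, the same dependency can be captured in a blueprint spec applied with RCTL. A hypothetical sketch; the field names follow the general shape of a declarative blueprint spec but are illustrative and should be checked against the current spec reference:

```yaml
# Hypothetical blueprint spec expressing an add-on dependency
apiVersion: infra.k8smgmt.io/v3   # illustrative API version
kind: Blueprint
metadata:
  name: demo-blueprint
  project: demo-project           # illustrative project name
spec:
  version: v1
  base:
    name: default                 # one of the four base blueprints
  customAddons:
    - name: cert-manager
      version: v1
    - name: vault
      version: v1
      dependsOn:                  # cert-manager installs before vault;
        - cert-manager            # multiple dependencies can be listed here
```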