
Blueprint Governance

Overview

Blueprint governance enables centralized management of blueprints while allowing controlled reuse across organizations through compute and service profiles.

The default-gpu blueprint is available in all organizations and can be directly used as a base blueprint when creating custom blueprints. However, the sharing scope of custom blueprints depends on where they are created.


Purpose and Applicability

This capability provides platform administrators with a controlled mechanism to standardize blueprint usage while supporting centralized or organization-scoped customization.

This applies when:

  • Base blueprints are used to create custom blueprints
  • Custom blueprints are created in system-catalog for centralized sharing
  • Custom blueprints are created in other projects for org-scoped usage
  • Compute or service profiles reference these blueprints

Default-GPU Blueprint Overview

The default-gpu blueprint is a system blueprint designed for Kubernetes clusters using GPU-based worker nodes.

It is available under the Default Blueprints tab along with other system blueprints such as default, minimal, and provider-specific default blueprints.


This blueprint includes all components available in the existing default blueprint, along with additional GPU-specific system add-ons required for GPU workload execution and external access to cluster applications.

The default-gpu blueprint, located in the system-catalog project of the default organization, acts as the centralized blueprint that can be used during compute and service instance launches across organizations.

Components Included

The following add-ons are automatically appended by the system when the default-gpu blueprint is selected. These are managed system add-ons and are not manually configurable through the UI.

| Component     | Version | Purpose |
|---------------|---------|---------|
| gpu-operator  | v24.9.0 | Automates GPU enablement in Kubernetes by installing and managing GPU drivers, runtime components, device plugins, and monitoring capabilities required for GPU workloads. |
| ingress-nginx | 4.11.3  | Provides ingress functionality to expose cluster applications externally using HTTP and related protocols. |

GPU Operator Behavior

The GPU operator automates the setup required to use NVIDIA GPUs inside Kubernetes workloads.

When installed, the GPU operator:

  • Installs NVIDIA drivers
  • Deploys the NVIDIA container runtime
  • Installs the Kubernetes device plugin
  • Manages GPU monitoring using DCGM exporter
  • Supports MIG (Multi-Instance GPU) on compatible GPUs
  • Discovers GPU resources on worker nodes
  • Labels nodes based on GPU availability for workload scheduling

Without the GPU operator, Kubernetes cannot identify or expose GPU hardware for workloads.
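Once the operator has installed the driver and registered GPUs through the device plugin, workloads can request GPUs with a standard Kubernetes resource limit. The sketch below is a generic illustration, not a platform-specific template; the pod name and container image are placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gpu-smoke-test            # hypothetical name for illustration
spec:
  restartPolicy: Never
  containers:
    - name: cuda
      # Example CUDA base image; substitute one appropriate for your cluster
      image: nvcr.io/nvidia/cuda:12.4.1-base-ubuntu22.04
      command: ["nvidia-smi"]
      resources:
        limits:
          nvidia.com/gpu: 1       # extended resource exposed by the device plugin
```

The `nvidia.com/gpu` resource is advertised only on nodes where the operator has set up the driver and device plugin, so on a cluster without the GPU operator this pod would remain unschedulable.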

Ingress Engine

The ingress-nginx add-on provides standard ingress functionality for exposing applications running inside the cluster.
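For example, an in-cluster application can be exposed through the nginx ingress class with a standard Ingress resource. This is a generic sketch; the resource name, hostname, and backend service are placeholders:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-ingress              # hypothetical name
spec:
  ingressClassName: nginx         # class served by the ingress-nginx add-on
  rules:
    - host: demo.example.com      # placeholder hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: demo-service   # placeholder backend service
                port:
                  number: 80
```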


Managed System Add-ons Behavior

The gpu-operator and ingress-nginx components included with default-gpu are managed system add-ons.

Behavior:

  • Automatically appended at the backend when default-gpu is used
  • Cannot be enabled, disabled, or modified through the UI
  • Not included with other system blueprints

Blueprint Usage and Sharing Model

Using Default-GPU in Any Organization

The default-gpu blueprint is available in all organizations.

Users can:

  1. Select default-gpu as the base blueprint
  2. Create custom blueprints in any project and any organization
  3. Use those blueprints through compute and service profiles

Centralized Custom Blueprint (Cross-Org Sharing)

To create a centralized custom blueprint that can be shared across organizations:

  1. Create a custom blueprint in the system-catalog project under the default-org
  2. Select a base blueprint (for example, default-gpu, default, minimal, or any other blueprint) and extend it with additional add-ons as required
  3. Save changes


Once this custom blueprint is created, it can be used when creating compute or service profiles in any organization.

During blueprint synchronization, the system validates whether the blueprint originates from the system-catalog project under the default-org.

  • If the blueprint is from the system-catalog project, blueprint synchronization succeeds.
  • If the blueprint is from any other project or organization, blueprint synchronization fails for cross-organization usage.

This approach allows the custom blueprint to be shared and used across organizations.


Organization-Scoped Custom Blueprint

If a custom blueprint is created in a different project under the default organization, or inside any tenant organization:

  • The blueprint can be shared only to projects within the same organization
  • It cannot be shared to other organizations

This behavior defines the scope difference between centralized and organization-level blueprint creation.


Blueprint Comparison

| Blueprint Type | Availability | Sharing Scope |
|----------------|--------------|---------------|
| default-gpu (system blueprint) | Available in all orgs | Can be used as a base blueprint everywhere |
| Custom blueprint in system-catalog (default org) | Centralized | Shareable across all orgs and projects |
| Custom blueprint in other projects/orgs | Org-specific | Shareable only within the same org |

Version Compatibility Consideration

The default-gpu blueprint currently includes:

  • gpu-operator: v24.9.0
  • ingress-nginx: 4.11.3

GPU operator compatibility depends on worker node operating system versions. If the deployed version is not supported, add-on deployment or blueprint synchronization may fail.

When different versions are required, create a custom blueprint with alternate add-on versions.


Blueprint Configuration Flow

Step 1: Create a Custom Blueprint

Create a custom blueprint:

  • In system-catalog (default org) for cross-org sharing, or
  • In any organization/project for org-scoped usage

Select a base blueprint (for example default-gpu, default, or minimal) and add required custom add-ons.



Step 2: Create a Compute or Service Profile

Create a Compute Profile or Service Profile and configure the Blueprint Name and Blueprint Version inputs. Each input can be configured as either:

  • Show to User — allows users to override the value when the profile is used
  • System Variable — hides the blueprint values from users



Step 3: Share the Profile

Share the profile:

  • Across organizations (only when using system-catalog custom blueprint), or
  • Within the same organization
