Override Node Affinity for Addons

Overview

Cluster Overrides enable customization of blueprint-managed system addons at the cluster level, including the ability to override Kubernetes node affinity for addon components.

System addons deployed through blueprints are installed using Helm charts and scheduled by Kubernetes using default scheduling behavior. In this default configuration, addon components can run on any eligible node in the cluster, with no explicit control over node placement within the blueprint itself.

Node affinity override introduces controlled scheduling by allowing custom Kubernetes nodeAffinity rules to be injected into the addon’s Helm configuration through an AddonOverride. This makes it possible to define which nodes are eligible to run specific addon components.

This capability supports infrastructure and operational requirements such as restricting workloads to specific operating systems, controlling placement based on CPU architecture (amd64 or arm64), targeting dedicated worker nodes, excluding certain node types such as Fargate, and enforcing placement based on custom node labels.

Overrides are applied at the cluster level and do not modify the base blueprint. This ensures that the blueprint remains reusable across environments while still allowing cluster-specific scheduling behavior.

By leveraging Kubernetes-native node affinity, scheduling rules are strictly enforced during pod placement. If no nodes match the defined criteria, the addon component remains unscheduled, resulting in predictable and policy-aligned deployment behavior for system components.


Node Labels and Prerequisites

Node affinity rules match against labels assigned to cluster nodes. Before configuring an AddonOverride, ensure that the required labels are present on the target nodes.

Node labels can include Kubernetes system labels such as kubernetes.io/os and kubernetes.io/arch, cloud-provider labels such as the compute type (for example, eks.amazonaws.com/compute-type on EKS Fargate nodes), and custom labels defined to meet infrastructure or organizational requirements.

Node labels can be viewed from the node details page. Custom labels can be added or modified using the Edit Labels option on the node details page.
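As an illustration, a node's labels might appear as follows in its manifest. The kubernetes.io/os and kubernetes.io/arch keys are standard Kubernetes labels; the workload-type label is a hypothetical custom label added for targeting:

```yaml
# Illustrative node object; the first two labels are standard
# Kubernetes system labels, while "workload-type" is a hypothetical
# custom label used for addon placement.
apiVersion: v1
kind: Node
metadata:
  name: worker-node-1
  labels:
    kubernetes.io/os: linux
    kubernetes.io/arch: amd64
    workload-type: system-addons
```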


Labels can be defined as either key-value pairs or key only. Label keys and values are case-sensitive and must match exactly when referenced in node affinity rules.

If the labels referenced in the affinity configuration do not exist on any node in the cluster, the addon component remains unscheduled after the override is applied.


Configuring Node Affinity Using Cluster Overrides

Node affinity for blueprint-managed addons is configured using Cluster Overrides. The override mechanism injects custom Helm values into a specific addon for a selected cluster.

  • Navigate to Cluster Overrides and create a new override with the Override Type set to Addon and the File Type set to Helm


  • Configure the override with the following settings:

    • Resource Type: Select from list or use Custom Input
    • Placement: Select the target cluster by name or labels
  • If using Custom Input, provide the addon selector that identifies the system component to override.


  • In the Override Values section, provide the node affinity configuration in Helm values format.

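For reference, the Override Values payload is a standard Kubernetes affinity block expressed in Helm values format. The top-level `affinity` key shown here is the common chart convention, but the exact path expected by a given addon chart may differ, so treat this as a sketch:

```yaml
# Sketch of an Override Values payload restricting an addon to
# Linux/amd64 nodes; confirm the values path expected by the
# specific addon chart before applying.
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/os
              operator: In
              values:
                - linux
            - key: kubernetes.io/arch
              operator: In
              values:
                - amd64
```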

  • Save the override configuration.

After saving, publish or reapply the blueprint so that the override is rendered into the addon deployment.

The override applies only to the selected cluster and does not modify the base blueprint definition.




Supported Addons for Node Affinity Overrides

Node affinity overrides can be configured for the following system addons:

  • v2-edge-client
  • v2-alertmanager
  • v2-infra
  • rafay-prometheus

These addons support parameterizing node affinity through the override values file provided in the Cluster Override configuration.

For addons with multiple components, node affinity can be specified individually for each component within the values file. The configuration is injected into the addon Helm chart through the AddonOverride and applied when the blueprint is published or reapplied.
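As a sketch, a values file for a multi-component addon can scope the affinity block per component. The component names below are hypothetical and must match the keys defined by the addon's Helm chart:

```yaml
# Hypothetical component keys; substitute the actual component names
# defined by the addon's chart (see its specification page).
component-a:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: kubernetes.io/arch
                operator: In
                values:
                  - amd64
component-b:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: workload-type   # hypothetical custom label
                operator: In
                values:
                  - system-addons
```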

Detailed configuration specifications for each addon component are documented separately. Refer to the respective specification page for the exact values structure and supported component-level affinity configuration.


AddonOverride Specification (V3)

Node affinity overrides can also be configured declaratively using the AddonOverride resource (v3). This approach is suitable for automation and infrastructure-as-code workflows.

An AddonOverride targets:

  • A specific cluster through placement rules
  • A specific system addon using a resource selector
  • A Helm values file that contains the node affinity configuration

The override injects the specified values into the Helm chart during deployment without modifying the base blueprint.

Example:

apiVersion: infra.k8smgmt.io/v3
kind: AddonOverride
metadata:
  labels:
    rafay.dev/overrideScope: clusterLabels
    rafay.dev/overrideType: valuesFile
  name: v2-infra-override
  project: defaultproject
spec:
  placement:
    labels:
      - key: rafay.dev/clusterName
        value: cluster-1
  resource:
    selector:
      selector: rafay.dev/system=true,rafay.dev/component=v2-infra
    type: Addon
  type: Helm
  valuesPath:
    name: file://v2-infra-override-values.yaml

The placement section determines the cluster where the override applies. The resource.selector identifies the specific addon to override. The valuesPath references a file that contains the node affinity configuration.
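The values file referenced by valuesPath carries the node affinity configuration itself. A minimal sketch of v2-infra-override-values.yaml, assuming the chart consumes a conventional top-level `affinity` key, might look like this; verify the exact structure against the addon's specification page:

```yaml
# Sketch of v2-infra-override-values.yaml contents, assuming a
# top-level `affinity` key; the addon's specification page defines
# the authoritative values structure.
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/os
              operator: In
              values:
                - linux
```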

When the blueprint is published or reapplied, the override values are rendered into the Helm chart, and Kubernetes enforces the defined node affinity rules during scheduling.


Best Practices for Node Affinity Configuration

When configuring node affinity for addon components, consider the following best practices to ensure reliability, availability, and predictable scheduling behavior.

Affinity Rule Override Behavior

When defining node affinity rules through a Cluster Override values file, the provided configuration replaces the default affinity rules defined in the addon’s Helm chart. The override does not append to the existing configuration.

Ensure that all required affinity constraints are explicitly defined in the override values file. This includes rules required for scheduling on specific operating systems or CPU architectures (for example, Linux vs. Windows or amd64 vs. arm64).

If these constraints are omitted, the default scheduling rules from the Helm chart will no longer apply, which may result in pods being scheduled on unintended nodes or remaining unscheduled.

Label Redundancy

Avoid relying on a single node with a specific label for pod scheduling. If a pod uses strict node affinity and only one node matches the label, the workload cannot be rescheduled if that node becomes unavailable.

For improved availability, ensure that multiple nodes share the same label used in the affinity rule. This allows Kubernetes to reschedule pods on another eligible node if one node fails, is deleted, or becomes unreachable.

Affinity Strategy Selection

Kubernetes supports different affinity strategies that influence scheduling behavior.

  • requiredDuringSchedulingIgnoredDuringExecution

This configuration enforces a strict constraint. Pods are scheduled only on nodes that match the defined labels. If no matching nodes are available, the pod remains in a Pending state. This option is suitable for workloads that must run on specific hardware, operating systems, or controlled environments.

  • preferredDuringSchedulingIgnoredDuringExecution

This configuration acts as a soft constraint. Kubernetes attempts to schedule the pod on nodes that match the preferred labels but may fall back to other nodes if none are available. This approach can improve scheduling flexibility while still guiding placement toward preferred nodes.
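The two strategies also differ in structure: a preferred (soft) rule carries a weight between 1 and 100, with higher weights expressing stronger preference. The custom label below is hypothetical:

```yaml
# Soft affinity: the scheduler favors matching nodes but can fall
# back to other nodes if none match.
affinity:
  nodeAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        preference:
          matchExpressions:
            - key: workload-type   # hypothetical custom label
              operator: In
              values:
                - system-addons
```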

Selecting the appropriate affinity strategy depends on the scheduling requirements and the availability considerations of the workload.