
Granular Control of Your EKS Auto Mode Managed Nodes with Custom Node Classes and Node Pools

A couple of releases back, we added EKS Auto Mode support to our platform, with both quick configuration and custom configuration options. In this blog, we will explore how you can create an EKS cluster using quick configuration, and then dive deep into creating custom node classes and node pools and deploying them to EKS Auto Mode enabled clusters using addons.


Overview

Amazon EKS Auto Mode simplifies cluster management by automatically provisioning and managing the Kubernetes control plane and worker nodes. While the default configuration works great for getting started quickly, many organizations need granular control over their node configurations to meet specific workload requirements, compliance needs, or cost optimization goals. For more information about EKS Auto Mode, see the EKS Auto Mode documentation.

In this blog post, we will cover:

  1. Quick Configuration: Creating an EKS Auto Mode cluster using Rafay's quick configuration approach
  2. Custom Node Classes: Creating custom node classes to define specific instance types and configurations
  3. Custom Node Pools: Creating node pools that leverage your custom node classes
  4. Deployment via Addons: Using Rafay addons to deploy and manage node classes and node pools on your EKS Auto Mode cluster

Prerequisites

Before proceeding, ensure you have the following:

  1. AWS Account Access: An AWS account with the necessary permissions to create EKS Auto Mode clusters and manage EC2 instances
  2. Rafay Platform Access: A Rafay organization with appropriate permissions
  3. EKS Auto Mode Cluster: Either an existing cluster or the ability to create one
  4. Understanding of Node Classes and Node Pools: Basic familiarity with EKS Auto Mode concepts

Part 1: Creating an EKS Auto Mode Cluster with Quick Configuration

In this section, we will create an EKS Auto Mode cluster using Rafay's quick configuration approach. This is the fastest way to get started with EKS Auto Mode.

  1. Access the Rafay Console:

    • Log in to your Rafay organization
    • Navigate to New Cluster > Create New Cluster > Public Cloud > AWS Amazon EKS
  2. Configure Cluster Name:

    • Provide a unique name for your cluster
    • Click Continue to proceed
  3. Configure Cloud Credentials and Cluster Settings:

    • Select Cloud Credentials: Choose the cloud credentials used to authenticate to AWS APIs. These credentials should have permissions to manage EKS Auto Mode clusters. For prerequisites and required permissions, refer to the EKS Auto Mode prerequisites.

    Here are AWS links for reference as well: EKS Auto Mode cluster IAM role and EKS Auto Mode node IAM role.

    • Select Region: Choose the appropriate AWS region where you want to deploy your cluster
    • Select Kubernetes Version: Choose the Kubernetes version you want to deploy
    • Select Blueprint: Choose the blueprint that you want to deploy on the cluster

Note

The blueprint contains the Rafay operator and any additional addons that you have added. For the blueprint to work properly, a default system node pool where these pods can be scheduled is required; this system node pool is necessary for the Rafay operator to be installed as part of cluster bring-up.

  4. Enable EKS Auto Mode:
    • Go to Cluster Settings
    • Enable Use EKS Auto Mode with Quick configuration
    • This will create a cluster with default settings and system node pools
    • You can select node pools in the node pool section; if you don't select any, the AWS-managed system node pools are still created

Select EKS Auto Mode

  5. Review Configuration:

    • Review all the settings you've configured
    • Verify the quick configuration defaults are acceptable
  6. Provision Cluster:

    • Click Provision to start the provisioning process
    • The cluster creation will take approximately 25 minutes

Cluster Creation Progress

  7. Verify Cluster Status:
    • Once the cluster is created, verify it shows as Healthy and Success in the Rafay console
    • You should see the default node pool that was automatically created

Cluster Active Status

Tip

The quick configuration creates a cluster with default node classes and node pools. For production workloads, you will also want to create custom node classes and node pools as we will see in the next sections.
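If you have kubectl access to the new cluster, you can also inspect the defaults from the command line. This is an illustrative check, assuming your kubeconfig points at the Auto Mode cluster; the built-in pool names shown are the typical EKS Auto Mode defaults:

```shell
# Built-in node pools on an EKS Auto Mode cluster
# (typically "general-purpose" and "system")
kubectl get nodepools

# Built-in node class (EKS Auto Mode ships one named "default")
kubectl get nodeclasses
```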


Part 2: Understanding Node Classes and Node Pools

Before we create custom node classes and node pools, let's understand what they are and how they work together.

Node Classes

A Node Class is a template that defines how EKS Auto Mode should build and configure your EC2 nodes—covering network placement, storage settings, and tags. It gives you fine-grained control when you need nodes launched with specific infrastructure requirements.

Node Pools

A Node Pool defines the type of compute capacity your cluster can use—such as allowed instance types, zones, architectures, and Spot/On-Demand choices. It controls what nodes can be provisioned and scaled, while built-in pools can only be enabled or disabled.


Part 3: Creating Custom Node Classes and Node Pools

Now that we understand the concepts, let's create a custom node class and node pool to customize node configuration, and then deploy them to the EKS Auto Mode enabled cluster that we created above.

Step 1: Create Namespace

Before creating addons, we need to create a namespace. In this example, we'll create a namespace called karpenter:

  1. Navigate to Infrastructure > Namespace:

    • Go to Infrastructure > Namespace in the Rafay console
    • Click New Namespace or Create Namespace
  2. Configure Namespace:

    • Set the namespace name to karpenter
    • Select the cluster where you want to deploy the namespace
    • Deploy the namespace on your EKS Auto Mode cluster

Note

Although a namespace is not required for a custom node class or node pool, it is currently required for Rafay addons. You can create a placeholder namespace now and then create the addons.
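If you prefer to double-check from the command line, you can confirm the namespace reached the cluster (assuming kubectl access to the cluster):

```shell
# Verify the namespace created through the Rafay console exists on the cluster
kubectl get namespace karpenter
```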

Step 2: Create Addons

Now let's create the addons that will manage the NodeClass and NodePool resources:

  1. Navigate to Infrastructure > Addons:

    • Go to Infrastructure > Addons in the Rafay console
    • Click Create New Add-on
  2. Choose Addon Strategy:
    You have two options:

    • Option A: Create a single addon that combines both NodeClass and NodePool in one YAML file
    • Option B: Create two separate addons—one for NodeClass and one for NodePool

For this example, we'll use Option A and combine both resources in a single addon.

  3. Configure the Addon:
    • Provide a name for your addon (e.g., eks-node-class-pool)
    • Select the namespace (karpenter) where the resources will be deployed
    • Add the YAML manifest with your NodeClass and NodePool configuration

Step 3: Reference Example

Here's a reference example that combines both NodeClass and NodePool in a single addon. You can customize this based on your requirements:

apiVersion: eks.amazonaws.com/v1
kind: NodeClass
metadata:
  name: private-compute
spec:
  role: <node role>
  subnetSelectorTerms:
    - tags:
        alpha.rafay.io/cluster-name: "demo-eks-auto"
        kubernetes.io/role/internal-elb: "1"
  securityGroupSelectorTerms:
    - tags:
        alpha.rafay.io/cluster-name: "demo-eks-auto"
  ephemeralStorage:
    size: "160Gi"
---
apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
  name: my-node-pool
spec:
  template:
    metadata:
      labels:
        billing-team: dev
    spec:
      nodeClassRef:
        group: eks.amazonaws.com
        kind: NodeClass
        name: private-compute
      requirements:
        - key: "eks.amazonaws.com/instance-category"
          operator: In
          values: ["m", "r"]
        - key: "eks.amazonaws.com/instance-cpu"
          operator: In
          values: ["4", "8", "16", "32"]
        - key: "topology.kubernetes.io/zone"
          operator: In
          values: ["us-west-2d", "us-west-2b", "us-west-2c"]
        - key: "kubernetes.io/arch"
          operator: In
          values: ["arm64", "amd64"]
  limits:
    cpu: "1000"
    memory: 1000Gi

This manifest creates:

  • NodeClass (private-compute): Defines where and how nodes are created (subnets, security groups, storage, IAM role)
  • NodePool (my-node-pool): Defines what kind of nodes (instance types, zones, architectures, labels) Karpenter can provision using that NodeClass, with an overall CPU/memory quota
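Once these resources are applied to the cluster, they can be verified with kubectl; a quick sketch, assuming cluster access:

```shell
# Confirm the custom resources exist on the cluster
kubectl get nodeclasses
kubectl get nodepools

# Inspect the pool's requirements, limits, and current resource usage
kubectl describe nodepool my-node-pool
```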

Addon

Step 4: Add Addon to Blueprint

Once the addon is created, add it to a custom blueprint and apply it to your cluster:

  1. Navigate to Infrastructure > Blueprint:

    • Go to Infrastructure > Blueprint in the Rafay console
    • Either edit an existing blueprint or create a new custom blueprint
  2. Configure Blueprint:

    • If creating a new blueprint, select the base blueprint
    • Navigate to Configure Addons section
    • Add the addon you created (which contains the NodeClass and NodePool YAML spec with your required configuration)
  3. Save Changes:

    • Save the blueprint configuration

Blueprint

  4. Apply Blueprint to Cluster:

    • Update the existing blueprint applied to your cluster, or use the new custom blueprint containing the addon
    • Apply the blueprint on your EKS Auto Mode cluster
  5. Verify Node Class and Node Pool Creation:
    Once the blueprint apply is successful, you will see the new node class and node pool created. After some time, Karpenter will automatically provision a new node based on the matching workload requirements on the cluster, as shown in the screenshot below:

Cluster
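To see Karpenter provision from the new pool, you can schedule a workload that targets it. The Deployment below is an illustrative example (not from the blog): its nodeSelector matches the billing-team: dev label that my-node-pool stamps on its nodes, so the pending pods force Karpenter to launch a node from that pool:

```yaml
# Hypothetical test workload: nodeSelector targets the "billing-team: dev"
# label defined in the NodePool template above, so these pods can only
# land on nodes provisioned from my-node-pool.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pool-demo
  namespace: karpenter
spec:
  replicas: 2
  selector:
    matchLabels:
      app: pool-demo
  template:
    metadata:
      labels:
        app: pool-demo
    spec:
      nodeSelector:
        billing-team: dev
      containers:
        - name: pause
          image: registry.k8s.io/pause:3.9
          resources:
            requests:
              cpu: "2"
              memory: 4Gi
```

Deleting the Deployment afterwards lets Karpenter consolidate and remove the node it provisioned.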

Note

You can customize EKS Auto Mode clusters by adding node classes and node pools, tuning them to your specific node requirements. This blog shows a sample reference example to demonstrate how you can achieve this.


Conclusion

EKS Auto Mode simplifies cluster management by automatically provisioning and managing your Kubernetes infrastructure, but that doesn't mean you have to sacrifice control. By leveraging custom node classes and node pools through Rafay's addon system, you can achieve granular control over your node configurations while still benefiting from AWS's automated management.