
Enable Dynamic Resource Allocation (DRA) in Kubernetes

In the previous blog, we introduced Dynamic Resource Allocation (DRA), which went GA in Kubernetes v1.34, released in August 2025.

In this post, we’ll configure DRA on a Kubernetes 1.34 cluster.

Info

The steps below can be completed on a laptop in under 15 minutes. They are written for macOS users.


Kubernetes Cluster using Kind

Kubernetes 1.34 launched just a few days ago, and the easiest way to try it on your laptop is with kind.

Assumptions

The following steps assume you have the following installed and functional on your Mac:

  1. Docker Desktop
  2. Helm CLI utility
  3. Kubectl CLI utility
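Before moving on, you can quickly confirm these tools are on your PATH with a short shell loop (a convenience sketch, not part of the original setup):

```shell
# Check that each prerequisite CLI from the list above is installed.
for tool in docker helm kubectl; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "found:   $tool"
  else
    echo "missing: $tool"
  fi
done
```

Any tool reported as missing should be installed before continuing.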

Step 1: Install Kind

You can install kind on your Mac using the command below.

brew install kind 

Step 2: Install Kubernetes Cluster

By default, the kind create cluster command will spin up a cluster with Kind’s default Kubernetes version, which can trail slightly behind the latest upstream release. So, we will specify the Kubernetes version explicitly.

kind create cluster --name dra-test --image kindest/node:v1.34.0

Kind maintains a list of supported node images in its release notes on GitHub. For example:

  • kindest/node:v1.34.0 (latest)
  • kindest/node:v1.32.0
  • kindest/node:v1.28.9
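Instead of the --image flag, the node image can also be pinned in a kind configuration file. This is an equivalent, illustrative alternative; the file name kind-config.yaml is our choice:

```yaml
# kind-config.yaml -- pins the node image for every node in the cluster
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  image: kindest/node:v1.34.0
```

You would then create the cluster with kind create cluster --name dra-test --config kind-config.yaml. The config-file form becomes more useful once you add worker nodes or other cluster-level settings.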

Once your cluster is up, verify the Kubernetes version.

kubectl version 

As you can see from the output below, the server is running Kubernetes v1.34. The version skew warning is expected if your local kubectl client is older.

Client Version: v1.32.2
Kustomize Version: v5.5.0
Server Version: v1.34.0
WARNING: version difference between client (1.32) and server (1.34) exceeds the supported minor version skew of +/-1

On a new cluster with no DRA driver installed, this command won't show any resources. Let's verify that this is the case.

kubectl get deviceclasses 
No resources found

Step 3: Install Example DRA Driver

DRA drivers are third-party applications that need to run on each cluster node to interface with the hardware. In this guide, we will use an example DRA driver from the kubernetes-sigs/dra-example-driver repository.

Info

This example driver advertises simulated GPUs to Kubernetes.

We will deploy the example DRA driver into a dedicated Kubernetes namespace, which Helm will create for us in Step 3.2.

Step 3.1: Clone Git Repo

Let us clone this repository. All of the scripts, Helm charts, and example Pod specs used in this guide are contained here.

git clone https://github.com/kubernetes-sigs/dra-example-driver.git
cd dra-example-driver

Step 3.2: Use Helm to Deploy Driver

Now, let us install the example DRA driver via Helm. The resources will be created in the namespace called dra-example-driver.

Important

Before executing the command below, verify that the API version is set to "apiVersion: resource.k8s.io/v1" in the file deviceclass.yaml under "./deployments/helm/dra-example-driver/templates/".
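For reference, after that edit the template should resemble the sketch below. This is illustrative of the GA DeviceClass shape, not the exact file contents; the CEL selector is how a DeviceClass matches devices published by a particular driver.

```yaml
# Illustrative sketch of deviceclass.yaml on the GA resource.k8s.io/v1 API.
# The CEL selector matches only devices published by the example driver.
apiVersion: resource.k8s.io/v1
kind: DeviceClass
metadata:
  name: gpu.example.com
spec:
  selectors:
  - cel:
      expression: device.driver == "gpu.example.com"
```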

helm upgrade --install dra-example-driver ./deployments/helm/dra-example-driver \
  --namespace dra-example-driver \
  --create-namespace

You should see output like the following

Release "dra-example-driver" does not exist. Installing it now.
NAME: dra-example-driver
LAST DEPLOYED: Thu Aug 28 15:40:56 2025
NAMESPACE: dra-example-driver
STATUS: deployed
REVISION: 1
TEST SUITE: None

You can verify the resources in the namespace using kubectl.

kubectl get po -n dra-example-driver
NAME                                     READY   STATUS    RESTARTS   AGE
dra-example-driver-kubeletplugin-pbkk8   1/1     Running   0          2m55s

Step 3.3: Verify DRA Devices

The DRA driver updates the Kubernetes cluster with the devices that are available to pods. It does this by publishing metadata to the ResourceSlices API. Let us now query that API to confirm that each node running the driver is advertising its devices.

kubectl get resourceslices
NAME                                           NODE                     DRIVER            POOL                     AGE
dra-test-control-plane-gpu.example.com-dnhl8   dra-test-control-plane   gpu.example.com   dra-test-control-plane   8m19s

At this stage, you have successfully installed the example DRA driver and verified that it is advertising resources through DRA.

Step 3.4: State of Simulated GPU Devices

Let us now look at the current state of available GPU devices on the cluster by running the following command.

kubectl get resourceslice -o yaml

You should see something like the output below. The DRA driver (gpu.example.com) created this ResourceSlice to tell Kubernetes the following:

  • The ResourceSlice belongs to the node dra-test-control-plane.
  • The ResourceSlice is a way for a node to advertise available devices/resources to Kubernetes.
  • Workloads requesting devices through ResourceClaims can now be allocated these GPUs via DRA.

In short, the node dra-test-control-plane has 8 simulated GPUs available, each with 80 GiB of memory, managed by the gpu.example.com driver.

apiVersion: v1
items:
- apiVersion: resource.k8s.io/v1
  kind: ResourceSlice
  metadata:
    creationTimestamp: "2025-08-28T22:41:02Z"
    generateName: dra-test-control-plane-gpu.example.com-
    generation: 1
    name: dra-test-control-plane-gpu.example.com-dnhl8
    ownerReferences:
    - apiVersion: v1
      controller: true
      kind: Node
      name: dra-test-control-plane
      uid: 8c59fa68-7357-4194-86ba-afe80c0e3671
    resourceVersion: "3394"
    uid: d4cf7469-3a26-48e9-add5-e84fde13e2b7
  spec:
    devices:
    - attributes:
        driverVersion:
          version: 1.0.0
        index:
          int: 2
        model:
          string: LATEST-GPU-MODEL
        uuid:
          string: gpu-1a654c2e-f3fd-3b38-7092-47d1e2cec3b7
      capacity:
        memory:
          value: 80Gi
      name: gpu-2
    - attributes:
        driverVersion:
          version: 1.0.0
        index:
          int: 3
        model:
          string: LATEST-GPU-MODEL
        uuid:
          string: gpu-537835b5-0875-11c4-b6b8-2adf5bf8f4c3
      capacity:
        memory:
          value: 80Gi
      name: gpu-3
    - attributes:
        driverVersion:
          version: 1.0.0
        index:
          int: 4
        model:
          string: LATEST-GPU-MODEL
        uuid:
          string: gpu-ce1366d4-d883-5188-8792-7246a8be7207
      capacity:
        memory:
          value: 80Gi
      name: gpu-4
    - attributes:
        driverVersion:
          version: 1.0.0
        index:
          int: 5
        model:
          string: LATEST-GPU-MODEL
        uuid:
          string: gpu-7c8413eb-c4a7-b759-9f32-367a17074901
      capacity:
        memory:
          value: 80Gi
      name: gpu-5
    - attributes:
        driverVersion:
          version: 1.0.0
        index:
          int: 6
        model:
          string: LATEST-GPU-MODEL
        uuid:
          string: gpu-0f1ea05b-65f4-232c-01d0-ff56897960b3
      capacity:
        memory:
          value: 80Gi
      name: gpu-6
    - attributes:
        driverVersion:
          version: 1.0.0
        index:
          int: 7
        model:
          string: LATEST-GPU-MODEL
        uuid:
          string: gpu-787280ee-4a2d-f587-b838-56ad98374b62
      capacity:
        memory:
          value: 80Gi
      name: gpu-7
    - attributes:
        driverVersion:
          version: 1.0.0
        index:
          int: 0
        model:
          string: LATEST-GPU-MODEL
        uuid:
          string: gpu-6fdb2b2a-317b-7567-901e-4fd527d642d9
      capacity:
        memory:
          value: 80Gi
      name: gpu-0
    - attributes:
        driverVersion:
          version: 1.0.0
        index:
          int: 1
        model:
          string: LATEST-GPU-MODEL
        uuid:
          string: gpu-0821c150-0f52-922a-cfc7-d2b230c459ab
      capacity:
        memory:
          value: 80Gi
      name: gpu-1
    driver: gpu.example.com
    nodeName: dra-test-control-plane
    pool:
      generation: 1
      name: dra-test-control-plane
      resourceSliceCount: 1
kind: List
metadata:
  resourceVersion: ""
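As a quick sanity check, you can count the advertised devices straight from this dump: each device entry carries exactly one uuid attribute. Against a live cluster you would pipe kubectl get resourceslice -o yaml into the grep; the two-device sample below just keeps the snippet self-contained.

```shell
# Each device entry in a ResourceSlice dump has exactly one "uuid:" key,
# so counting those lines counts the GPUs. On the cluster above,
#   kubectl get resourceslice -o yaml | grep -c 'uuid:'
# reports 8. A self-contained two-device sample:
printf '      uuid:\n        string: gpu-a\n      uuid:\n        string: gpu-b\n' | grep -c 'uuid:'
# prints 2
```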

Clean Up

We recommend leaving the Kind cluster as is for the next blog. But, if you wish to clean up everything, you can delete the Kind cluster we provisioned earlier by issuing the following command.

kind delete cluster --name dra-test 

Conclusion

In this blog, we provisioned a Kubernetes v1.34 cluster, then installed and configured a DRA driver with simulated GPUs. In the next blog, we will deploy a few example workloads that demonstrate how ResourceClaims and ResourceClaimTemplates can be used to select and configure GPU resources using DRA.