Deploy Workload using DRA ResourceClaim in Kubernetes

In the first blog in the DRA series, we introduced the concept of Dynamic Resource Allocation (DRA), which recently went GA in Kubernetes v1.34, released at the end of August 2025.

In the second blog, we installed a Kubernetes v1.34 cluster and deployed an example DRA driver on it with "simulated GPUs". In this blog, we will deploy a few workloads on the DRA-enabled Kubernetes cluster to understand how "ResourceClaim" and "ResourceClaimTemplate" work.

Info

We have optimized the steps so you can try this on a laptop in less than 5 minutes. The steps in this blog are written for macOS users.


Deploy Test Workload with ResourceClaim

This section assumes that you have completed the steps in the second blog and have access to a functional Kubernetes cluster with DRA configured and enabled. We will deploy example workloads that demonstrate how ResourceClaims can be used to select and configure resources in various ways.

Let's create a ResourceClaim that we will reference in a Pod. Note that deviceClassName is a required field because it narrows the scope of the request to a specific device class. In the example below, the ResourceClaim called "some-gpu" will be created in the same namespace (dra-tutorial) we created in the previous blog.

flowchart LR
  %% Cluster Admin and Workload Admin for DRA
  classDef step fill:#fff,stroke:#333,stroke-width:1px,rx:12,ry:12,color:#111;
  classDef note fill:#fff,stroke:#333,stroke-width:1px,rx:20,ry:20,color:#111;
  classDef highlightRed fill:#fff,stroke:#f00,stroke-width:2px,rx:12,ry:12,color:#111;

  A[Install DRA drivers<br/>Cluster Admin]:::step --> B[Create DeviceClasses<br/>Cluster Admin]:::highlightRed

  %% Branch
  B --> C[Create ResourceClaim<br/>Workload Admin<br/>Pods share the device]:::highlightRed
  B --> D[Create ResourceClaimTemplate<br/>Workload Admin<br/>Pods get separate devices]:::step

  %% Converge
  C --> E([Add claim to Pod requests]):::highlightRed
  D --> E

  %% Final step
  E --> F[Deploy workload<br/>Workload Admin]:::highlightRed
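
Before creating the claim, you can confirm which DeviceClasses are available on the cluster. This uses a standard kubectl listing; it assumes the example driver installed in the previous blog registered the gpu.example.com class referenced by our claim.

# List the DeviceClasses registered on the cluster (cluster-scoped resource)
kubectl get deviceclasses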

Copy the YAML below and save it to a file called "resourceclaim.yaml". This will allow us to create a request for any GPU advertising at least 10Gi of memory capacity.

apiVersion: resource.k8s.io/v1
kind: ResourceClaim
metadata:
  name: some-gpu
  namespace: dra-tutorial
spec:
  devices:
    requests:
    - name: some-gpu
      exactly:
        deviceClassName: gpu.example.com
        selectors:
        - cel:
            expression: "device.capacity['gpu.example.com'].memory.compareTo(quantity('10Gi')) >= 0"
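
If you want to explore the fields available under a device request (for example, exactly and selectors), kubectl explain works against the resource.k8s.io/v1 API; the exact output depends on your cluster version.

kubectl explain resourceclaim.spec.devices.requests --api-version=resource.k8s.io/v1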

Let's create this ResourceClaim using kubectl.

kubectl apply -f resourceclaim.yaml --server-side 

Now, check if it was created successfully by typing the following command. As you can see in the example below, the state shows "pending".

kubectl get resourceclaim -n dra-tutorial
NAME       STATE     AGE
some-gpu   pending   23s

Note

Notice that the STATE of the ResourceClaim is pending. This state will change once a Pod uses the ResourceClaim.
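
To see more detail about the claim while it is still pending, a standard kubectl describe also works:

kubectl describe resourceclaim some-gpu -n dra-tutorial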


Deploy Workload

Now, let us create a pod that references the ResourceClaim called "some-gpu" we created in the previous step.

apiVersion: v1
kind: Pod
metadata:
  name: pod0
  namespace: dra-tutorial
  labels:
    app: pod
spec:
  containers:
  - name: ctr0
    image: ubuntu:24.04
    command: ["bash", "-c"]
    args: ["export; trap 'exit 0' TERM; sleep 9999 & wait"]
    resources:
      claims:
      - name: gpu
  resourceClaims:
  - name: gpu
    resourceClaimName: some-gpu

Copy the YAML above and save it to a file called "pod0.yaml". Now, deploy to the Kubernetes cluster.

kubectl apply -f pod0.yaml -n dra-tutorial --server-side 

Important

Remember that the workloads need to be deployed to the same namespace where the ResourceClaim exists. In our case, this is the namespace called "dra-tutorial".
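
If you want to block until the pod is scheduled and running before validating, a standard kubectl wait can be used:

kubectl wait --for=condition=Ready pod/pod0 -n dra-tutorial --timeout=60s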


Validate DRA Usage

Let's check the status of the pod by issuing the following command:

kubectl get pod pod0 -n dra-tutorial

You should see something like the following. As you can see, our pod is in the Running state.

NAME   READY   STATUS    RESTARTS   AGE
pod0   1/1     Running   0          61s
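
Because the container command starts with export, the pod's logs print its environment. With the example driver, the simulated GPU is typically surfaced to the container as an injected environment variable (the exact variable name depends on the driver), so the logs are a quick way to confirm the device was prepared:

kubectl logs pod0 -n dra-tutorial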

Check Resource Claim

Now, let's check the status of our resourceclaim.

kubectl get resourceclaims -n dra-tutorial 

As you can see from the output below, the STATE has transitioned from pending to allocated,reserved.

NAME       STATE                AGE
some-gpu   allocated,reserved   64m

You can also get deeper details and the status of the ResourceClaim by issuing the following command.

kubectl get resourceclaim some-gpu -n dra-tutorial -o yaml

Shown below is an illustrative example of the output. Once the pod is deployed, the Kubernetes scheduler will attempt to place the pod on a node where the ResourceClaim can be satisfied. In this example, all of the simulated GPUs have sufficient memory capacity to satisfy the claim, and the device gpu-4 on the dra-test-control-plane node was allocated and reserved for pod0.

apiVersion: resource.k8s.io/v1
kind: ResourceClaim
metadata:
  creationTimestamp: "2025-09-16T21:25:15Z"
  finalizers:
  - resource.kubernetes.io/delete-protection
  name: some-gpu
  namespace: dra-tutorial
  resourceVersion: "780"
  uid: 0902160b-2b3f-4350-86e6-6d47f09958bd
spec:
  devices:
    requests:
    - exactly:
        allocationMode: ExactCount
        count: 1
        deviceClassName: gpu.example.com
        selectors:
        - cel:
            expression: device.capacity['gpu.example.com'].memory.compareTo(quantity('10Gi'))
              >= 0
      name: some-gpu
status:
  allocation:
    devices:
      results:
      - device: gpu-4
        driver: gpu.example.com
        pool: dra-test-control-plane
        request: some-gpu
    nodeSelector:
      nodeSelectorTerms:
      - matchFields:
        - key: metadata.name
          operator: In
          values:
          - dra-test-control-plane
  reservedFor:
  - name: pod0
    resource: pods
    uid: a3e18c4a-7cb7-409e-9d64-9c3e79819b76
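
If you only want the allocated device name (gpu-4 above) rather than the full object, a jsonpath query against the claim status works as well:

# Print just the name of the device allocated to the claim
kubectl get resourceclaim some-gpu -n dra-tutorial -o jsonpath='{.status.allocation.devices.results[0].device}'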

Delete Pod

When our pod with the ResourceClaim is deleted, the GPU is deallocated so it becomes available for scheduling again. In this step, we will delete the pod that we created in the previous step.

kubectl delete pod pod0 -n dra-tutorial
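
After the pod is gone, you can list the claim again. Since this claim was created directly (not from a template), it remains in the namespace, and its STATE should return to pending, ready to be reserved by another pod; the exact output may vary slightly by cluster version.

kubectl get resourceclaim -n dra-tutorial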

Clean Up

If you wish to clean up everything, you can delete the Kind cluster we provisioned earlier by issuing the following command.

kind delete cluster --name dra-test 
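
Alternatively, if you would rather keep the cluster (and the DRA driver) around for the next blog in this series, you can remove just the ResourceClaim created here:

kubectl delete -f resourceclaim.yaml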

Conclusion

In this blog, we deployed a test workload that used a ResourceClaim to select and configure GPU resources using DRA. In the next blog, we will deploy a test workload that uses a ResourceClaimTemplate.