2025

Enhancing Namespace Chargeback Reports with Custom Label-Based Metadata in Rafay

In the world of FinOps, precise cost allocation is more than just a “nice to have”; it’s the foundation for accurate chargeback, accountability, and informed decision-making. With Rafay’s latest release, Chargeback Summary Reports aggregated by namespace now support custom label-based metadata enrichment.

This enhancement empowers FinOps teams to add business-relevant metadata (like team or cost_center) directly into their cost reports, making it easier to trace expenses to the right owners and justify resource consumption.


Why This Matters for FinOps

In large, multi-tenant Kubernetes environments, namespaces often represent workloads owned by different teams, applications, or business units. Without enriched metadata, a FinOps practitioner can see “Namespace A” incurring costs but must take extra steps to figure out which team or cost center is responsible.

Now, you can define specific label keys (e.g., team, cost_center) in the chargeback report configuration, and Rafay will automatically include them as additional columns in the report—populated with values from the namespace labels. This directly embeds organizational context into your cost visibility.

Note:
This enhancement applies to namespace-based aggregation in chargeback reports, not namespace-label-based aggregation. If a primary label value (e.g., cost_center) is shared across multiple namespaces while secondary label values (e.g., team) differ, the report cannot aggregate on the primary label.
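
Under the hood, the enrichment reads values from standard Kubernetes namespace labels. A minimal sketch (the team and cost_center keys are examples; use whatever keys your chargeback report configuration defines):

  apiVersion: v1
  kind: Namespace
  metadata:
    name: payments-prod          # example namespace
    labels:
      team: payments             # surfaced as a "team" column in the report
      cost_center: cc-1042       # surfaced as a "cost_center" column

Any namespace carrying these labels will have the corresponding values appear alongside its cost line items.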

GitOps Without Borders: Running Argo CD Across Isolated Security Domains with Rafay’s Zero-Trust Kubectl

Modern enterprises rarely run applications in a single cluster. A production fleet might include on-prem clusters in Singapore and London, a regulated environment in AWS us-east-1, and a developer sandbox on someone’s laptop. GitOps with Argo CD is the natural way to keep all those clusters in the desired state—but the moment clusters live in different security domains (firewalled data centers, private VPCs, or even air-gapped networks) the simple argocd cluster add story breaks down:

  • Bespoke bastion hosts or VPN tunnels for every hop
  • Long-lived bearer-token Secrets stashed in Argo’s namespace
  • High latency between the GitOps engine and far-flung clusters, turning reconciliations into a slog

Rafay’s Zero-Trust Kubectl Access (ZTKA) solves all three problems in one stroke: it front-loads the connection with a hardened Kube API Access Proxy and issues just-in-time (JIT), short-lived ServiceAccounts inside every cluster.
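
To make this concrete: Argo CD registers external clusters through declarative cluster Secrets, so pointing it at a proxied endpoint is a matter of what the Secret contains. A hedged sketch, where the ztka-proxy.example.com URL and the token placeholder are illustrative rather than Rafay’s actual API surface:

  apiVersion: v1
  kind: Secret
  metadata:
    name: prod-singapore
    namespace: argocd
    labels:
      argocd.argoproj.io/secret-type: cluster   # marks this Secret as an Argo CD cluster
  type: Opaque
  stringData:
    name: prod-singapore
    # Argo CD talks to the proxy; the proxy reaches the firewalled cluster.
    server: https://ztka-proxy.example.com/clusters/prod-singapore
    config: |
      {
        "bearerToken": "<short-lived token issued at connect time>",
        "tlsClientConfig": { "serverName": "ztka-proxy.example.com" }
      }

Because the token is issued just in time and expires quickly, no long-lived credential sits in Argo’s namespace.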

In this blog, we describe how Rafay’s Zero-Trust Kubectl Access Proxy gives Argo CD a secure path to every cluster in the fleet, even when those clusters sit deep behind corporate firewalls.

Argo CD integration with Rafay

Turbo-charging kubectl: How Rafay’s Zero-Trust Access + Regional Proxies Deliver Lightning-Fast CLI Performance

When developers are halfway around the world from their clusters, every kubectl get pods can feel like it’s moving through molasses. Rafay’s Zero-Trust Kubectl (ZTKA) service fixes the security risks and the lag by adding a network of regional proxies between the user and the cluster.

Zero-Trust Kubectl in a Nutshell

Rafay ZTKA routes all CLI and web-terminal traffic through its Kube API Access Proxy. The key design goals are:

  1. Friction-free for users (“vanilla kubectl”),
  2. Zero infrastructure to manage for platform teams,
  3. Centralized RBAC and audit, and
  4. Great performance even for clusters behind firewalls.

Under the hood, users authenticate to Rafay; Rafay spins up just-in-time service accounts inside the target cluster and tears them down after idle timeouts, eliminating credential sprawl.
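
For intuition, the Kubernetes building blocks behind this pattern are a per-session ServiceAccount plus a short-lived token from the TokenRequest API. A rough sketch with hypothetical names (Rafay automates this lifecycle; this is not its literal implementation):

  # Throwaway ServiceAccount created for one user session
  apiVersion: v1
  kind: ServiceAccount
  metadata:
    name: jit-dev-alice          # illustrative, one per user session
    namespace: rafay-system      # hypothetical namespace
  ---
  # Permissions bound only for the lifetime of the session
  apiVersion: rbac.authorization.k8s.io/v1
  kind: RoleBinding
  metadata:
    name: jit-dev-alice-view
    namespace: default
  roleRef:
    apiGroup: rbac.authorization.k8s.io
    kind: ClusterRole
    name: view                   # read-only, as an example
  subjects:
    - kind: ServiceAccount
      name: jit-dev-alice
      namespace: rafay-system

A short-lived bearer token for the ServiceAccount is then minted via the TokenRequest API and expires on its own, so tearing down the ServiceAccount after the idle timeout leaves nothing behind to leak.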

Drift Prevention vs Detection: Does a Polling Approach Make Sense at Scale?

Many organizations rely on pull-based GitOps tools (e.g., Argo CD) to detect and remediate drift on their Kubernetes clusters. This approach lets clusters diverge before reconciling them at the next polling interval. For the last four years, Rafay customers have benefited from an architecturally different approach that focuses on true drift prevention, backed by robust detection capabilities across both cluster blueprints and application workloads.

Info

In a previous blog, we discussed how ArgoCD's reconciliation works and its best practices.

Drift Block

Understanding ArgoCD Reconciliation: How It Works, Why It Matters, and Best Practices

ArgoCD is a powerful GitOps controller for Kubernetes, enabling declarative configuration and automated synchronization of workloads. One of its core functions is reconciliation, a continuous process by which ArgoCD ensures that the live state of a Kubernetes cluster matches the desired state defined in a Git repository.

While this might sound straightforward, reconciliation plays a critical role in the GitOps lifecycle, and its default behavior can be surprisingly aggressive. In this blog post, we’ll explore:

  • What reconciliation in ArgoCD actually does
  • Why it exists and how it ensures cluster integrity
  • The pitfalls of the default timer
  • Best practices for tuning reconciliation to balance responsiveness and resource efficiency
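
As a preview of the tuning discussion: Argo CD’s polling interval lives in the timeout.reconciliation key of the argocd-cm ConfigMap and defaults to 180s. A minimal sketch of relaxing it (the 300s value is illustrative, not a recommendation):

  apiVersion: v1
  kind: ConfigMap
  metadata:
    name: argocd-cm
    namespace: argocd
    labels:
      app.kubernetes.io/part-of: argocd
  data:
    # How often every Application is re-compared against Git (default: 180s).
    timeout.reconciliation: 300s

The application controller generally needs a restart to pick up the new interval.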

Info

In a related blog, we describe how customers using Rafay are able to Block Drift in the first place.

ArgoCD Reconciliation

Custom GPU Resource Classes in Kubernetes

In the modern era of containerized machine learning and AI infrastructure, GPUs are a critical and expensive asset. Kubernetes makes scheduling and isolation easier—but managing GPU utilization efficiently requires more than just assigning something like

nvidia.com/gpu: 1

Custom GPU resource classes are a powerful technique for fine-grained GPU management in multi-tenant, cost-sensitive, and performance-critical environments. In this blog post, we explore what they are, why they matter, and when to use them for maximum impact.
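
To give a flavor of the pattern: one common approach is to advertise tiered or fractional GPU capacity as a custom extended resource that pods request instead of a whole device. A sketch under that assumption, where example.com/gpu-small is a purely hypothetical resource name exposed by a device plugin or custom scheduler:

  apiVersion: v1
  kind: Pod
  metadata:
    name: inference-worker
  spec:
    containers:
      - name: model-server
        image: registry.example.com/model-server:latest   # placeholder image
        resources:
          limits:
            # Hypothetical fractional resource class, requested instead of
            # a full nvidia.com/gpu device.
            example.com/gpu-small: 1

The scheduler then places the pod only on nodes advertising spare gpu-small capacity, which is how fine-grained classes translate into real bin-packing gains.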

Info

If you are new to GPU sharing approaches, we recommend reading the following introductory blogs: Demystifying Fractional GPUs in Kubernetes and Choosing the Right Fractional GPU Strategy.

Choosing the Right Fractional GPU Strategy for Cloud Providers

As demand for GPU-accelerated workloads soars across industries, cloud providers are under increasing pressure to offer flexible, cost-efficient, and isolated access to GPUs. While full GPU allocation remains the norm, it often leads to resource waste—especially for lightweight or intermittent workloads.

In the previous blog, we described the three primary technical approaches for fractional GPUs. In this blog, we'll explore the most viable approaches to offering fractional GPUs in a GPU-as-a-Service (GPUaaS) model, and evaluate their suitability for cloud providers serving end customers.

Demystifying Fractional GPUs in Kubernetes: MIG, Time Slicing, and Custom Schedulers

As GPU acceleration becomes central to modern AI/ML workloads, Kubernetes has emerged as the orchestration platform of choice. However, allocating full GPUs to many real-world workloads is overkill, resulting in underutilization and soaring costs.

Enter the need for fractional GPUs: ways to share a physical GPU among multiple containers without compromising performance or isolation.

In this post, we'll walk through three approaches to achieve fractional GPU access in Kubernetes:

  1. MIG (Multi-Instance GPU)
  2. Time Slicing
  3. Custom Schedulers (e.g., KAI)

For each, we’ll break down how it works, its pros and cons, and when to use it.
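
As a quick taste of approach 2: the NVIDIA device plugin enables time slicing through a small configuration, typically supplied via a ConfigMap. A sketch where the replica count of 4 is illustrative:

  version: v1
  sharing:
    timeSlicing:
      resources:
        - name: nvidia.com/gpu
          replicas: 4        # each physical GPU is advertised as 4 schedulable units

With this in place, a node with one physical GPU reports nvidia.com/gpu: 4, letting four pods share it, albeit with no memory isolation between them, which is exactly the trade-off the post examines.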

The Rise of AI Agents: From Zero to Production

Artificial Intelligence (AI) has moved far beyond simple chatbots and rigid automation. At the frontier of this evolution lies a powerful new paradigm—AI Agents. These autonomous, intelligent programs can understand their environment, reason through complex problems, and take meaningful actions.

Whether you’re a developer, product leader, or startup founder, understanding AI agents isn’t just a competitive advantage—it’s a necessity. In this blog, we decipher what agents are, how they differ from regular applications, and how you can build them.

AI Agents

Configure and Manage GPU Resource Quotas in Multi-Tenant Clouds

In multi-tenant GPU cloud environments, effective resource management is critical to ensure fair usage and prevent contention. GPU resource quotas allow organizations to allocate computing capacity at multiple levels—across the entire organization, at individual project scopes, and even down to the per-user level. In this blog, we describe how GPU clouds can provide fine-grained control of limited resources to their tenants and tenant admins.
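
As a minimal illustration of the namespace-level building block, a Kubernetes ResourceQuota can cap GPU requests per tenant project (the namespace name and the cap of 8 GPUs are examples):

  apiVersion: v1
  kind: ResourceQuota
  metadata:
    name: gpu-quota
    namespace: team-ml           # example tenant/project namespace
  spec:
    hard:
      # Extended resources are quota-limited via the requests.<resource> prefix.
      requests.nvidia.com/gpu: "8"

Organization- and user-level controls layer on top of this primitive, which is where the fine-grained model described in the post comes in.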

Per Project and User Quotas