Kubernetes has cemented its position as the de facto standard for orchestrating containerized workloads in the enterprise. In recent years, its role has expanded beyond web services and batch processing into one of the most demanding domains of all: AI/ML workloads.
Organizations now run everything from lightweight inference services to massive, distributed training pipelines on Kubernetes clusters, relying heavily on GPU-accelerated infrastructure to fuel innovation.
But there’s a problem: the default model of handing each workload a whole GPU leaves expensive accelerators underutilized. In this blog, we will explore why the current model falls short, what a more advanced GPU allocation approach looks like, and how it can unlock efficiency, performance, and cost savings at scale.
In the world of FinOps, precise cost allocation is more than just a “nice to have”; it’s the foundation for accurate chargeback, accountability, and informed decision-making. With Rafay’s latest release, Chargeback Summary Reports aggregated by namespace now support custom label-based metadata enrichment.
This enhancement empowers FinOps teams to add business-relevant metadata (like team or cost_center) directly into their cost reports, making it easier to trace expenses to the right owners and justify resource consumption.
In large, multi-tenant Kubernetes environments, namespaces often represent workloads owned by different teams, applications, or business units. Without enriched metadata, a FinOps practitioner might see “Namespace A” incurring costs but would need extra steps to figure out which team or cost center is responsible.
Now, you can define specific label keys (e.g., team, cost_center) in the chargeback report configuration, and Rafay will automatically include them as additional columns in the report—populated with values from the namespace labels. This directly embeds organizational context into your cost visibility.
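For instance (the namespace name and label values below are purely illustrative), the metadata that feeds these extra columns is nothing more than ordinary Kubernetes namespace labels:

```yaml
# Illustrative only: a namespace carrying the label keys referenced above.
# If the chargeback report is configured with the keys "team" and
# "cost_center", the values below would appear as additional columns
# against this namespace's costs.
apiVersion: v1
kind: Namespace
metadata:
  name: payments-prod
  labels:
    team: payments
    cost_center: cc-4521
```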
Note:
This enhancement applies to namespace-based aggregation in chargeback reports (not namespace-label-based aggregation). This is because when a primary label value (e.g., cost_center) is shared across multiple namespaces but the secondary label values (e.g., team) differ, the report cannot aggregate on the primary label alone.
Modern enterprises rarely run applications in a single cluster. A production fleet might include on-prem clusters in Singapore and London, a regulated environment in AWS us-east-1, and a developer sandbox on someone’s laptop. GitOps with Argo CD is the natural way to keep all of those clusters in the desired state, but the moment clusters live in different security domains (firewalled data centers, private VPCs, or even air-gapped networks) the simple argocd cluster add story breaks down:
Bespoke bastion hosts or VPN tunnels for every hop
Long-lived bearer-token Secrets stashed in Argo’s namespace
High latency between the GitOps engine and far-flung clusters, turning reconciliations into a slog
Rafay’s Zero-Trust Kubectl Access (ZTKA) solves all three problems in one stroke by fronting the connection with a hardened Kube API Access Proxy and issuing just-in-time (JIT), short-lived ServiceAccounts inside every cluster.
In this blog, we will describe how Rafay Zero Trust Kubectl Access Proxy gives Argo CD a secure path to every cluster in the fleet, even when those clusters sit deep behind corporate firewalls.
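To make the idea concrete, here is a hedged sketch of what registering a firewalled cluster with Argo CD through an access proxy could look like. The Secret layout is standard Argo CD declarative cluster registration; the proxy URL and token handling are illustrative assumptions, not Rafay’s actual endpoint or credential format.

```yaml
# Sketch only: the cluster is reached via a proxy address instead of a
# direct API server URL. The server value below is hypothetical.
apiVersion: v1
kind: Secret
metadata:
  name: prod-singapore
  namespace: argocd
  labels:
    argocd.argoproj.io/secret-type: cluster   # standard Argo CD convention
type: Opaque
stringData:
  name: prod-singapore
  server: https://kubeapi-proxy.example.com/clusters/prod-singapore
  config: |
    {
      "bearerToken": "<short-lived token minted on demand>",
      "tlsClientConfig": { "insecure": false }
    }
```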
When developers are halfway around the world from their clusters, every kubectl get pods can feel like it’s moving through molasses. Rafay’s Zero-Trust Kubectl (ZTKA) service fixes the security risks and the lag by adding a network of regional proxies between the user and the cluster.
Zero-Trust Kubectl in a Nutshell
Rafay ZTKA routes all CLI and web-terminal traffic through its Kube API Access Proxy. The key design goals are:
Friction-free for users (plain, vanilla kubectl)
Zero infrastructure to manage for platform teams
Centralized RBAC and audit
Strong performance, even for clusters behind firewalls
Under the hood, users authenticate to Rafay; Rafay spins up just-in-time service accounts inside the target cluster and tears them down after idle timeouts, eliminating credential sprawl.
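From the user’s point of view, the experience really is vanilla kubectl: roughly speaking, the downloaded kubeconfig simply points at the nearest regional proxy instead of the cluster’s API server. The snippet below is a rough sketch with hypothetical endpoint and helper names, not Rafay’s actual kubeconfig format.

```yaml
# Hypothetical kubeconfig: the only visible difference for the developer is
# that the server URL targets a regional access proxy, not the cluster itself.
apiVersion: v1
kind: Config
clusters:
  - name: prod-singapore
    cluster:
      server: https://proxy-ap-southeast.example.com   # regional proxy (illustrative)
contexts:
  - name: prod-singapore
    context:
      cluster: prod-singapore
      user: sso-user
current-context: prod-singapore
users:
  - name: sso-user
    user:
      exec:                                  # exec-plugin auth is an assumption here
        apiVersion: client.authentication.k8s.io/v1
        command: kubeconfig-auth-helper      # hypothetical helper binary
        interactiveMode: IfAvailable
```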
Many organizations rely on pull-based GitOps tools (e.g. Argo CD) to detect and remediate drift on their Kubernetes clusters. This approach lets clusters diverge and only reconciles them on the next polling interval. For the last four years, Rafay customers have benefited from an architecturally different approach that focuses on true drift prevention, backed by robust detection capabilities across both cluster blueprints and application workloads.
Info
In a previous blog, we discussed how ArgoCD's reconciliation works and its best practices.
ArgoCD is a powerful GitOps controller for Kubernetes, enabling declarative configuration and automated synchronization of workloads. One of its core functions is reconciliation, a continuous process by which ArgoCD ensures that the live state of a Kubernetes cluster matches the desired state defined in a Git repository.
While this might sound straightforward, reconciliation plays a critical role in the GitOps lifecycle, and its default behavior can be surprisingly aggressive. In this blog post, we’ll explore:
What reconciliation in ArgoCD actually does
Why it exists and how it ensures cluster integrity
The pitfalls of the default timer
Best practices for tuning reconciliation to balance responsiveness and resource efficiency (see the configuration sketch below)
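As a neutral example of the kind of tuning the last point refers to, Argo CD's default reconciliation interval (180 seconds) can be lengthened via the timeout.reconciliation key in the argocd-cm ConfigMap; the five-minute value below is just an illustration:

```yaml
# Minimal sketch: relaxing Argo CD's reconciliation timer.
# timeout.reconciliation controls how often Argo CD re-compares live state
# against Git; the default is 180s.
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm
  namespace: argocd
  labels:
    app.kubernetes.io/part-of: argocd
data:
  timeout.reconciliation: 300s   # reconcile every 5 minutes instead of 3
```

Depending on the Argo CD version, the application controller may need a restart to pick up the new value.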
Info
In a related blog, we describe how customers using Rafay are able to Block Drift in the first place.
In the modern era of containerized machine learning and AI infrastructure, GPUs are a critical and expensive asset. Kubernetes makes scheduling and isolation easier, but managing GPU utilization efficiently requires more than just assigning nvidia.com/gpu: 1 to a container.
In this blog post, we will explore what custom GPU resource classes are, why they matter, and when to use them for maximum impact. Custom GPU resource classes are a powerful technique for fine-grained GPU management in multi-tenant, cost-sensitive, and performance-critical environments.
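As a preview, and purely as a hypothetical sketch: a custom GPU resource class typically surfaces as an extended resource with its own name that pods request instead of nvidia.com/gpu. The resource name, image, and device-plugin assumption below are all illustrative.

```yaml
# Hypothetical: "example.com/gpu-small" is an illustrative custom resource
# class name; it assumes a device plugin (or similar mechanism) advertises
# this extended resource on GPU nodes, mapping it to a slice or tier of GPU.
apiVersion: v1
kind: Pod
metadata:
  name: light-inference
spec:
  containers:
    - name: inference
      image: inference-app:latest        # hypothetical image
      resources:
        limits:
          example.com/gpu-small: 1       # request one "small" GPU class unit
```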
As demand for GPU-accelerated workloads soars across industries, cloud providers are under increasing pressure to offer flexible, cost-efficient, and isolated access to GPUs. While full GPU allocation remains the norm, it often leads to resource waste—especially for lightweight or intermittent workloads.
In the previous blog, we described the three primary technical approaches for fractional GPUs. In this blog, we'll explore the most viable approaches to offering fractional GPUs in a GPU-as-a-Service (GPUaaS) model, and evaluate their suitability for cloud providers serving end customers.
As GPU acceleration becomes central to modern AI/ML workloads, Kubernetes has emerged as the orchestration platform of choice. However, allocating full GPUs is overkill for many real-world workloads, resulting in underutilization and soaring costs.
Enter the need for fractional GPUs: ways to share a physical GPU among multiple containers without compromising performance or isolation.
In this post, we'll walk through three approaches to achieve fractional GPU access in Kubernetes:
MIG (Multi-Instance GPU)
Time Slicing
Custom Schedulers (e.g., KAI)
For each, we’ll break down how it works, its pros and cons, and when to use it.
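As a small taste of what this looks like in practice, the sketch below shows a pod requesting a MIG slice instead of a whole GPU. It assumes an A100-class node where the NVIDIA device plugin exposes MIG profiles as named resources (the mixed strategy); the profile name and image are illustrative.

```yaml
# Hedged sketch: consuming a fractional GPU via MIG.
# Assumes MIG is enabled on the node and the device plugin advertises
# per-profile resources such as nvidia.com/mig-1g.5gb.
apiVersion: v1
kind: Pod
metadata:
  name: small-training-job
spec:
  containers:
    - name: trainer
      image: training-app:latest         # hypothetical image
      resources:
        limits:
          nvidia.com/mig-1g.5gb: 1       # one 1g.5gb slice of an A100
```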
Artificial Intelligence (AI) has moved far beyond simple chatbots and rigid automation. At the frontier of this evolution lies a powerful new paradigm: AI Agents. These autonomous, intelligent programs can understand their environment, reason through complex problems, and take meaningful actions.
Whether you’re a developer, product leader, or startup founder, understanding AI agents isn’t just a competitive advantage; it’s a necessity. In this blog, we will attempt to decipher what agents are, how they differ from regular applications, and how you can build them.