

From Docker Image to 1-Click App: Enabling Self-Service for Custom Apps

In the Developer Pods series (Part 1, Part 2, and Part 3), we made a simple point: most users do not want infrastructure. They want outcomes.

They do not want tickets. They do not want YAML. They do not want to think about pods, namespaces, ingress, or DNS. They want a working environment or application, available quickly, through a clean self-service experience. That was the core theme behind Developer Pods: Kubernetes is a powerful engine, but it should not be the user interface.

The next step is just as important: letting end users deploy applications packaged as Docker containers into shared, multi-tenant Kubernetes clusters with a true 1-click experience.

Rafay’s 3rd Party App Marketplace is designed for exactly this. It allows providers to curate and publish containerized apps from Docker Hub, third-party vendors, or open-source communities, package them with defaults, user overrides, and policies, and expose them as a secure, governed self-service experience for users across multiple tenants.


OpenClaw on Kubernetes: A Platform Engineering Pattern for Always-On AI

AI is moving beyond chat windows. The next useful form factor is an Always-On AI service that can live behind messaging channels, expose a control surface, invoke tools, and be operated like any other platform workload. OpenClaw is interesting because it is built around that model.

OpenClaw is a Gateway-centric runtime with onboarding, workspace/config, channels, and skills, plus a documented Kubernetes install path for hosting.

For platform teams, that makes OpenClaw more than an AI app. It looks like an AI gateway layer that can be deployed, secured, and managed on Kubernetes using the same operational patterns you would use for internal developer platforms, control planes, or multi-service middleware.


Developer Pods for Platform Teams: Designing the Right Self-Service GPU Experience

In Part 1, we discussed the core problem: most organizations still deliver GPU access through the wrong abstraction. Developers and data scientists do not want tickets, YAML, and long provisioning cycles. They want a ready-to-use environment with the right amount of compute, available when they need it.

In Part 2, we looked at what that self-service experience feels like for the end user: a familiar, guided workflow that lets them select a profile, launch an environment, and SSH into it in about 30 seconds.

In this part, we shift to the other side of the experience: how platform teams design that experience in the first place. Specifically, we will look at how teams can configure and customize a Developer Pod SKU using the integrated SKU Studio in the Rafay Platform.


Developer Pods: A Self-Service GPU Experience That Feels Instant

In Part 1, we discussed the core problem: most organizations still deliver GPU access through the wrong abstraction. Developers do not want tickets, YAML, and long wait times. They want a working environment with the right tools and GPU access, available when they need it.

In this post, let’s look at the other half of the story: the end-user experience. Specifically, what does self-service actually look like for a developer or data scientist using Rafay Developer Pods?

The answer is simple: a familiar UI, a few guided choices, and a running environment they can SSH into in about 30 seconds.


Instant Developer Pods: Rethinking GPU Access for AI Teams

It's the week of KubeCon Europe 2026 in Amsterdam. Many of the conversations will be about Kubernetes, AI, and GPUs. Let's have an honest discussion.

We are in 2026 and we're still handing out infrastructure like it's 2008. The entire workflow is slow, expensive, and wildly inefficient. Meanwhile, your most expensive resource, GPUs, sits idle or underutilized.

The way most enterprises deliver GPU access today is completely misaligned with how developers and data scientists actually work. A developer wants to:

  • Run a PyTorch experiment
  • Fine-tune a model
  • Test a pipeline

What do they get instead?

A ticketing system with a multi-day wait time, and then, finally, a bloated VM or an entire bare-metal GPU server.

There has to be a better way. This is the first part of a blog series on Rafay's Developer Pods. In it, we describe why and how many of our customers have completely transformed GPU delivery by giving their end users a self-service experience.


No More SSH: Control Plane Overrides for Rafay MKS Clusters

Customizing a Kubernetes control plane has always been an uncomfortable exercise. You SSH into a master node, carefully edit a static pod manifest, and then hope nothing breaks. With our latest release, we are replacing that workflow entirely. Control Plane Overrides give you a safe, declarative way to customize the API Server, Controller Manager, and Scheduler for MKS (Managed Kubernetes Service) clusters — Rafay's upstream Kubernetes offering for bare metal and VMs — directly from the Rafay Console or cluster specification.
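To make the shift concrete, here is a rough sketch of what a declarative override could look like in a cluster specification. The field names below are illustrative only, not the actual Rafay MKS schema; the flag names themselves are standard kube-apiserver, kube-controller-manager, and kube-scheduler arguments.

```yaml
# Illustrative sketch, NOT the real Rafay cluster-spec schema.
# The point: flags you would previously hand-edit in
# /etc/kubernetes/manifests/kube-apiserver.yaml on a master node
# become declarative fields in the cluster specification.
kind: Cluster
metadata:
  name: mks-prod-01            # hypothetical cluster name
spec:
  controlPlaneOverrides:
    apiServer:
      extraArgs:
        audit-log-maxage: "30"
        enable-admission-plugins: "NodeRestriction,PodSecurity"
    controllerManager:
      extraArgs:
        node-monitor-grace-period: "30s"
    scheduler:
      extraArgs:
        v: "2"
```

The benefit of the declarative form is that the override lives in version control and survives control plane upgrades, instead of being a hand edit that the next kubeadm operation may silently overwrite.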

NVIDIA AICR Generates It. Rafay Runs It. Your GPU Clusters, Finally Under Control

Deploying GPU-accelerated Kubernetes infrastructure for AI workloads has never been simple. Administrators face a relentless compatibility matrix: matching GPU driver versions to CUDA releases, pinning Kubernetes versions to container runtimes, tuning configurations differently for NVIDIA H100s versus A100s, and doing all of it differently again for training versus inference.

One wrong version combination and workloads fail silently, or worse, perform far below hardware capability. For years, the answer was static documentation, tribal knowledge, and hoping that whoever wrote the runbook last week remembered to update it.

NVIDIA's AI Cluster Runtime (AICR) and the Rafay Platform represent a new approach — one where GPU infrastructure configuration is treated as code, generated deterministically, validated against real hardware, and enforced continuously across fleets of clusters.

Together, they cover the full lifecycle from first aicr snapshot to production-grade day-2 operations, with cluster blueprints as the critical bridge between the two.


From Slurm to Kubernetes: A Guide for HPC Users

If you've spent years submitting batch jobs with Slurm, moving to a Kubernetes-based cluster can feel like learning a new language. The concepts are familiar — resource requests, job queues, priorities — but the vocabulary and tooling are different. This guide bridges that gap, helping HPC veterans understand how Kubernetes handles workloads and what that means day-to-day.
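As a taste of that mapping, here is a rough side-by-side: the Slurm directives an HPC user already knows, next to a Kubernetes Job asking for the same things. The job name, image, and script are made up for illustration; the resource fields and `nvidia.com/gpu` limit are standard Kubernetes.

```yaml
# Slurm directive                 Kubernetes equivalent (below)
#   #SBATCH --gres=gpu:1          resources.limits "nvidia.com/gpu"
#   #SBATCH --cpus-per-task=8     resources.requests cpu
#   #SBATCH --mem=32G             resources.requests memory
#   #SBATCH --time=02:00:00       activeDeadlineSeconds
#   sbatch train.sh               kubectl apply -f job.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: pytorch-train             # hypothetical job name
spec:
  activeDeadlineSeconds: 7200     # ~ --time=02:00:00
  backoffLimit: 0                 # fail fast, like a non-requeued batch job
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: train
        image: pytorch/pytorch:latest   # illustrative image
        command: ["python", "train.py"] # illustrative entrypoint
        resources:
          requests:
            cpu: "8"              # ~ --cpus-per-task=8
            memory: 32Gi          # ~ --mem=32G
          limits:
            nvidia.com/gpu: 1     # ~ --gres=gpu:1
```

One notable difference in day-to-day use: `squeue`-style visibility comes from `kubectl get jobs` and `kubectl get pods`, and logs stream live with `kubectl logs` instead of landing in a slurm-*.out file.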


Run nvidia-smi on Remote GPU Kubernetes Clusters Using Rafay Zero Trust Access

Infra operators managing GPU-enabled Kubernetes clusters often need a fast and secure way to validate GPU visibility, driver health, and runtime readiness without exposing the cluster directly or relying on bastion hosts, VPNs, or manually managed kubeconfigs.

With Rafay's zero trust kubectl, operators can securely access remote Kubernetes resources and execute commands inside running pods from the Rafay platform. A simple but powerful example is running nvidia-smi inside a GPU Operator pod to confirm that the NVIDIA driver stack, CUDA runtime, and GPU devices are functioning correctly on a remote cluster.

In this post, we walk through how infra operators can use Rafay's zero trust access workflow to run nvidia-smi on a remote GPU-based Kubernetes cluster.
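The check itself is short once the ZTKA kubeconfig is in hand. The sketch below assumes the NVIDIA GPU Operator is installed in its default gpu-operator namespace; the kubeconfig path and pod name are placeholders, and the daemonset label follows GPU Operator conventions but should be verified on your cluster.

```shell
# Point kubectl at the ZTKA kubeconfig downloaded from the Rafay console
# (path is a placeholder)
export KUBECONFIG=~/downloads/ztka-kubeconfig.yaml

# Find a driver daemonset pod on the node you want to validate
kubectl get pods -n gpu-operator -l app=nvidia-driver-daemonset -o wide

# Run nvidia-smi inside it; healthy output shows the driver version,
# CUDA version, and every GPU visible on that node
kubectl exec -n gpu-operator <driver-pod-name> -- nvidia-smi
```

Because the session is brokered through Rafay's relay, the same commands work whether the cluster sits in a data center, a colo, or a cloud VPC, with no inbound firewall rules to manage.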


Interact with Your Rafay Managed Kubernetes Clusters Using MCP-compatible AI clients

The Model Context Protocol (MCP) is an open standard that enables AI assistants to securely interact with external tools and systems. When used with Kubernetes, MCP allows an AI assistant to execute operations (for example, kubectl commands), retrieve live cluster state, and reason about results without requiring users to manually copy and paste output into a chat interface.

This blog uses Claude Desktop as an example AI assistant. The same approach applies to any MCP-compatible AI client.

For platform administrators, this capability enables controlled, auditable, and policy-driven AI-assisted cluster operations.


For production environments, the recommended approach is to run the MCP server locally and connect to your Kubernetes cluster using a Rafay Zero Trust Kubectl Access (ZTKA) kubeconfig.

In this model:

  • The MCP server runs on the administrator’s workstation
  • Cluster access is established through Rafay’s ZTKA secure relay
  • No inbound access to the cluster is required
  • No VPN tunnels or exposed Kubernetes API endpoints are needed

This architecture aligns with zero-trust security principles and enterprise compliance requirements.
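For Claude Desktop specifically, that wiring lives in its claude_desktop_config.json file, whose mcpServers structure is shown below. The server binary name and both paths are placeholders for illustration; the essential idea is that the MCP server process inherits a KUBECONFIG pointing at the ZTKA kubeconfig, so every call it makes flows through Rafay's secure relay.

```json
{
  "mcpServers": {
    "kubernetes": {
      "command": "/usr/local/bin/mcp-server-kubernetes",
      "env": {
        "KUBECONFIG": "/home/admin/.kube/ztka-kubeconfig.yaml"
      }
    }
  }
}
```

Since the ZTKA kubeconfig carries the administrator's identity and RBAC, anything the AI assistant does is scoped to that user's permissions and lands in the same audit trail as a manual kubectl session.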