Deploy the Kubernetes MCP Server on Rafay MKS

The Model Context Protocol (MCP) is an open standard that lets AI assistants like Claude and Cursor talk to external tools and data sources. Instead of pasting kubectl output into a chat, the AI can run Kubernetes operations directly and reason about the results.

In this blog, we deploy the open-source Kubernetes MCP Server as a blueprint add-on on a Rafay MKS cluster and connect it to Claude Desktop. You will be able to manage your cluster through natural language.

MCP deployment walkthrough — Rafay console and Claude Desktop

Prerequisites

  • A Kubernetes cluster (EKS, GKE, or any other distribution); in this blog we deploy on a Rafay MKS cluster
  • Claude Desktop (download)
  • Node.js 18+ (for npx mcp-remote, which bridges Claude Desktop's stdio to our HTTP MCP endpoint)
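Before continuing, you can sanity-check the Node.js requirement with a small shell snippet. Nothing here touches the cluster; it only inspects the local node binary:

```shell
# Check that the local Node.js is new enough for the mcp-remote bridge (18+).
NODE_VERSION=$(node --version 2>/dev/null || echo "none")
if [ "$NODE_VERSION" = "none" ]; then
  echo "Node.js not found - install Node.js 18 or newer"
else
  MAJOR=${NODE_VERSION#v}      # strip the leading "v", e.g. v20.11.1 -> 20.11.1
  MAJOR=${MAJOR%%.*}           # keep only the major version, e.g. 20
  if [ "$MAJOR" -ge 18 ]; then
    echo "Node.js $NODE_VERSION is recent enough"
  else
    echo "Node.js $NODE_VERSION is too old - upgrade to 18 or newer"
  fi
fi
```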

Step 1: Create the Add-on

  1. Go to Infrastructure → Add-ons → New Add-on
  2. Name: mcp-server-k8s
  3. Type: Kubernetes YAML
  4. Namespace: mcp-server

Create a new version and paste the following manifest. In this demo, we use the Kubernetes MCP Server from the Docker MCP Catalog. The MCP server is exposed via NodePort on port 30001.

---
apiVersion: v1
kind: Namespace
metadata:
  name: mcp-server
  labels:
    app.kubernetes.io/name: mcp-server-kubernetes

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: mcp-server-kubernetes
  namespace: mcp-server
  labels:
    app.kubernetes.io/name: mcp-server-kubernetes

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: mcp-server-kubernetes
  labels:
    app.kubernetes.io/name: mcp-server-kubernetes
rules:
  - apiGroups: [""]
    resources:
      - pods
      - pods/log
      - services
      - configmaps
      - secrets
      - namespaces
      - nodes
      - events
      - endpoints
      - persistentvolumeclaims
      - persistentvolumes
      - replicationcontrollers
      - serviceaccounts
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
  - apiGroups: ["apps"]
    resources:
      - deployments
      - deployments/scale
      - daemonsets
      - replicasets
      - statefulsets
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
  - apiGroups: ["batch"]
    resources: [jobs, cronjobs]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
  - apiGroups: ["networking.k8s.io"]
    resources: [ingresses, networkpolicies]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
  - apiGroups: ["autoscaling"]
    resources: [horizontalpodautoscalers]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
  - apiGroups: ["rbac.authorization.k8s.io"]
    resources: [roles, rolebindings, clusterroles, clusterrolebindings]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["storage.k8s.io"]
    resources: [storageclasses]
    verbs: ["get", "list", "watch"]
  - nonResourceURLs: ["/api", "/api/*", "/apis", "/apis/*"]
    verbs: ["get"]

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: mcp-server-kubernetes
  labels:
    app.kubernetes.io/name: mcp-server-kubernetes
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: mcp-server-kubernetes
subjects:
  - kind: ServiceAccount
    name: mcp-server-kubernetes
    namespace: mcp-server

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: mcp-server-config
  namespace: mcp-server
  labels:
    app.kubernetes.io/name: mcp-server-kubernetes
data:
  ENABLE_UNSAFE_STREAMABLE_HTTP_TRANSPORT: "1"
  HOST: "0.0.0.0"
  PORT: "3001"
  MASK_SECRETS: "true"

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mcp-server-kubernetes
  namespace: mcp-server
  labels:
    app.kubernetes.io/name: mcp-server-kubernetes
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: mcp-server-kubernetes
  template:
    metadata:
      labels:
        app.kubernetes.io/name: mcp-server-kubernetes
    spec:
      serviceAccountName: mcp-server-kubernetes
      automountServiceAccountToken: true
      containers:
        - name: mcp-server
          image: flux159/mcp-server-kubernetes:latest
          imagePullPolicy: Always
          ports:
            - name: http
              containerPort: 3001
              protocol: TCP
          envFrom:
            - configMapRef:
                name: mcp-server-config
          livenessProbe:
            httpGet:
              path: /health
              port: http
            initialDelaySeconds: 10
            periodSeconds: 15
            timeoutSeconds: 5
            failureThreshold: 3
          readinessProbe:
            httpGet:
              path: /ready
              port: http
            initialDelaySeconds: 5
            periodSeconds: 10
            timeoutSeconds: 5
            failureThreshold: 3
          startupProbe:
            httpGet:
              path: /health
              port: http
            initialDelaySeconds: 5
            periodSeconds: 5
            failureThreshold: 12
          resources:
            requests:
              cpu: 100m
              memory: 256Mi
            limits:
              cpu: 500m
              memory: 512Mi
      terminationGracePeriodSeconds: 30

---
apiVersion: v1
kind: Service
metadata:
  name: mcp-server-kubernetes
  namespace: mcp-server
  labels:
    app.kubernetes.io/name: mcp-server-kubernetes
spec:
  type: NodePort
  selector:
    app.kubernetes.io/name: mcp-server-kubernetes
  ports:
    - name: http
      port: 3001
      targetPort: http
      nodePort: 30001
      protocol: TCP

mcp-server-k8s add-on with mcp.yaml manifest

Add-on version v1 with mcp.yaml component
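If you also keep the manifest as a local mcp.yaml, a client-side dry run can catch YAML mistakes before the blueprint sync. This is optional and assumes kubectl is on your PATH; the snippet falls back gracefully if it is not:

```shell
# Client-side dry run: checks that the YAML parses into valid Kubernetes objects
# without creating anything. Falls back to a note if kubectl is unavailable.
RESULT=$(kubectl apply --dry-run=client -f mcp.yaml 2>/dev/null || echo "skipped - kubectl not available")
echo "$RESULT"
```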

Step 2: Add to Blueprint and Deploy

  1. Create a blueprint (e.g., mcp-enabled) based on minimal
  2. Add the mcp-server-k8s add-on
  3. Apply the blueprint to your MKS cluster via Options → Update Blueprint

Blueprint with mcp-server-k8s add-on

Wait for the blueprint sync to complete.

Blueprint sync — mcp-server-k8s deployed successfully

Verify the MCP server is running:

curl http://<NODE_IP>:30001/health
# {"status":"ok"}
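If you don't have a node IP handy, you can look one up with kubectl. This assumes your kubectl context points at the MKS cluster; the snippet degrades gracefully if it does not:

```shell
# Grab the first node's InternalIP to use as <NODE_IP>.
NODE_IP=$(kubectl get nodes \
  -o jsonpath='{.items[0].status.addresses[?(@.type=="InternalIP")].address}' \
  2>/dev/null || echo "")
if [ -n "$NODE_IP" ]; then
  echo "Health check: curl http://$NODE_IP:30001/health"
else
  echo "kubectl is not configured for this cluster"
fi
```

If you are reaching the cluster from outside its network, use the node's ExternalIP (change the jsonpath filter accordingly) or a reachable address from your provider's console.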

Step 3: Connect Claude Desktop

Edit your Claude Desktop config file, claude_desktop_config.json (on macOS it lives under ~/Library/Application Support/Claude). This is what Claude uses to connect to the endpoint. Use npx to run mcp-remote, since a globally installed binary can exit immediately:

{
  "mcpServers": {
    "kubernetes": {
      "command": "npx",
      "args": [
        "mcp-remote",
        "http://<NODE_IP>:30001/mcp",
        "--allow-http"
      ]
    }
  }
}

Replace <NODE_IP> with one of your cluster node IPs. Ensure your firewall or security group allows inbound traffic on port 30001.
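If you prefer to template the config from the shell, something like the following works. This is a sketch: 203.0.113.10 is a placeholder IP from the documentation range, and it writes to the current directory rather than the live config path:

```shell
# Template the Claude Desktop config with your node IP.
# 203.0.113.10 is a placeholder; substitute your real node IP.
NODE_IP="203.0.113.10"
cat > claude_desktop_config.json <<EOF
{
  "mcpServers": {
    "kubernetes": {
      "command": "npx",
      "args": ["mcp-remote", "http://${NODE_IP}:30001/mcp", "--allow-http"]
    }
  }
}
EOF
echo "Wrote config pointing at http://${NODE_IP}:30001/mcp"
```

Copy the generated file to Claude Desktop's config location (on macOS, ~/Library/Application Support/Claude/claude_desktop_config.json), then restart Claude Desktop.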

Restart Claude Desktop. You should see the hammer icon when the Kubernetes MCP tools are connected.

Try It

Once connected, Claude Desktop shows Kubernetes tools (Pods, Deployments, Services, Namespaces, Nodes) in the Add from kubernetes menu:

Claude Desktop with Kubernetes MCP tools

Open a new conversation and ask:

  • List all pods in all namespaces
  • Create a namespace
  • Create an nginx deployment

Claude will request permission on first use — click Allow.

What's Next

We are building a native Rafay MCP Server that will expose the full Rafay Platform through MCP: multi-cluster operations, project management, blueprints, and more. We will share details as we get close to release.