Break Glass Workflows for Developer Access to Kubernetes Clusters - Introduction

In any large-scale, production-grade Kubernetes setup, maintaining the security and integrity of the clusters is critical. However, there are exceptional circumstances—such as production outages or critical bugs—where developers need emergency access to a Kubernetes cluster to resolve issues.

This is where a "Break Glass" process comes into play. It is a controlled procedure that grants temporary, elevated access to developers in critical situations, with the appropriate safeguards in place to minimize risks.
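
As a rough, generic illustration (not Rafay's implementation), a break-glass grant in plain Kubernetes often comes down to creating a short-lived RoleBinding for the developer and deleting it once the incident is resolved. The namespace, user, and binding name below are hypothetical placeholders.

```python
# Sketch only: grant a developer temporary, elevated access to one namespace
# by creating a RoleBinding, then revoke it when the incident is over.
# The namespace, user, and binding name are hypothetical placeholders.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() when run in-cluster
rbac = client.RbacAuthorizationV1Api()

NAMESPACE = "payments-prod"          # hypothetical namespace
DEVELOPER = "jane.doe@example.com"   # hypothetical user identity
BINDING = "break-glass-jane-doe"

def grant_break_glass_access() -> None:
    """Bind the developer to the built-in 'admin' ClusterRole in one namespace."""
    body = {
        "apiVersion": "rbac.authorization.k8s.io/v1",
        "kind": "RoleBinding",
        "metadata": {
            "name": BINDING,
            "namespace": NAMESPACE,
            # Label emergency grants so they are easy to find and audit later.
            "labels": {"purpose": "break-glass"},
        },
        "roleRef": {
            "apiGroup": "rbac.authorization.k8s.io",
            "kind": "ClusterRole",
            "name": "admin",
        },
        "subjects": [
            {"apiGroup": "rbac.authorization.k8s.io", "kind": "User", "name": DEVELOPER}
        ],
    }
    rbac.create_namespaced_role_binding(namespace=NAMESPACE, body=body)

def revoke_break_glass_access() -> None:
    """Remove the emergency grant once the incident is resolved."""
    rbac.delete_namespaced_role_binding(name=BINDING, namespace=NAMESPACE)
```

In practice the grant, expiry, and revocation would be driven by an approval workflow and fully logged; this sketch only shows the underlying RBAC mechanics.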

User Access Reports for Kubernetes

Access reviews are mandated by regulations and frameworks such as SOX, HIPAA, GLBA, PCI, NYDFS, and SOC 2. They are critical to helping organizations maintain a strong risk management posture and uphold compliance. These reviews are typically conducted on a periodic basis (e.g., monthly, quarterly, or annually), depending on the organization's policies and risk tolerance.

Providing auditors with periodic user access reports for Kubernetes is a critical task for any platform team. It becomes onerous for organizations that operate tens or hundreds of Kubernetes clusters used by hundreds of app developers and SREs; doing this with manual processes is impractical.
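
To make the automation angle concrete, here is a minimal, generic sketch (not Rafay's reporting pipeline) that dumps the RBAC bindings of a single cluster as CSV rows an auditor could review; a fleet-wide report would repeat this for every cluster or kubeconfig context.

```python
# Sketch only: list who holds which roles in a single cluster as CSV rows.
import csv
import sys

from kubernetes import client, config

def collect_access_rows():
    """Yield (namespace, binding, role, subject_kind, subject_name) for every binding."""
    rbac = client.RbacAuthorizationV1Api()
    for rb in rbac.list_role_binding_for_all_namespaces().items:
        for s in rb.subjects or []:
            yield (rb.metadata.namespace, rb.metadata.name,
                   rb.role_ref.name, s.kind, s.name)
    for crb in rbac.list_cluster_role_binding().items:
        for s in crb.subjects or []:
            yield ("<cluster>", crb.metadata.name,
                   crb.role_ref.name, s.kind, s.name)

if __name__ == "__main__":
    config.load_kube_config()  # one cluster; a fleet would loop over contexts
    writer = csv.writer(sys.stdout)
    writer.writerow(["namespace", "binding", "role", "subject_kind", "subject_name"])
    writer.writerows(collect_access_rows())
```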

In this blog, we will look at why user access reports are critical for organizations and how Rafay's customers generate them with a high degree of automation.

CIS Benchmark for Kubernetes using Rafay

The Center for Internet Security (CIS) Benchmark for Kubernetes is a set of secure configuration guidelines for setting up Kubernetes infrastructure. It encapsulates best-practice security recommendations for configuring Kubernetes to support a strong security posture. The benchmark is written for the open-source, upstream Kubernetes distribution and is intended to be as universally applicable across distributions as possible.
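
As a point of reference, a common open-source way to check a cluster against these recommendations is Aqua Security's kube-bench, typically run as a Kubernetes Job. The minimal sketch below (not Rafay's tooling) omits the hostPID setting and host-path mounts that the project's upstream job.yaml adds so the tool can read node configuration files.

```python
# Sketch only: launch an open-source kube-bench scan as a Kubernetes Job.
from kubernetes import client, config

config.load_kube_config()
batch = client.BatchV1Api()

job = {
    "apiVersion": "batch/v1",
    "kind": "Job",
    "metadata": {"name": "kube-bench"},
    "spec": {
        "backoffLimit": 0,
        "template": {
            "spec": {
                "containers": [{
                    "name": "kube-bench",
                    "image": "docker.io/aquasec/kube-bench:latest",
                    "command": ["kube-bench"],
                }],
                "restartPolicy": "Never",
            }
        },
    },
}

batch.create_namespaced_job(namespace="default", body=job)
# Results are read from the pod logs once the Job completes, e.g.:
#   kubectl logs job/kube-bench
```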

In this blog, we describe how our customers perform CIS benchmark scans of their fleet of Kubernetes clusters using Rafay.

Vector Databases for Generative AI on Kubernetes

Many of our customers use Kubernetes extensively for AI/ML use cases. This is one of the reasons we have turnkey support for Nvidia GPUs on EKS, AKS, and upstream Kubernetes in on-prem data centers. Recently, several customers have looked at adding Generative AI capabilities to their applications, which requires a somewhat different technology stack.

Traditional relational databases are adept at handling structured data, which they store in tables. AI use cases, however, center on unstructured data (e.g., images, audio, and text) that is not well suited to storage and retrieval in a tabular format. This gap has opened the door for a new type of database, the vector database, which can natively store and query vector embeddings. The rapid rise of large-scale generative AI models has further propelled demand for vector databases.
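
To make the contrast with tabular lookups concrete, the sketch below shows the core query a vector database answers: given a query embedding, return the stored items with the most similar embeddings. The random data and brute-force cosine similarity are illustrative only; production vector databases add approximate-nearest-neighbor indexes (e.g., HNSW) and persistence on top of this idea.

```python
# Sketch only: brute-force nearest-neighbor search over vector embeddings.
import numpy as np

# Toy "collection": each row is the embedding of one unstructured item
# (an image, audio clip, or text chunk). Values here are random placeholders;
# in practice they come from an embedding model.
rng = np.random.default_rng(0)
embeddings = rng.normal(size=(1000, 384)).astype(np.float32)
ids = [f"doc-{i}" for i in range(len(embeddings))]

def top_k(query: np.ndarray, k: int = 5):
    """Return the k stored ids closest to the query by cosine similarity."""
    q = query / np.linalg.norm(query)
    m = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    scores = m @ q                       # cosine similarity against every row
    best = np.argsort(scores)[::-1][:k]  # indices of the k highest scores
    return [(ids[i], float(scores[i])) for i in best]

print(top_k(rng.normal(size=384).astype(np.float32)))
```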

In this blog, we will review why vector databases are critical for AI and Generative AI workloads. We will then look at how you can deploy and operate vector databases on Kubernetes using the Rafay Kubernetes Operations Platform in a single step.