
Mohan Atreya

Encrypt your Kubernetes Backups using Server Side Encryption

As Kubernetes adoption grows rapidly in enterprises, protecting cluster data is critical. Backups ensure business continuity in case of failures, accidental deletions, or security breaches. For over two years, users have depended on the integrated backup/restore capability in the Rafay Platform to dramatically simplify Kubernetes backup and restore operations. When backup artifacts are stored in public cloud environments, security can be a concern for organizations. One of the most effective ways to secure these backups is Server-Side Encryption (SSE). SSE encrypts data at rest within cloud storage services, protecting it from unauthorized access while minimizing operational overhead.

In this blog, I describe the value of SSE for Kubernetes backups and how it enhances security and compliance. I also describe how administrators can configure and use SSE for backups in the Rafay Platform.
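As a general (not Rafay-specific) illustration of what an SSE-enabled backup target looks like, here is a Velero-style BackupStorageLocation sketch that requests server-side encryption for backup artifacts stored in S3. The bucket name, region, and KMS key alias are placeholders.

```yaml
# Sketch only: a Velero-style BackupStorageLocation with SSE-KMS.
# Bucket name, region, and KMS key alias below are hypothetical.
apiVersion: velero.io/v1
kind: BackupStorageLocation
metadata:
  name: default
  namespace: velero
spec:
  provider: aws
  objectStorage:
    bucket: my-cluster-backups     # placeholder bucket name
  config:
    region: us-west-2
    # Encrypt objects at rest with a customer-managed KMS key
    kmsKeyId: "alias/backup-key"   # placeholder key alias
```

Alternatively, enabling default encryption on the bucket itself ensures every object written to it is encrypted at rest, regardless of client-side settings.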


Info

Learn about the integrated Backup/Restore capabilities in the Rafay Platform.

Flatcar Linux: A Great Fit for Kubernetes

In the fast-evolving landscape of containerized applications and cloud-native technologies, choosing the right operating system for your Kubernetes cluster can sometimes make a very big difference. Enter Flatcar Container Linux, an open-source, minimal, and immutable Linux distribution tailored specifically for running containers.

Flatcar is an excellent choice for Kubernetes and modern cloud-native environments. In August 2024, Flatcar Linux was accepted as a CNCF project.

This is a 3-part blog series. In this blog, we'll explore what Flatcar Linux is, why it’s uniquely suited for Kubernetes, and the benefits it brings relative to generic Linux.



What Is Flatcar Linux?

Flatcar Linux is a lightweight and container-optimized Linux distribution designed to provide a secure, consistent, and low-maintenance platform for containerized applications. Originally forked from CoreOS after its deprecation, Flatcar has carried forward the same principles of immutability, simplicity, and reliability, making it a preferred choice for cloud-native deployments.

The most interesting capabilities of Flatcar are:

Immutable Infrastructure

The root file system is read-only and immutable, preventing accidental or malicious changes.

Atomic Updates

Updates are applied atomically, ensuring consistency and eliminating the risk of partial updates.

Container-Native Design

It is optimized for running containers, designed specifically with Kubernetes in mind.

Reduced Attack Surface

The minimalist design reduces the attack surface, and security features like SELinux and secure defaults are enabled out of the box.
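Because the root filesystem is read-only, host configuration on Flatcar is declared up front and applied by Ignition at first boot, rather than mutated in place afterwards. A minimal Butane-style sketch (the file contents and SSH key below are illustrative placeholders):

```yaml
# Illustrative Butane config, transpiled to Ignition with the `butane` tool
# and supplied to the node at provisioning time. Values are placeholders.
variant: flatcar
version: 1.0.0
passwd:
  users:
    - name: core
      ssh_authorized_keys:
        - ssh-ed25519 AAAA...placeholder
storage:
  files:
    - path: /etc/hostname
      mode: 0644
      contents:
        inline: k8s-node-01
```

This declarative approach is what keeps every node's configuration reproducible: the same Ignition config always yields the same host.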


Why Flatcar Linux Is a Good Fit for Kubernetes

Kubernetes, as a container orchestration platform, relies on the underlying operating system to provide a stable, efficient, and secure foundation. Here are some reasons why Flatcar Linux is an excellent fit for Kubernetes clusters:

1. Minimal and Lightweight

Flatcar Linux is stripped down to the essentials required for container workloads. This minimalism reduces complexity and resource consumption, ensuring Kubernetes nodes are efficient and responsive.

2. Immutable

In a Kubernetes cluster, consistency across nodes is crucial. Flatcar’s immutable infrastructure ensures that all nodes run the same configuration, eliminating configuration drift and making it easier to manage large-scale deployments.

3. Automatic and Atomic Updates

Flatcar’s update mechanism is built with atomicity in mind. Updates are applied as a single transaction and can be rolled back if necessary. This is invaluable in a Kubernetes environment where uptime and reliability are critical.

4. Security First

Flatcar provides a minimal attack surface, coupled with features like read-only file systems and SELinux. This ensures that Kubernetes nodes are resilient against vulnerabilities and exploits.

5. Container-Optimized Kernel

Flatcar comes with a kernel optimized for running containers. It integrates seamlessly with Docker, Kubernetes, and other container runtimes, ensuring smooth performance and compatibility.


Conclusion

Flatcar Linux is an excellent operating system for Kubernetes and modern containerized workloads. Its immutable design, security features, and minimal footprint align perfectly with the needs of cloud-native environments. By adopting Flatcar Linux, organizations can achieve greater operational efficiency, enhanced security, and improved reliability for their Kubernetes clusters.

If you’re looking for a secure, reliable, and easy-to-manage operating system for your Kubernetes environment, Flatcar Linux is well worth considering. Its purpose-built nature ensures that your infrastructure is optimized for the demands of modern, containerized workloads. Visit flatcar.org to learn more and get started!

In the second blog, we will demonstrate how you can configure, install, and operate Flatcar Linux. In the third and final blog in the series, we will describe how you can provision and operate Rafay MKS Kubernetes clusters on Flatcar Linux-based nodes. Support for Flatcar Linux with Rafay MKS is coming in a few weeks.

EKS Auto Mode - Considerations

In the introductory blog on Auto Mode for Amazon EKS, we described the basics of this new capability that was announced at AWS re:Invent 2024. In this blog, we will review considerations that organizations need to factor in before using EKS in Auto Mode.

Note

Please consider this a living, evolving document. EKS Auto Mode is relatively new, and we will update this blog with new learnings and findings.


EKS Auto Mode - An Introduction

The Rafay team just got back late last week from an incredibly busy AWS re:Invent 2024. Congratulations to the EKS product team, led by our friend Nate Taber, on the launch of Auto Mode for EKS.

Since the announcement last week, several customers have reached out and asked for our thoughts on the newly launched EKS Auto Mode service. A number of blogs already describe how EKS Auto Mode works. In this blog series, I will instead attempt to provide perspective on "Why?", "Why now?", and "What does this mean for the industry?".


The Kube-OVN CNI: A Powerful Networking Solution for Kubernetes

Kubernetes has become the de facto standard for orchestrating containerized applications, but efficient networking remains one of the biggest challenges. For Kubernetes networking, Container Network Interface (CNI) plugins handle the essential task of managing the network configuration between pods, nodes, and external systems. Among these CNI plugins, Kube-OVN stands out as a feature-rich and enterprise-ready solution, designed for cloud-native applications requiring robust networking features.

In this blog, we will discuss how Kube-OVN differs from popular CNI plugins such as Calico and Cilium, and the use cases where it is particularly useful.
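To give a flavor of the networking features on offer, Kube-OVN exposes constructs like namespaced subnets as Kubernetes custom resources. A sketch of a Subnet resource (names and CIDR ranges are illustrative placeholders):

```yaml
# Sketch of a Kube-OVN Subnet custom resource that binds a dedicated
# CIDR range to a namespace. Name, CIDR, and namespace are placeholders.
apiVersion: kubeovn.io/v1
kind: Subnet
metadata:
  name: team-a-subnet
spec:
  protocol: IPv4
  cidrBlock: 10.66.0.0/16
  gateway: 10.66.0.1
  namespaces:
    - team-a
```

Pods created in the bound namespace then receive addresses from that subnet, a level of network segmentation that flat pod networks do not provide out of the box.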


Spatial Partitioning of GPUs using Nvidia MIG

In the prior blogs, we discussed why GPUs are managed differently in Kubernetes, how the GPU Operator helps streamline management, and various strategies for sharing GPUs on Kubernetes. In 2020, Nvidia introduced Multi-Instance GPU (MIG), which takes GPU sharing to a different level.

In this blog, we will start by reviewing common industry use cases for MIG and then dive deeper into how it is configured and used.
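As a preview of what MIG configuration looks like, partitioning is typically declared as a named profile layout. A sketch in the style of Nvidia's mig-parted configuration, assuming an A100-class GPU (profile names depend on the GPU model):

```yaml
# Sketch of a mig-parted style config that splits each GPU into
# seven 1g.5gb instances. The profile applies to A100-class GPUs;
# other models support different MIG profiles.
version: v1
mig-configs:
  all-1g.5gb:
    - devices: all
      mig-enabled: true
      mig-devices:
        "1g.5gb": 7
```

Each resulting MIG instance is then advertised to Kubernetes as its own schedulable resource, with hardware-level isolation between instances.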


GPU Sharing Strategies in Kubernetes

In the previous blogs, we discussed why GPUs are managed differently in Kubernetes and how the GPU Operator can help streamline management. In Kubernetes, although you can request fractional CPU units for workloads, you cannot request fractional GPU units.

Pod manifests must request GPU resources in integers, which results in an entire physical GPU being allocated to one container even if that container only requires a fraction of its resources. In this blog, we will describe two popular and commonly used strategies for sharing a GPU on Kubernetes.
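One such strategy, time-slicing, works by having the device plugin advertise each physical GPU as multiple schedulable replicas. A sketch in the style of the NVIDIA device plugin's sharing configuration (the replica count is an illustrative choice):

```yaml
# Sketch of a time-slicing config for the NVIDIA device plugin:
# each physical GPU is advertised as 4 replicas, so up to 4 Pods
# can be scheduled per GPU. Replica count is an illustrative value.
version: v1
sharing:
  timeSlicing:
    resources:
      - name: nvidia.com/gpu
        replicas: 4
```

Note that time-slicing shares compute by interleaving workloads; it does not partition or isolate GPU memory between the Pods.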


Why do we need a GPU Operator for Kubernetes

This is a follow-up to the previous blog, where we discussed device plugins for GPUs in Kubernetes and reviewed why the Nvidia device plugin is necessary for GPU support. A GPU Operator is needed to automate and simplify the management of GPUs for workloads running on Kubernetes.

In this blog, we will look at how a GPU Operator helps automate and streamline operations through the lens of a market-leading implementation by Nvidia.

Figure: GPU management without and with a GPU Operator

Using GPUs in Kubernetes

Unlike CPU and memory, GPUs are not natively supported in Kubernetes. Kubernetes manages CPU and memory natively: it can automatically schedule containers based on these resources, allocate them to Pods, and handle resource isolation and over-subscription.

GPUs are considered specialized hardware and require the use of device plugins to be supported in Kubernetes. Device plugins make Kubernetes GPU-aware, allowing it to discover, allocate, and schedule GPUs for containerized workloads. Without a device plugin, Kubernetes is unaware of the GPUs available on the nodes and cannot assign them to Pods. In this blog, we will discuss why GPUs are not natively supported and understand how device plugins help address this gap.
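Once a device plugin such as Nvidia's is installed, GPUs appear as an extended resource that Pods can request just like CPU or memory. A minimal sketch (the image tag is an illustrative choice):

```yaml
# With the NVIDIA device plugin installed, a Pod requests a whole GPU
# via the extended resource name nvidia.com/gpu. Image tag is illustrative.
apiVersion: v1
kind: Pod
metadata:
  name: gpu-demo
spec:
  restartPolicy: Never
  containers:
    - name: cuda
      image: nvidia/cuda:12.2.0-base-ubuntu22.04
      command: ["nvidia-smi"]
      resources:
        limits:
          nvidia.com/gpu: 1
```

Without the device plugin, the `nvidia.com/gpu` resource simply does not exist in the scheduler's view, and this Pod would remain unschedulable.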


Rafay Newsletter-September 2024

Welcome to the September 2024 edition of the Rafay customer newsletter. This month, we’re excited to bring you the latest product enhancements and insightful content crafted to help you make the most of your AI/ML, Kubernetes, and cloud-native operations.

Every month, we push out a number of incremental updates to our product documentation, new functionality, our YouTube channel, tech blogs, and more. Our users tell us it would be great if we summarized all of the month's updates in a newsletter that they can read or listen to in 10 minutes.
