We just wrapped up our annual hackathon earlier this month. The theme for this hackathon was AI and Generative AI, and our teams had the opportunity to prototype and demonstrate fascinating solutions, especially ones based on Generative AI.
We had eleven fully functioning submissions spanning both "external" and "internal" use cases. The panel that reviewed and judged the submissions observed extensive use of the following technologies associated with Generative AI.
In our recent release in May, we enhanced our turnkey integration with OPA Gatekeeper. In this blog, I describe why we worked on this enhancement.
Many of our customers who operate mission-critical applications on Kubernetes clusters have to comply with organizational policies and best practices. These customers rely on Rafay's turnkey integration with OPA Gatekeeper in the Rafay Kubernetes Operations Platform.
Prior to this release, our customers would use Rafay to:
Centrally orchestrate and enforce OPA Gatekeeper policies, and
Centrally aggregate OPA Gatekeeper violations in the audit logging system
They would then use Rafay's audit log aggregator to push the OPA violations in real time to their corporate SIEM, such as Splunk.
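As an illustration of the kind of policy customers enforce this way, the sketch below shows a Gatekeeper constraint requiring an `owner` label on every namespace. It assumes the `K8sRequiredLabels` constraint template from the Gatekeeper community policy library is already installed on the cluster; the constraint name and label key are placeholders:

```yaml
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sRequiredLabels
metadata:
  name: ns-must-have-owner        # placeholder name
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Namespace"]
  parameters:
    labels: ["owner"]             # placeholder label key
```

Violations of a constraint like this surface in Gatekeeper's audit results, which is exactly what gets aggregated and forwarded to the SIEM.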
Since most "Infrastructure and Operations" personnel and "App Developers" are not given access to the corporate SIEM, they have been asking Rafay to develop dashboards that help them answer critical questions related to policy compliance, such as:
What is my current posture (i.e., a summary), and how has my posture evolved over time (i.e., a trend)?
Rafay’s Kubernetes Operations Platform includes a GitOps service that enables infrastructure orchestration (Infra GitOps) and application deployment (App GitOps) through multi-stage, git-triggered pipelines. In this blog post, we will discuss setting up a simple pipeline to sync cluster configuration to a Git repo in three easy steps.
In this example, we will start with a brownfield cluster that we will import and convert to a ‘Rafay managed’ cluster. We will then initiate a ‘system sync’ operation to write back the cluster configuration to a specified Git repo.
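To sketch what the write-back produces: the synced configuration is a declarative cluster spec committed to the Git repo. The fragment below is illustrative only; the field names approximate a Rafay cluster spec and are not authoritative:

```yaml
# Illustrative sketch only; field names are approximate, not authoritative.
kind: Cluster
metadata:
  name: demo-cluster        # placeholder
  project: defaultproject   # placeholder
spec:
  type: imported
  blueprint: default
```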
This blog is a brief description of one of the enhancements from our recent release in May 2023. This feature was frequently requested by our customers, and it provides them with fine-grained configuration of email notifications at the project level.
The Rafay platform acts as a single pane of glass providing a centralized view of all clusters and applications spanning the organization. Customers have had the option for over two years to leverage the Visibility & Monitoring service for centralized monitoring, alerting and notifications. This is a turnkey integration available for customers when they enable the visibility & monitoring managed add-on in their cluster blueprint.
The Visibility & Monitoring service can be leveraged to automatically generate and aggregate Alerts centrally for developers and operations personnel in their Orgs. In addition to centrally aggregated alerts, users have always had the option to enable the platform to proactively send email notifications when alerts are generated. To do this, administrators specify email addresses for recipients that need to receive email notifications every time something needs immediate attention. Read on to learn more about the enhancement below.
```mermaid
flowchart LR
  subgraph c1[Cluster]
    direction TB
    bp1[Cluster <br>Blueprint] --> vis1[Visibility & <br>Monitoring <br> Managed Add-on]
  end
  subgraph c2[Cluster]
    direction TB
    bp2[Cluster <br>Blueprint] --> vis2[Visibility & <br>Monitoring <br> Managed Add-on]
  end
  vis1 --> rafay
  vis2 --> rafay
  subgraph rafay[Rafay Controller]
    direction TB
    notifier[Visibility & Monitoring <br> Service]
    proja[Project A] --> notifier
    projb[Project B] --> notifier
  end
  rafay --> |Notification| admin[Administrators]
  classDef box fill:#fff,stroke:#000,stroke-width:1px,color:#000;
  class c1,bp1,vis1,c2,bp2,vis2,notifier,admin box
```
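Conceptually, the enhancement lets administrators scope notification recipients per project rather than Org-wide. The fragment below is a hypothetical sketch of such a configuration; the key names are illustrative and do not reflect Rafay's actual schema:

```yaml
# Hypothetical sketch; key names are illustrative only.
project: project-a
notifications:
  email:
    enabled: true
    recipients:
      - oncall-project-a@example.com   # placeholder address
```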
In our recent release in May, we added support for a number of new features and enhancements. One of these was support for new EKS cluster provisioning based on Kubernetes v1.26.
Customers have shared with us that they would like to provision new EKS clusters using new Kubernetes versions so that they do not have to plan and schedule Kubernetes upgrades for these clusters right away. On the other hand, they want to be extremely careful with their existing clusters and plan and test in-place upgrades for these. There is no benefit in rushing this and risking impact to mission-critical applications.
Based on this feedback, starting with this release, we will introduce support for new Kubernetes versions for CSPs (i.e., EKS, AKS and GKE) in two phases. In the first phase, which will arrive very quickly, we will support new cluster provisioning for the new Kubernetes version. This requires us to extensively validate support for ALL supported interfaces in the platform (see details below). We will follow up with a Phase 2 that brings support for zero-touch, in-place upgrades.
Important
Support for zero-touch, in-place upgrades in Rafay from EKS v1.25 to v1.26 will follow in a few weeks. This requires us to add support for new preflight tests and to perform extensive testing and validation.
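For illustration, new cluster provisioning on the new version comes down to selecting Kubernetes v1.26 in the cluster configuration. The sketch below uses an eksctl-style ClusterConfig; the cluster name, region, and node group values are placeholders:

```yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: demo-cluster       # placeholder
  region: us-west-2        # placeholder
  version: "1.26"          # the new Kubernetes version
managedNodeGroups:
  - name: ng-1
    instanceType: m5.large
    desiredCapacity: 2
```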
Early this year, we released the ability to import existing brownfield clusters into the Rafay platform using an official Helm chart. With the Helm-based import, it's a simple three-step process:
a) Add the rafay-helm-charts repository to your local Helm configuration
b) Download the values.yaml using the API, RCTL or TF. This allows you to customize certain configurations as needed.
c) Run helm install with the custom values. This will bootstrap the Rafay operator onto the cluster.
Using the API or TF, you can download the default values.yaml file that will be used for the Helm install. Whether you are operating with the SaaS or a self-hosted controller, the API endpoints, tokens, registries, etc. will be populated with up-to-date information. You can change settings such as the Rafay relay image's CPU and memory limits to tune performance accordingly. An example of what the values.yaml file looks like is shown here:
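Note that the fragment below is an illustrative sketch only; the key names are hypothetical and do not reflect the chart's actual schema. Always download the real values.yaml via the API, RCTL or TF as described above:

```yaml
# Hypothetical sketch; key names do not reflect the chart's actual schema.
controller:
  apiEndpoint: console.example.rafay.dev   # placeholder endpoint
  token: "<bootstrap-token>"               # placeholder credential
registry: registry.example.rafay.dev       # placeholder image registry
relay:
  resources:
    limits:
      cpu: 100m        # tune to your environment
      memory: 128Mi    # tune to your environment
```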
In the YouTube video below, I show an example of how you can use the Rafay API to download the values.yaml file, customize it according to your needs, and then import the cluster into Rafay using Helm.
Sincere thanks to readers of our blog who spend time reading our product blogs. Please Contact the Rafay Product Team if you would like us to write about other topics.
In late May 2023, Microsoft announced General Availability of Azure Linux Container Host. This is based on the "CBL-Mariner" OSS project maintained by Microsoft.
It is an operating system image that is optimized for running container workloads on Azure Kubernetes Service (AKS).
The OS image is based on Azure Linux, an open-source Linux distribution created and maintained by Microsoft.
It is lightweight, containing only the packages needed to run container workloads.
It is hardened based on validation tests and is compatible with Azure agents.
Upgrading a Kubernetes cluster is a crucial process that ensures your infrastructure stays up-to-date with the latest features, bug fixes, and security patches. As part of this process, several components within the cluster undergo upgrades.
In this blog post, we will explore the components that typically get upgraded during a cluster upgrade and highlight some of the periodic upgrades that both Cloud Service Providers (CSPs) and Rafay undertake to enhance cluster performance and stability.
Our recent release update in May adds support for a number of new features and enhancements, and we have written about these in our blogs. This blog is focused on Cluster Templates for GKE, which enable customers to implement developer self-service for Kubernetes clusters.
We added support for cluster templates in early 2022, starting with Amazon EKS, followed by cluster templates for Azure AKS and, with this release, cluster templates for Google GKE. Common use cases for cluster templates are "Ephemeral Clusters" for lower environments such as:
Our recent release update in May to our Preview environment adds support for a number of new features and enhancements. We will write about the other new features in separate blogs. This blog is focused on our turnkey support for Amazon EKS v1.25.
Both new cluster provisioning and in-place upgrades of existing EKS clusters are supported. As with most Kubernetes releases, this version also deprecates and removes a number of features. To ensure there is zero impact to our customers, we have made sure that every feature in the Rafay Kubernetes Operations Platform has been validated on this Kubernetes version.
This release will be promoted from Preview to Production in a few days and will be made available to all customers.
Note that no action is needed on the part of our SaaS customers with the new release. Once the rollout is completed, all they need to do is learn about the new features and determine how and when they would like to use them.