
2023

Rafay's AI Hackathon 2023: Advancements To Improve Our Customer Experience

We just wrapped up our annual hackathon earlier this month. The theme for this hackathon was AI and Generative AI, and our teams had the opportunity to prototype and demonstrate fascinating solutions, especially ones based on Generative AI.

We had eleven (11) fully functioning submissions spanning both "external" and "internal" use cases. The panel that reviewed and judged the submissions observed extensive use of the following technologies associated with Generative AI.

[Word cloud: Generative AI technologies used across the hackathon submissions]

Org-wide Dashboards for OPA Gatekeeper

In our recent release in May, we enhanced our turnkey integration with OPA Gatekeeper. In this blog, I describe why we worked on this enhancement.

Many of our customers who operate mission-critical applications on Kubernetes clusters have to comply with organizational policies and best practices. These customers depend on Rafay's turnkey integration with OPA Gatekeeper in the Rafay Kubernetes Operations Platform.

Prior to this release, our customers would use Rafay to

  • Centrally orchestrate and enforce OPA Gatekeeper policies, and
  • Centrally aggregate OPA Gatekeeper violations in the audit logging system

They would then use Rafay's audit log aggregator to push the OPA Gatekeeper violations in real time to their corporate SIEM, such as Splunk.

Since most "Infrastructure and Operations" personnel and "App Developers" are not given access to the corporate SIEM, they have been asking Rafay to develop dashboards that help them answer critical questions about policy compliance, such as:

What is my current posture (i.e., a summary), and how has my posture evolved over time (i.e., a trend)?
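
For context, the violations that feed these dashboards come from Gatekeeper constraints. Below is the standard K8sRequiredLabels example from the Gatekeeper documentation, run in audit-only mode (the constraint name and label are illustrative, and the matching ConstraintTemplate must already be installed):

apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sRequiredLabels
metadata:
  name: ns-must-have-owner
spec:
  enforcementAction: dryrun   # audit only: record violations, do not block
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Namespace"]
  parameters:
    labels: ["owner"]         # every Namespace must carry an 'owner' label

Each Namespace missing the label shows up as a violation in Gatekeeper's audit results, which Rafay aggregates centrally.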

Create a Git Pipeline on Rafay in 3 Easy Steps

Rafay’s Kubernetes Operations Platform includes a GitOps service that enables infrastructure orchestration (Infra GitOps) and application deployment (App GitOps) through multi-stage, git-triggered pipelines. In this blog post, we will discuss setting up a simple pipeline to sync cluster configuration to a Git repo in 3 easy steps.

In this example, we will start with a brownfield cluster that we will import and convert to a ‘Rafay managed’ cluster. We will then initiate a ‘system sync’ operation to write back the cluster configuration to a specified Git repo.
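
To make the write-back concrete, here is a hypothetical sketch of the kind of cluster-configuration file a system sync might commit to the Git repo (the field names are illustrative only, not the exact schema Rafay generates):

# Illustrative only: the real file committed by system sync is
# generated by Rafay and its schema may differ from this sketch.
kind: Cluster
metadata:
  name: imported-cluster-01   # hypothetical cluster name
  project: defaultproject     # hypothetical project
spec:
  type: imported
  blueprint: default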


Per Project Settings for Notifications

This blog is a brief description of one of the enhancements from our recent release in May 2023. This frequently requested enhancement provides customers with fine-grained configuration of email notifications at the project level.

The Rafay platform acts as a single pane of glass providing a centralized view of all clusters and applications spanning the organization. Customers have had the option for over two years to leverage the Visibility & Monitoring service for centralized monitoring, alerting and notifications. This is a turnkey integration available for customers when they enable the visibility & monitoring managed add-on in their cluster blueprint.

The Visibility & Monitoring service can be leveraged to automatically generate and aggregate alerts centrally for developers and operations personnel in their Orgs. In addition to centrally aggregated alerts, users have always had the option to enable the platform to proactively send email notifications when alerts are generated. To do this, administrators specify email addresses for recipients that need to be notified every time something requires immediate attention. Read on for more about the enhancement below.

flowchart LR
    subgraph c1[Cluster]
    direction TB
        bp1[Cluster <br>Blueprint] -->
        vis1[Visibility & <br>Monitoring <br> Managed Add-on]
    end

    subgraph c2[Cluster]
    direction TB
        bp2[Cluster <br>Blueprint] -->
        vis2[Visibility & <br>Monitoring <br> Managed Add-on]
    end

    vis1-->rafay
    vis2-->rafay

    subgraph rafay[Rafay Controller]
    direction TB
        notifier[Visibility & Monitoring <br> Service]
        proja[Project A]-->notifier
        projb[Project B]-->notifier
    end

    rafay --> |Notification|admin[Administrators]

    classDef box fill:#fff,stroke:#000,stroke-width:1px,color:#000;
    classDef spacewhite fill:#ffffff,stroke:#fff,stroke-width:0px,color:#000
    class c1,bp1,vis1,c2,bp2,vis2,proja,projb,notifier,admin box
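
As an illustration, per-project notification settings could be expressed along these lines (a purely hypothetical sketch; in the platform this is configured per project through the Console, and these field names are not the actual Rafay schema):

# Hypothetical illustration of project-scoped notification settings;
# not the actual Rafay configuration schema.
project: project-a
notifications:
  email:
    enabled: true
    recipients:
      - ops-team-a@example.com   # only Project A's operators are notified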

Amazon EKS v1.26 Clusters using Rafay

In our recent release in May, we added support for a number of new features and enhancements. One of these was support for new EKS cluster provisioning based on Kubernetes v1.26.

Customers have shared with us that they would like to provision new EKS clusters on new Kubernetes versions so that they do not have to plan and schedule Kubernetes upgrades for these clusters right away. On the other hand, they want to be extremely careful with their existing clusters and plan and test in-place upgrades for those. There is no benefit in rushing this and risking impact to mission-critical applications.

Based on this feedback, starting with this release, we plan to introduce support for new Kubernetes versions for CSPs (i.e. EKS, AKS and GKE) in two phases. In the first phase, which will arrive quickly, we will support provisioning of new clusters on the new Kubernetes version. This requires us to extensively validate ALL supported interfaces in the platform (see details below). Phase 2, which will follow, brings support for zero-touch in-place upgrades.

Important

Support for zero-touch, in-place upgrades in Rafay from EKS v1.25 to v1.26 will follow in a few weeks. This requires us to add new preflight tests and perform extensive testing and validation.
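
For illustration, the relevant knob when provisioning a new cluster is the version pin. A minimal sketch modeled on eksctl's ClusterConfig (Rafay's EKS cluster spec is similar in spirit, but the exact schema may differ; the name and region are placeholders):

# Sketch modeled on eksctl's ClusterConfig; the exact Rafay
# cluster spec may differ.
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: demo-cluster      # placeholder name
  region: us-west-2       # placeholder region
  version: "1.26"         # provision new clusters directly on v1.26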

Helm-Based Import

Early this year, we released the ability to import existing brownfield clusters into the Rafay platform using an official Helm chart. With the Helm-based import, it is a simple 3-step process:

  • Add the rafay-helm-charts repository to your Helm repos.
  • Download the values.yaml using the API, RCTL or TF. This allows you to customize certain configurations as needed.
  • Run helm install with the custom values. This bootstraps the Rafay operator onto the cluster.

Steps for this process are documented here.
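
A minimal shell sketch of the three steps (the repository URL and chart name below are placeholders; use the actual values from the documentation):

# Step 1: add the Rafay chart repository (URL is a placeholder)
helm repo add rafay-helm-charts https://example.com/rafay-helm-charts
helm repo update

# Step 2: download values.yaml via the API, RCTL or TF and
# customize it as needed (e.g., relay resource limits)

# Step 3: install the bootstrap chart with the custom values
# (chart name is a placeholder)
helm install rafay-operator rafay-helm-charts/cluster-bootstrap -f values.yaml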

Advantages of using Helm-Based Import

  • Automate through the same Helm-based interface you are used to today and add this to your existing Helm automation
  • Use a custom values.yaml file, just as you would with standard Helm, to deploy the same bootstrap chart to multiple clusters
  • Benefit from expanded and deeper integration with our technology partners; for example, you can install Rafay through the AWS Marketplace

Values.yaml file customization

Using the API or TF, you can download the default values.yaml file that will be used for the Helm install. Whether you are operating with the SaaS or a self-hosted controller, the API endpoints, tokens, registries, etc. will be populated with up-to-date information. You can change settings such as the Rafay relay image's CPU and memory limits to fine-tune performance. An example of what the values.yaml file looks like is shown here:

global:
  Rafay:
    ClusterLabels:
      rafay.dev/clusterID: 'kg1nqzk'
      rafay.dev/clusterLocation: 'newyorkcity-us'
      rafay.dev/clusterName: 'do-c'
      rafay.dev/clusterType: 'imported'
      rafay.dev/kubernetesProvider: 'OTHER'

connector:
  image:
    repository: "registry.rafay-edge.net/rafay/rafay-connector"
    pullPolicy: IfNotPresent
    tag: "r1.25.0-1"

controller:
  image:
    repository: "registry.rafay-edge.net/rafay/cluster-controller"
    pullPolicy: IfNotPresent
    tag: "r1.25.0-1"

relay:
  image:
    repository: "registry.rafay-edge.net/rafay/rafay-relay-agent"
    pullPolicy: IfNotPresent
    tag: "r1.25.0-1"
  resources:
    requests:
      cpu: 100m
      memory: 128Mi
    limits:
      cpu: 500m
      memory: 512Mi

initContainer:
  image:
    repository: "registry.rafay-edge.net/rafay/busybox"
    pullPolicy: IfNotPresent
    tag: "1.33"

token: "ci174q5rhpjof3hkjv40"
api_addr: "api.rafay.dev."
control_addr: "control.rafay.dev."
allow_insecure_bootstrap: ""
cluster_id: 'kg1nqzk'
max_dials: '2'
relays: '[{"token":"ci174q5rhpjof3hkjv4g","addr":"app.rafay.dev.:443","endpoint":"*.connector.kubeapi-proxy.rafay.dev.:443","name":"rafay-core-relay-agent","templateToken":"bsegkge8bg0jn4l6pdjg"}]'
http_proxy: ""
https_proxy: ""
no_proxy: ""
proxy_auth: ""
openshift: false

Example

In the YouTube video below, I show an example of how you can use the Rafay API to download the values.yaml file, customize it according to your needs, and then import the cluster into Rafay using Helm.

Blog Ideas

Sincere thanks to the readers who spend time with our product blogs. Please contact the Rafay Product Team if you would like us to write about other topics.


Azure Linux Container Host for AKS Clusters

In late May 2023, Microsoft announced the General Availability of the Azure Linux Container Host. This is based on the "CBL-Mariner" OSS project maintained by Microsoft.

  • It is an operating system image that is optimized for running container workloads on Azure Kubernetes Service (AKS).
  • The OS image is maintained by Microsoft and based on Microsoft Azure Linux, an open-source Linux distribution created by Microsoft.
  • It is lightweight, containing only the packages needed to run container workloads.
  • It is hardened based on validation tests and is compatible with Azure agents.
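
For reference, you can select this OS image for an AKS node pool with the Azure CLI's --os-sku flag (the resource group, cluster, and pool names below are illustrative):

# Add an AKS node pool running the Azure Linux container host
az aks nodepool add \
  --resource-group my-rg \
  --cluster-name my-aks-cluster \
  --name azlinuxpool \
  --os-sku AzureLinux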

Understanding Component Upgrades in an Upstream Rafay MKS Cluster

Upgrading a Kubernetes cluster is a crucial process that ensures your infrastructure stays up-to-date with the latest features, bug fixes, and security patches. As part of this process, several components within the cluster undergo upgrades.

In this blog post, we will explore the components that typically get upgraded during a cluster upgrade and highlight some of the periodic upgrades that both Cloud Service Providers (CSPs) and Rafay undertake to enhance cluster performance and stability.
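
Before and after an upgrade, component versions can be verified with standard kubectl commands, for example:

# Report the client and API server (control plane) versions
kubectl version
# Show the kubelet version per node; -o wide also shows the OS
# image and container runtime version, which may change on upgrade
kubectl get nodes -o wide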

Developer Self Service via Cluster Templates

Our recent release update in May adds support for a number of new features and enhancements, and we have written about these in our blogs. This blog focuses on Cluster Templates for GKE, which enable customers to implement developer self-service for Kubernetes clusters.

We added support for cluster templates in early 2022, starting with Amazon EKS, followed by cluster templates for Azure AKS, and now, with this release, cluster templates for Google's GKE. Common use cases for cluster templates are "ephemeral clusters" for lower environments, such as the following (see the sketch after this list):

  • Developer Test Beds
  • QA environments
  • Product support to replicate customer issues
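
To illustrate the self-service model, here is a purely hypothetical sketch of a template with guardrails (these field names are not the actual Rafay cluster template schema): the platform team fixes most settings and exposes only a few parameters that developers may override.

# Hypothetical illustration of a cluster template with guardrails;
# not the actual Rafay schema.
kind: ClusterTemplate
metadata:
  name: gke-ephemeral-dev
spec:
  provider: gke
  region: us-central1              # fixed by the platform team
  blueprint: default               # fixed by the platform team
  overridable:                     # the only knobs developers may set
    nodeCount: { min: 1, max: 5 }
    machineType: [e2-standard-4, e2-standard-8]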