
v2.11-SaaS

Expected rollout date: November 5, 2024 for Production Orgs


Amazon EKS

Ubuntu 22.04 Node AMI Family Support

Added support for Ubuntu 22.04 AMI family for node groups, providing enhanced compatibility and performance improvements for clusters running on Ubuntu.

Ubuntu 22.04

Pod Identity

The EKS Pod Identity feature simplifies granting AWS IAM permissions to Kubernetes applications running in an Amazon EKS cluster. With this enhancement, support for this feature is being added for clusters managed through the Rafay platform. Support is being added for:

  • Installation of the Amazon EKS Pod Identity Agent Add-on: This agent runs as a DaemonSet on the cluster. It can be installed both on Day 0 for new clusters and on Day 2 for existing clusters.
  • Creation of Pod Identity Associations: This allows specific Kubernetes service accounts to be associated with IAM roles.
  • Migrating existing IRSA to Pod Identity associations: This allows migrating existing IAM Roles for Service Accounts (IRSA) to Pod Identity associations.
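Conceptually, a Pod Identity association maps a Kubernetes service account (in a namespace) to an IAM role. A minimal sketch of the inputs involved; the field names follow the AWS EKS CreatePodIdentityAssociation API, while the cluster, namespace, and role names are hypothetical examples:

```python
# Sketch: the inputs to an EKS Pod Identity association. Field names follow
# the AWS CreatePodIdentityAssociation API; all values below are examples.
def pod_identity_association(cluster_name, namespace, service_account, role_arn):
    return {
        "clusterName": cluster_name,
        "namespace": namespace,
        "serviceAccount": service_account,
        "roleArn": role_arn,
    }

assoc = pod_identity_association(
    "demo-cluster",
    "payments",
    "payments-sa",
    "arn:aws:iam::111122223333:role/payments-app-role",
)
```

Once such an association exists, pods using the `payments-sa` service account in the `payments` namespace receive credentials for the associated IAM role without any IRSA annotations.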

Info

To learn more about how to use EKS Pod Identity and its associations with Rafay, please read the following blogs: Introducing EKS Pod Identity and EKS Pod Identity with Rafay.

This feature will be supported through all interfaces including UI, RCTL, Terraform, System Sync and Swagger APIs.

Add-on Deployment in Day 0

PIA Day 0

Add-on Deployment in Day 2

PIA Day 2

Pod Identity Associations

PIA

Migration

Migration

Note

  • Pod Identity associations for EKS managed add-ons are not available in this release and will be included in a subsequent release
  • Permissions Required: To use this feature, the following IAM permissions are necessary for the role or user that is part of the cloud credentials:

    "eks:CreatePodIdentityAssociation",
    "eks:DescribePodIdentityAssociation",
    "eks:DeletePodIdentityAssociation",
    "eks:UpdatePodIdentityAssociation",
    "eks:ListPodIdentityAssociations"
    

To migrate from IRSA to Pod Identity associations, an additional tag-related permission, iam:UntagRole, is required along with the above set of permissions.
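Assembled into a standard IAM policy statement, the permissions above (including the extra iam:UntagRole needed for IRSA migration) would look like the following sketch; in practice, scope the Resource field to your clusters and roles rather than using a wildcard:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "eks:CreatePodIdentityAssociation",
        "eks:DescribePodIdentityAssociation",
        "eks:DeletePodIdentityAssociation",
        "eks:UpdatePodIdentityAssociation",
        "eks:ListPodIdentityAssociations",
        "iam:UntagRole"
      ],
      "Resource": "*"
    }
  ]
}
```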

Info

Click here to learn more about Rafay's support for EKS Pod Identity.


Upstream Kubernetes for Bare Metal and VMs

The features in this section are for Rafay's Kubernetes Distribution (aka Rafay MKS).

Ubuntu 24.04 LTS

Support is being added for the Ubuntu 24.04 LTS operating system. This allows users to leverage Ubuntu 24.04-based nodes for their Rafay MKS clusters.

ubuntu24


Cordon/Uncordon/Drain Node Actions

New node actions have been introduced including Cordon, Uncordon, and Drain. These actions enable users to manage nodes more efficiently and will be supported through the UI, RCTL, and Swagger API interfaces. For governance and compliance purposes, an immutable audit log entry will be added to the centralized audit logging system for each of these actions.
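Under the hood, cordoning a node amounts to marking it unschedulable in the Node spec (the same thing `kubectl cordon` does), while draining additionally evicts the pods running on the node. A minimal sketch of the patch body such an action applies to a Node object:

```python
# Cordon/uncordon toggle the node's spec.unschedulable flag; drain additionally
# evicts the pods on the node. This sketch builds the strategic-merge patch
# body that a cordon (or uncordon) action applies to a Kubernetes Node object.
def cordon_patch(unschedulable=True):
    return {"spec": {"unschedulable": unschedulable}}
```

With `unschedulable=True` the scheduler stops placing new pods on the node; `unschedulable=False` (uncordon) reverses it.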

Node Actions

Info

For information on CLI commands, please refer here.

LDAP Support

An enhancement has been added that allows the Conjurer binary to run successfully on a node as an LDAP user and install the minion agent, ensuring smooth execution in LDAP-authenticated environments.

Inter-Pod Communication Check

Enhanced MKS cluster provisioning to include an inter-pod communication check. After each node addition, a pod is deployed on the newly added node to verify seamless pod-to-pod communication between the new node and existing master nodes.
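A pod-to-pod communication check of this kind ultimately reduces to verifying network reachability from a pod on the new node to pods on the existing nodes. A simplified, generic sketch of such a probe (not Rafay's actual implementation):

```python
import socket

# Simplified connectivity probe: returns True if a TCP connection to
# (host, port) succeeds within the timeout. A real inter-pod check would run
# inside a verification pod on the new node and target pod IPs on the
# existing master nodes.
def can_reach(host, port, timeout=2.0):
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```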


Terraform/OpenTofu Provider

In addition to using the existing interfaces (UI, API, CLI and GitOps SystemSync), users can now also use Terraform or OpenTofu to manage the lifecycle (i.e. configure, provision, upgrade, scale, delete) of Rafay MKS based upstream Kubernetes clusters.

Furthermore, support has been added for a data source for upstream Kubernetes clusters (rafay_mks_cluster). This enhancement allows users to query and use specific cluster-related information during the lifecycle management process, offering more flexibility and control in their automation workflows.

Users will be able to leverage this functionality with version 1.37 of Rafay's Terraform Provider, which will be available one week after the Production SaaS rollout.
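As a sketch, querying the new data source might look like the following; the block layout and attribute names here are illustrative assumptions, so consult the provider documentation for the actual schema:

```hcl
# Illustrative only: attribute names are assumptions, not the provider's
# documented schema.
data "rafay_mks_cluster" "example" {
  metadata {
    name    = "mks-demo"
    project = "defaultproject"
  }
}
```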


Environment Manager

Drivers/Workflow Handlers

It is possible today to execute custom workflows by packaging them as a container and/or through a set of HTTP calls. With this enhancement, Drivers/Workflow Handlers can also execute code written in Go or Python.

Drivers/Workflow Handlers can be leveraged at multiple places including as part of:

  • Resource templates through the Custom Provider option
  • Hooks attached to the resource/environment template configuration (e.g. approval is needed in ServiceNow before environment provisioning is initiated)
  • Schedule policy (e.g. capture snapshot of K8s resources every 24 hrs)

The ability to execute custom code written in Go or Python will initially be supported through the RCTL CLI, Terraform, System Sync and API interfaces. Support for the UI interface will be added in a subsequent release. For more information on this feature, please refer here.
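As an illustration, a function-based workflow handler in Python might look like the following sketch; the handler name and signature here are hypothetical, not Rafay's actual handler contract:

```python
# Hypothetical handler contract: the function receives a dict of inputs from
# the resource/environment template and returns a dict of outputs that later
# workflow steps can consume.
def handle(inputs):
    ticket = inputs.get("servicenow_ticket", "")
    # Example gate: only proceed when a ServiceNow change-request ID is present
    approved = ticket.startswith("CHG")
    return {"approved": approved, "ticket": ticket}
```

Such a function could back an approval hook (proceed only when `approved` is true) or a scheduled task.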


Template Designer & Visualizer Studio

A Designer Studio for Environment Manager is being added to the platform with this enhancement. The first version of the studio supports visualizing the relationships between the different objects that constitute an environment template. This makes it easier for Platform teams to debug/verify templates before they are shared with other teams. Upcoming versions will add the ability to create templates from scratch using the studio.


GitOps

UI enhancements: Pipelines and Approvals

A number of UI improvements are being implemented for the Pipelines and Approvals pages. These are intended to make it easy to gain visibility into recent pipeline runs and pending approvals.

Pipelines page:

  • Ability to search by pipeline name
  • Ability to sort by columns
  • Additional columns, "Created At" and "Last Run"

Pipeline

Approvals page:

  • Ability to search by pipeline name
  • Ability to filter by Status (pending or approved)

Approval


System to Git Sync

In scenarios where the Platform team has standardized on GitOps as the interface of choice for the SRE/end user teams (i.e. all actions are driven through spec files in the Git repo), there are challenges around educating SRE/end user teams on the folder structure that must be used for various resources (e.g. clusters).

With this enhancement, the required folder structure (empty folders) is automatically created on the first System to Git sync for all resources that have been selected as part of the System Sync pipeline. This makes it easy for Platform teams to onboard new teams (create a project and a system sync pipeline, then hand off to the SRE/end user teams).
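For example, a first System to Git sync for a pipeline that includes clusters and a few other resource types might create an empty skeleton along these lines; the exact folder names depend on the resources selected in the pipeline, so treat this layout as a hypothetical illustration:

```
<project>/
├── clusters/
├── blueprints/
└── workloads/
```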


Role Based Access

Break Glass Workflows

There are scenarios where users (e.g. developers) may require elevated privileges for a specific period of time; an example is troubleshooting an application running in a production cluster. This new feature allows Platform teams (Org Admins) to:

  • Temporarily assign users to override groups with elevated privileges
  • Integrate with external systems of record such as ServiceNow or Jira to enable workflows where access can be granted upon approval
  • Track temporary access through centralized audit logs, which capture assignment/deletion actions and give Platform teams (Org Admins) full visibility into users with temporary access across the organization
  • Stream the audit logs to the organization's SIEM such as Splunk
  • Export the audit logs as a CSV

Administrators can configure and use this feature through UI, RCTL CLI, Terraform and APIs interfaces.

Shown below is an example of a break glass configuration

Break Glass Access

Shown below is an example of the audit logs for break glass

Break Glass Access

Info

To learn more about the concepts behind break glass, please read our recent blogs: An Introduction to Break Glass Workflows for Developer Access to Kubernetes Clusters and Enhancing Security and Compliance in Break Glass Workflows with Rafay. For more information about this feature, please refer here.


Cost Management

Google Cloud Platform (GCP)

Support is being added to configure Cost Profiles for GCP with this enhancement. This allows customers to leverage the chargeback and cluster/application rightsizing capabilities available today for GKE clusters as well.

GCP

Info

For more information on this feature, please refer here.


Audit Logs

Namespace Operations

Audit logs are being added for namespace creation/deletion operations that are handled implicitly by the controller. An example is an implicit namespace creation as part of an add-on deployment during the blueprint sync process.


User Experience in Rafay Console

Namespace Admin users

A number of improvements are being implemented to improve the UX for namespace admin roles. These include filtering objects in the UI based on the access that the role provides, and the ability to download the kubeconfig for a specific cluster (versus a consolidated kubeconfig).

Page Size Selection

With this enhancement, any changes that the user makes to the 'rows per page' selection will be persisted across pages for that specific browser session.


Bug Fixes

Bug ID Description
RC-30381 Backup/restore jobs are not cleaned up when the cluster is deleted
RC-37499 Upstream k8s: Unable to add worker nodes to existing clusters in certain scenarios
RC-36543 Blueprint Sync operation is not successful when updating the blueprint version to remove an undesired add-on
RC-28677 UI: 404 error when pod metrics are unavailable
RC-33389 Modified time is updated and audit log entries are created on a workload publish action even when there are no changes
RC-36326 Fixed an issue where audit logs were missing for the namespace placement.
RC-32540 Greyed out the validate option for cloud creds for backup and restore and added an information tooltip for clarification.
RC-36566 Env Manager: Resolved a performance issue that caused a ~1 minute delay between activities, the time has now been reduced to approximately 30 seconds
RC-37478 BluePrint Addon: Fixed the issue where addons would get stuck during deployment if it took longer than 65 minutes
RC-38160 UI: Fixed an issue where the arrow mark did not rotate while sorting the GitOps pipeline

v2.10 Update 1 - SaaS

17 Oct, 2024

The section below provides a brief description of the new functionality and enhancements in this release.


Environment Manager

Schedules

There are actions that may need to be executed against environments once or on a recurring basis. Examples of these include:

  • Configuring a Time to Live (TTL) policy to shut down environments after a specified time period
  • Configuring a Schedule Policy to shut down environments when not in use (evenings/weekends)
  • Periodically capturing a snapshot of K8s resources for compliance purposes

Supported actions include:

  • Deploy
  • Destroy (to shut down the environment)
  • Custom Workflows (this can be a series of tasks that can be a container, a set of HTTP calls or functions written in Go or Python)

It is also possible for the Platform team to configure opt-out policies for Schedules. These include:

  • Maximum number of times that an end user can opt-out of a configured schedule policy
  • Maximum duration that the user can opt-out for (e.g. if an end user is opting out of a TTL policy, the Platform team can configure the maximum duration that user can specify when opting out)
  • Attaching an approval workflow for opt-out (e.g. integration with a system like ServiceNow or JIRA for raising/recording approvals)

Schedule policies can be defined as part of the environment template configuration.

This feature is currently supported through RCTL CLI, System Sync and API interfaces. Support for UI and TF provider interfaces will be added in an upcoming release.

For more information, refer to the Schedules feature documentation.
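A TTL policy of the kind described above reduces to computing a destroy time from the environment's creation time. A minimal sketch:

```python
from datetime import datetime, timedelta

# Given an environment's creation time and a TTL in hours, compute when the
# scheduled Destroy action should fire.
def ttl_expiry(created_at, ttl_hours):
    return created_at + timedelta(hours=ttl_hours)

expiry = ttl_expiry(datetime(2024, 10, 17, 9, 0), ttl_hours=8)
# expiry == datetime(2024, 10, 17, 17, 0)
```

An opt-out with a maximum duration would simply extend `created_at`'s effective TTL up to the Platform team's configured cap.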


Amazon EKS

Kubernetes v1.31

New EKS clusters can now be provisioned based on Kubernetes v1.31. Existing clusters managed by the controller can be upgraded "in-place" to Kubernetes v1.31. Read more about this in this blog post.

New Cluster

eks 1.31

In Place Upgrade

eks upgrade


v1.1.36 - Terraform Provider

16 Oct, 2024

An updated version of the Terraform provider is now available. This release adds data source support for the namespace resource.