Important
This page will be periodically updated with features that are scheduled to roll into Rafay's Preview environment as part of upcoming releases. Learn more about Previews. Learn about our recent releases.
Navigate to our public roadmap for details on what we are working on for future releases.
3.7 - SaaS¶
Note
Preview Target: September 10th, 2025
Blogs for New Features in this Release¶
| # | Description |
|---|---|
| 1 | Parallel execution of stages in GitOps pipeline |
Amazon EKS & Azure AKS¶
Kubernetes v1.33 Support¶
Benefit
Stay current with the latest Kubernetes release, unlocking new features and ongoing support.
Support for Kubernetes v1.33 is being added for Amazon EKS and Azure AKS. This includes:
- Provisioning of new clusters with v1.33
- Upgrading existing clusters from earlier versions to v1.33
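As a rough illustration for EKS (the field layout below follows the eksctl-style config format and may not match the Rafay cluster spec exactly; the cluster name and region are placeholders), provisioning on v1.33 comes down to pinning the version in the cluster config:

```yaml
# Illustrative sketch only: pin the control plane version when provisioning
# a new EKS cluster. Name, region, and exact field layout are placeholders.
metadata:
  name: demo-eks-cluster
  region: us-west-2
  version: "1.33"   # provision the cluster on Kubernetes v1.33
```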
AmazonLinux2 Deprecation in EKS 1.33
AWS has deprecated AmazonLinux2 in EKS 1.33. When you create a new cluster with v1.33, the node AMI family defaults to AmazonLinux2023.
For more information, see the AWS documentation on Amazon Linux 2 AMI deprecation.
For existing clusters using AmazonLinux2, complete the following steps before upgrading to EKS 1.33:
- Add a new node group based on AmazonLinux2023 (see the configuration sketch below)
- Test and validate the new node group with your application pods
- Migrate pods from the old node group to the new node group
- Delete the old node group after migration is complete
- Then proceed with the EKS 1.33 upgrade
Important: If any existing EKS cluster with AmazonLinux2 is upgraded to EKS 1.33 directly without adding node groups based on AmazonLinux2023, the node group upgrade will fail with the following error:
internal error: failed to get aws session, InvalidParameterException: AMI Type AL2_x86_64 is only supported for kubernetes versions 1.32 or earlier { RespMetadata: { StatusCode: 400, RequestID: "462022b3-da9b-4d90-a2b9-2d12b9586de3" }, Message_: "AMI Type AL2_x86_64 is only supported for kubernetes versions 1.32 or earlier" }
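To illustrate the first step above, the sketch below runs an AmazonLinux2023 node group alongside the existing AmazonLinux2 one in an eksctl-style config; the node group names, instance types, and counts are placeholders, so verify the exact fields against the Rafay EKS cluster spec documentation.

```yaml
# Illustrative sketch only: run an AmazonLinux2023 node group side by side with
# the existing AmazonLinux2 node group so pods can be migrated before the
# EKS 1.33 upgrade. All names and sizes below are placeholders.
managedNodeGroups:
  - name: ng-al2               # existing node group; delete after migration
    amiFamily: AmazonLinux2
    instanceType: t3.large
    desiredCapacity: 3
  - name: ng-al2023            # new node group to migrate workloads onto
    amiFamily: AmazonLinux2023
    instanceType: t3.large
    desiredCapacity: 3
```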
Upstream Kubernetes for Bare Metal and VMs¶
Enhanced Debug Capabilities¶
Benefit
Direct debug log download functionality has been added for upstream clusters, providing enhanced debugging capabilities. Users can now directly download logs from nodes to troubleshoot issues more effectively.
Previously, users had to SSH to the nodes to view these logs. Now, every node card provides an option to view or download debug logs.
RHEL 10 Support¶
Benefit
Support has been added for Red Hat Enterprise Linux (RHEL) 10 operating system. This allows customers to leverage RHEL 10 based nodes for Rafay MKS clusters.
ARM Support Enhancement¶
Benefit
Complete ARM architecture support has been added for Ubuntu 24.04 LTS and Ubuntu 22.04 LTS on both master and worker nodes. This enables customers to leverage ARM-based infrastructure for their Kubernetes workloads.
Previously, ARM support was limited to worker nodes, and only the minimal blueprint was supported.
Platform Version v1.1.0¶
Benefit
New platform version v1.1.0 provides enhanced cluster management capabilities with integrated core components and automated utilities.
The new platform version v1.1.0 has been added, which includes integration of the Chisel core component and cluster utils version controller component. The cluster utils component includes utilities pushed to the nodes for self-healing, certificate rotation, and monitoring.
This allows changes to the cluster utils to be pushed via platform version upgrades, enabling seamless updates and improved cluster maintenance capabilities.
RCTL Enhancement¶
Force Delete¶
Benefit
A new force delete option has been added to RCTL to allow force deleting the cluster object if a delete fails for any reason.
Usage:
./rctl delete cluster <cluster-name> --force
Add-ons, Workloads & Blueprints¶
Draft Version Support¶
Benefit
Draft versions of add-ons and blueprints can now be created directly through the UI, making iteration easier.
Previously, draft versions were only supported through non-UI interfaces. This capability is now extended to the UI for better usability.
Cluster Overrides¶
Benefit
Improves usability and provides more flexibility for administrators.
Two key UI enhancements:
- Custom Input for Managed Add-ons – Resource selector previously only supported dropdowns for custom add-ons. Now, a free-form Custom Input option is available, enabling overrides for managed add-ons directly from the UI.
- Enhanced Placement UI – Placement configuration has been redesigned for clarity, especially in scenarios where admins create overrides for clusters across projects (e.g., using labels).
Artifact Files¶
Benefit
Improves reliability of add-on deployments by automatically retrying artifact fetches before failing.
When artifact fetches (from Git or Helm repos) fail due to transient network issues, the system now retries automatically. This avoids the need to create new add-on versions solely for re-attempting artifact pulls.
Zero-Trust Access¶
Relay Agent¶
Benefit
Improves security by eliminating the need to deploy privileged containers.
The init container is being removed from the relay-agent pod deployed on clusters, resulting in a more secure and simplified deployment.
GitOps¶
Pipelines¶
Benefit
Pipelines respect stage dependencies while maximizing concurrency, dramatically improving overall efficiency.
Historically, Rafay’s GitOps pipeline executed all stages sequentially, regardless of dependencies. While effective for simpler workflows, this limited performance for complex operations. With this enhancement, the pipeline engine supports Directed Acyclic Graphs (DAGs), enabling stages to execute in parallel where dependencies permit.
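As a rough sketch of the idea (the stage structure and the depends_on field below are illustrative, not the actual pipeline schema), a DAG-style pipeline lets independent stages fan out in parallel and join again once all of their dependencies have finished:

```yaml
# Hypothetical sketch of DAG-style stage ordering; field names are illustrative.
stages:
  - name: build-artifacts
  - name: deploy-staging
    depends_on: [build-artifacts]   # runs after build-artifacts
  - name: deploy-preview
    depends_on: [build-artifacts]   # runs in parallel with deploy-staging
  - name: integration-tests
    depends_on: [deploy-staging, deploy-preview]  # waits for both parallel stages
```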
Environment Manager¶
Reconciliation on Environment Runs¶
Benefit
Improves operational efficiency and reduces deployment times.
Previously, redeploying an environment always redeployed all resources, even if unchanged. With selective resource reconciliation, admins can now specify the reconcile_resources field to redeploy only targeted resources.
Example: In a failover where only DNS updates are required, admins can reconcile the DNS resource without redeploying the entire environment. If no resources are specified, the system redeploys all resources by default.
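A minimal sketch of such a run is shown below; only the reconcile_resources field name comes from this release, while the surrounding structure and resource names are illustrative:

```yaml
# Illustrative sketch: trigger an environment run that reconciles only the DNS
# resource. Everything except reconcile_resources is a hypothetical structure.
kind: EnvironmentRun
metadata:
  name: failover-dns-update
spec:
  action: deploy
  reconcile_resources:
    - dns    # only this resource is redeployed; omit the list to redeploy all
```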
Environment Deletion and Failure States¶
Benefit
Provides clearer lifecycle management and better visibility into environment operations.
Environments now support three actions: Deploy, Destroy, and Delete (which performs a Destroy followed by removal of the environment object).
Enhancements include:
- New distinct status states
- Ability to filter environments by Active, Inactive, Deploy Failed, and Delete Failed in both the Environments list and Dashboard
- For environments in Delete Failed status, admins can review logs, clean up resources, and use a new remove object action to manually delete the environment
Namespace¶
Ephemeral Storage Resource Quota Limits¶
Benefit
Improved resource management and cost control through enforcement of ephemeral storage quotas at the namespace level.
A previous release introduced ephemeral storage limits as namespace quotas via non-UI interfaces. This has now been extended to the UI, making configuration easier.
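For reference, the equivalent construct at the Kubernetes level is a standard ResourceQuota with ephemeral-storage entries; the sketch below uses a placeholder namespace and values:

```yaml
# Standard Kubernetes ResourceQuota limiting ephemeral storage in a namespace.
# Namespace name and quota values are placeholders.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: ephemeral-storage-quota
  namespace: team-a
spec:
  hard:
    requests.ephemeral-storage: 10Gi   # total ephemeral storage requests allowed
    limits.ephemeral-storage: 20Gi     # total ephemeral storage limits allowed
```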
Cost Management¶
Chargeback Reports¶
Benefit
Enables metadata-enriched chargeback reports for better visibility and more accurate cost allocation across tenants.
An earlier release introduced Chargeback summary reports aggregated by namespace. These now support custom label-based metadata enrichment, enabling more precise chargeback reporting for multi-tenant clusters. This capability is also available through the UI.
System Template Catalog Updates¶
AKS System Template¶
Benefit
AKS System Template will be loaded and available to all organizations, enabling users to leverage ready-to-use AKS cluster templates for self-service workflows.
The AKS System Template will be loaded and available to all organizations as part of our catalog. This helps users leverage a tested AKS cluster template to create self-service workflows, streamlining cluster provisioning and ensuring best practices are followed.