
Release Notes

v1.4.3.1

24 Feb, 2021

QCOW Image Update

An updated QCOW image (v1.4) is now available to customers. This is primarily an ongoing security update that incorporates the latest kernel updates and container images and refreshes the OS packages.


v1.4.3

19 Feb, 2021

Amazon EKS

The RCTL CLI-based lifecycle management of Amazon EKS clusters has been enhanced to add support for "Volume Encryption", "GP3" and "Envelope Encryption of Secrets in etcd". All customers are advised to update to the latest version of RCTL. View additional details here.
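
For reference, these options map naturally onto an eksctl-style cluster spec of the kind RCTL consumes. The snippet below is a minimal, hypothetical sketch: the cluster name and KMS key ARN are placeholders, and the exact RCTL spec schema may differ.

```yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: demo-eks                 # hypothetical cluster name
  region: us-west-2
secretsEncryption:
  # envelope encryption of Kubernetes secrets in etcd using a KMS key
  keyARN: arn:aws:kms:us-west-2:111122223333:key/example
nodeGroups:
  - name: ng-1
    instanceType: m5.xlarge
    volumeType: gp3              # GP3 EBS volumes for worker nodes
    volumeEncrypted: true        # encrypt worker node EBS volumes
```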

Kubernetes Patches

Support for the latest security patch updates of upstream k8s: v1.19.7, v1.18.15 and v1.17.17. Customers are advised to upgrade their managed clusters as quickly as possible to ensure they have the latest security-related updates.

k8s Upgrades

Upgrades of managed upstream k8s clusters are performed "in-place" with "zero downtime" and are completed in just a few minutes.


v1.4.2

9 Feb, 2021

Options for Blueprints

The log aggregation addon is no longer mandatory in the default cluster blueprint. Users can optionally deselect this addon from their custom blueprints. This can be useful for deployments where organizations may have standardized on an alternate log aggregation technology.

Optional Log Aggregation Addon

Defaults for OVA based Clusters

Default settings for the OVA based cluster provisioning wizard have been updated to enhance the user experience.

Ongoing Security Updates

Multiple addons in the default cluster blueprint have been updated and hardened to comply with security best practices. These updates are provided on an ongoing basis to customers and are driven by results of daily security/vulnerability scans by the internal security team.

These are automatically used as the baseline for new clusters and blueprint updates to existing clusters. All customers are advised to update their cluster blueprints at the earliest possible opportunity.

Updated Default Blueprint


v1.4.1

27 Jan, 2021

Bug fixes


v1.4.0

18 Dec, 2020

New Features

GitOps Pipelines

Users can implement cloud native, GitOps continuous deployment pipelines directly in the Console. This enables users to manage the lifecycle of their workloads using a developer-centric, declarative model built on tools and infrastructure developers are already familiar with, such as Git.

v1.4 GitOps Pipelines

Amazon EKS Upgrades

For Amazon EKS clusters provisioned by the Controller, administrators can now perform seamless upgrades of the cluster's control plane, worker nodes and critical addons directly from the Console in just a few clicks.

v1.4 EKS Upgrades


Backup and Restore

A fully integrated, turnkey cluster backup and restore capability enables users to centrally configure, automate and operationalize disaster recovery (DR) and/or cluster migration use cases.

Enhancements

Swagger based REST APIs

In addition to the existing REST APIs, authorized users can now also use the newly introduced Swagger based REST APIs that adhere to the OpenAPI specification.

Swagger API

Git and Helm Repos for Workloads

In addition to uploading workload artifacts (Helm charts, k8s yaml) to the controller, workloads can now be configured to integrate and pull the artifacts directly from the user's Git or Helm repository.

Git and Helm based Repos

Drift Detection Control Loop

Workloads can now be configured with a configuration drift detection control loop. This proactively detects and reports when the workload configuration has drifted from the configured workload specification, and can optionally perform automated remediation.

Drift Detection

Dashboard Enhancements

The Kubernetes resources dashboard has been enhanced to display additional resources on the cluster such as Cluster Roles, Role Bindings, Persistent Volumes, Storage Classes, PSPs and Service Accounts. At the namespace level, Jobs, CronJobs and Config Maps details are displayed in addition to existing resources.

k8s Dashboard

Administrators can now select a specific container in a pod and click through to view a detailed container dashboard comprising charts for health, CPU, memory and restart trends.

Container Dashboard

Administrators can now filter and view k8s resources by cluster addon, allowing them to zero in on the status of resources for a specific addon.

Cluster administrators are provided visibility to pre-filtered Kubernetes events based on the context of where they are in the dashboard. For example, they can instantly view the events filtered at the "cluster level" or "by namespace" or "by pod".

k8s Events

MKS (Upstream k8s)

RHEL 7 is now supported for both master and worker nodes for bare metal and VM based environments.

Amazon EKS

Single Click upgrades (see above).

Administrators can now provision EKS Clusters with worker nodes based on Ubuntu 20.04 LTS.

Administrators can specify node labels to be automatically assigned to every node in a node group. This helps eliminate the need to manually add labels to every node in the node group.

EKS Node Labels

Organizations that are required to use tags for all AWS resources created by the Controller can now specify tags (key only or key-value pair) during cluster provisioning as well as when they add node groups.
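
As an illustration, both settings are simple maps in an eksctl-style node group spec; the node group name, labels and tags below are hypothetical.

```yaml
nodeGroups:
  - name: ng-backend             # hypothetical node group
    instanceType: m5.large
    labels:
      tier: backend              # automatically applied to every node in the group
    tags:
      cost-center: "1234"        # key-value AWS resource tag
      audited: ""                # key-only tag
```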

Security

Administrators can now implement an "anti-hammering" policy that automatically locks out a user/account after a customer-configurable number of consecutive failed login attempts.

Audit Logs

Org Admins are now provided with a "projects" filter allowing them to quickly zero in on audit logs for a selected project. In addition to Org Admins, users in a project are also provided with a view of the audit logs for the projects they have access to.

Audit Logs Project Filter

Users of audit logs can now perform free text search, helping reduce operational burden by quickly zeroing in on the audit logs that match the provided search criteria.

Free Text Search of Audit

Notifications

Users can now select specific clusters for which they would like to receive notifications when alerts are generated.

Per Cluster Notifications

Cluster Blueprint Debugging

An integrated debugging facility is now available for administrators to perform their diagnostics directly from the Console where they manage the lifecycle of cluster blueprints.

Debugging of Blueprints

RCTL CLI

Users can create and manage version controlled, declarative specifications for their GitOps pipelines using the RCTL CLI. Administrators can use version controlled, declarative cluster specs to automatically provision and deprovision Amazon EKS, Azure AKS and Google GKE based clusters.
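
A declarative cluster spec is a plain YAML file kept in version control. The sketch below is purely illustrative: the field names are hypothetical, not the authoritative RCTL schema, which is documented with the CLI.

```yaml
# Hypothetical cluster spec; field names are illustrative only.
kind: Cluster
metadata:
  name: demo-gke
  project: defaultproject
spec:
  type: gke
  # provider-specific settings (region, node pools, etc.) would follow
# applied with the RCTL CLI, e.g. something like: rctl apply -f cluster-spec.yaml
```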

Partner/Provider Ops Console

Partners/Providers that use the Kubernetes as a Service (KaaS) Ops Console can now remotely debug/diagnose issues on customer clusters spanning multiple Orgs using the Zero Trust KubeCTL Access (ZTKA) channel.


v1.3.8

14 Nov, 2020

RCTL CLI Enhancements

RCTL has been updated to support automation for new functionality introduced in the 1.3.6 release. Specifically, the following capabilities can now be automated and embedded in a pipeline.

  • Native Helm 3 support for workloads and addons
  • Manage versions for addons and blueprints
  • Manage configuration for the Alert Manager managed addon
  • Set and list PSPs in cluster blueprints

v1.3.7

01 Nov, 2020

White Labeling Enhancements

  • A white-labeled partner's Copyright and Terms of Service URL can now be displayed to the partner's end customers

v1.3.6.1

23 Oct, 2020

Amazon EKS

  • Support for k8s 1.18

RCTL CLI

  • Bug fixes and updates.

v1.3.6

17 Oct, 2020

MKS (Upstream Kubernetes)

  • K8s version updates: Support for k8s v1.18.8, 1.17.11 and 1.16.14
  • Ability to add/remove master nodes from a provisioned cluster, e.g. expanding from a single-master to a multi-master configuration
  • Ability to expand storage in a cluster by adding storage nodes
  • Ability to add/expand storage on an existing storage node
  • Updated self-service, cluster provisioning workflows for on-prem, OVA and QCOW pre-packaged image based clusters

Kubernetes Upgrades

  • Kubernetes fleet upgrade workflows
  • Ability to configure upgrade protection

Cloud Credentials for AWS

  • Ability to edit and update credentials.
  • IAM details associated with a cloud credential are presented partially masked to help admins associate them after initial creation.
  • Ability to use/specify a session token for authentication

Amazon EKS

  • Support for Bottlerocket AMI for worker nodes.
  • Support for k8s 1.17.
  • Support for EKS lifecycle management on AWS Outposts

Namespace Admin

  • Users and Groups in a Project can be locked down to identified namespaces, providing another level of multi-tenancy inside a Project.
  • New namespace scoped roles: Namespace Admin, Namespace Read Only
  • Automated RBAC access for Zero Trust KubeCTL (see details below)

Zero Trust KubeCTL

  • Admins can now implement a break glass process for KubeCTL access, especially in higher-level environments, by selectively enabling/disabling Zero Trust KubeCTL access to production clusters
  • Ability to limit/control access via the "KubeCTL CLI" AND/OR "Browser based KubeCTL" in the Console
  • Admins can require users to successfully authenticate and have an active session with the Console (direct or SSO) before allowing access using the Zero Trust KubeCTL CLI channel. This can be implemented Org Wide with support for user specific overrides.
  • Enhanced KubeCTL Audit Logs now display the "access method": Virtual Terminal in Console or KubeCTL CLI

Helm 3 (Native Helm)

  • The Controller can now behave like a Helm client, providing a native Helm experience for both workloads and blueprints along with all existing integrations and multi-cluster, policy-based deployments
  • Cluster admins can now view k8s resources organized by Helm releases

Improved Addon and Blueprint Lifecycle Management

  • Support for Native Helm (Helm 3 client in Controller)
  • Admins can now view the entire version history of addons and blueprints
  • Enhanced user experience for custom blueprint creation and management

Cluster Label Membership

  • Admins can now perform bulk operations to associate labels to clusters in a project

Customizable System Addons

  • Customers can now customize the "Alert Manager" system addon in the default blueprint, for example to send notifications to a Slack channel (see the sketch below).
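
Under the hood this is standard Alertmanager configuration. A minimal sketch of a Slack receiver follows; the webhook URL and channel are placeholders, and the exact way the override is supplied to the managed addon is product specific.

```yaml
route:
  receiver: slack-notifications
receivers:
  - name: slack-notifications
    slack_configs:
      - api_url: https://hooks.slack.com/services/T000/B000/XXXX  # placeholder webhook
        channel: '#cluster-alerts'
        send_resolved: true    # also notify when the alert clears
```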

User Management

  • Support for a machine user that can access the Controller only using APIs for automation

Single Sign On (SSO)

  • A separate IdP users list is now available, showing all users that have accessed using SSO
  • Ability to set Kubeconfig validity overrides for SSO Users
  • Ability to revoke Kubeconfig for SSO users
  • For IdPs that do not support a metadata URL for automated configuration, admins can now download and upload the IdP metadata file during SSO configuration

k8s Native Security - Pod Security Policy (PSP)

  • Partners can use the Partner Operations Console to centrally create, manage and enforce PSPs and PSP policies on orgs/tenants under management (a minimal PSP example appears after this list).
  • Partners can configure and manage "org" specific PSP overrides
  • Org Admins can create, manage and enforce organization wide PSPs via cluster blueprints
  • Infrastructure admins can select and enforce PSPs for specified namespaces
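
For orientation, here is a minimal PodSecurityPolicy of the kind a blueprint might enforce; the policy name and the specific rule choices are illustrative only.

```yaml
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: restricted               # hypothetical policy name
spec:
  privileged: false              # disallow privileged containers
  allowPrivilegeEscalation: false
  runAsUser:
    rule: MustRunAsNonRoot       # containers must run as a non-root user
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  volumes:                       # restrict pods to a safe set of volume types
    - configMap
    - secret
    - emptyDir
    - persistentVolumeClaim
```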

Partner Ops Console Enhancements

  • Partner Admins now have the ability to view detailed dashboards about an end customer's cluster, node, k8s resources and pods allowing them to support their customers better.

Continuous Workload Placement

  • Workloads configured with "cluster label" or "location" based placement policy will be automatically deployed to newly provisioned clusters that match the placement policy. No administrative intervention is required.

Detect and Report k8s Version

  • The Controller now detects and reports Kubernetes version for all cluster types including imported clusters.

v1.3.5.1

5 Oct, 2020

Amazon EKS

  • Support for k8s 1.17
  • The nodegroup scale count must be specified within the configured min-max range (see the sketch below)
  • For clusters with private (cloaked) control plane, validation of at least 1 healthy nodegroup is performed before nodegroups can be added/deleted
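
In eksctl-style terms, the constraint means the desired node count must sit between the scaling bounds, e.g.:

```yaml
nodeGroups:
  - name: ng-1                   # hypothetical node group
    minSize: 1
    maxSize: 5
    desiredCapacity: 3           # must fall within [minSize, maxSize]
```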

v1.3.5

22 Sep, 2020

RCTL CLI Enhancements

The RCTL CLI Utility has been enhanced to support additional automation options. Customers can now create and embed end-to-end workflows in their automation pipelines. The RCTL binary for macOS is now digitally signed by an Apple issued certificate to verify authenticity.


v1.3.4

20 Sep, 2020

Workload and Addon Enhancements

Workloads and Addons can now be configured and deployed into the kube-system namespace on target clusters.


v1.3.3

30 Aug, 2020

Blueprint Sync Timeout

Blueprint sync timeout windows have been tuned and optimized for low bandwidth, edge type provisioning of clusters.

Pod Waiting Alert

Alerts are now generated when pods are stuck in the "ContainerCreating" state for 5 minutes. This can occur due to several container issues such as CrashLoopBackOff, ErrImagePull, ImagePullBackOff, CreateContainerConfigError, InvalidImageName and CreateContainerError.


v1.3.0

01 Aug, 2020

Multiple k8s Versions

Users can now specify a Kubernetes version (1.16.x, 1.17.x and 1.18.x) during cluster provisioning.

In-place k8s Upgrades

Administrators can now schedule and perform Kubernetes upgrades of provisioned clusters with the click of a button. As we qualify new Kubernetes versions (major and minor), customers will be provided notifications.

Support for Ubuntu 16.04 LTS

In addition to Ubuntu 18.04 LTS and CentOS 7, users can now also use Ubuntu 16.04 LTS for bare metal and VM based provisioning of clusters.

Cluster Labels

You can now create and assign labels to clusters, providing the ability to organize and manage a fleet of clusters effectively and efficiently. Clusters can be sorted/filtered by labels.

Cluster Label based Placement

In addition to "specific cluster" and "specific location" policies for workload placement (deployment), users can now drive placement based on "cluster label" based policies. This allows users to implement custom logic to drive workload deployments across a fleet of clusters.

Node Labels and Taints

Users can now view and set node level labels and taints directly from the Console/Controller.

Zero Trust Kube API Access Proxy

Secure access to a managed cluster's API server via a proxy providing centralized authentication, authorization and auditing. Instant provisioning and de-provisioning of user access.

Monitoring & Alerts

Enhanced monitoring with proactive alerts and notifications for a number of common scenarios related to clusters, nodes, workloads, pods and storage.

Alerts will be automatically opened when a condition is observed and closed when the underlying issue is resolved. Users will have centralized access to all alerts across the fleet of clusters.

Cluster Blueprints and Addons

Users can now view, download and update existing add-ons. Cluster blueprints are now version controlled. Clusters can be sorted/filtered by Blueprints. The blueprint version active on a cluster is clearly presented for each cluster.

Cluster Sharing across Projects

Administrators can now enable sharing of clusters across multiple projects. This enables workloads from different projects to be deployed on a shared fleet of clusters.

AWS Node Termination Handler

The default cluster blueprint for Rafay provisioned Amazon EKS Clusters has been updated to automatically deploy the AWS Node Termination Handler. This is a daemonset that allows the cluster to respond appropriately when unforeseen EC2 maintenance events occur as well as handle Spot interruptions.

Default Private API Server for EKS

By default, provisioned Amazon EKS Clusters are configured as Private ensuring the cluster's control plane is not visible or accessible over the Internet.

Spot Instances on EKS

In addition to On-Demand EC2 instances, the controller can also provision worker nodes using Spot instances, which can provide 70-90% savings over On-Demand prices. The Controller will also automatically deploy AWS's node termination handler to ensure Spot instance interruptions are handled gracefully.
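
As a hypothetical eksctl-style illustration, a Spot node group is typically expressed through an instance distribution such as:

```yaml
nodeGroups:
  - name: ng-spot                # hypothetical Spot node group
    minSize: 0
    maxSize: 6
    instancesDistribution:
      instanceTypes: ["m5.large", "m4.large"]   # multiple pools improve availability
      onDemandBaseCapacity: 0
      onDemandPercentageAboveBaseCapacity: 0    # 100% Spot above the base
      spotInstancePools: 2
```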

Vault for Workload Wizard

The workload wizard has been enhanced to leverage the controller's turnkey integration with HashiCorp's Vault. Workload admins can leverage secure, dynamic retrieval of secrets from their Vault server in just a few clicks.
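
The wizard abstracts the underlying plumbing, but the result is conceptually similar to Vault's Kubernetes agent injector annotations shown below; the Vault role and secret path are hypothetical.

```yaml
# Pod template annotations for Vault's Kubernetes agent injector
metadata:
  annotations:
    vault.hashicorp.com/agent-inject: "true"
    vault.hashicorp.com/role: "demo-app"                                      # hypothetical Vault role
    vault.hashicorp.com/agent-inject-secret-db-creds: "secret/data/demo/db"   # hypothetical secret path
```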

Custom Metrics based HPA

The workload wizard has been enhanced to leverage the controller's turnkey integration with Prometheus resident on the managed clusters. Workload admins can configure and enable the use of custom, application specific metrics for horizontal pod autoscaling (HPA) in just a few clicks.
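
The resulting object is a standard Kubernetes HPA driven by a Pods-type metric. A minimal sketch follows; the workload name, metric name and target value are hypothetical.

```yaml
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: web                      # hypothetical workload
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Pods
      pods:
        metric:
          name: http_requests_per_second   # custom metric scraped by Prometheus
        target:
          type: AverageValue
          averageValue: "100"              # scale to keep ~100 req/s per pod
```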


v1.2.8

Jun 28, 2020

CentOS Support

In addition to Ubuntu 18.04, MKS clusters for Bare Metal and VMs now support CentOS 7. Watch a demo for additional details.

Additional Storage Integrations

In addition to the existing turnkey integration of GlusterFS for distributed storage, MKS for Bare Metal and VMs now provides a turnkey integration with OpenEBS for Local Persistent Volumes.

Note that both storage classes can be active concurrently. Customers can select one of them as the default storage class. Watch a demo for additional details.


v1.2.5

Jun 7, 2020

Amazon EKS Enhancements

For Amazon EKS Clusters provisioned using the controller

  • Support for provisioning EKS clusters based on Kubernetes 1.16
  • Save time and costs by scaling an EKS cluster's worker nodes down to zero for extended periods of time. Scale worker nodes back up with workloads intact in minutes.

RCTL Enhancements

  • Ability to specify Project name in workload meta file

v1.2.0

May 16, 2020

Unified, Integrated Console

The separate App and Ops Consoles have been integrated and unified. Users now have seamless access to all the information at a glance and can perform actions quickly and efficiently. Customers using existing URLs for the App and Ops Console will be automatically redirected to the URL for the integrated console.

Cluster Dashboard

Users now have deep visibility into metrics, health and utilization trends for clusters, nodes, workloads and all Kubernetes resources.

RBAC Enhancements

Role based Access Control (RBAC) has been substantially enhanced, enabling customers to grant fine-grained access to resources on the Controller.

See Projects, SSO for related capabilities.

Projects

Org/Tenant admins can now create and manage dedicated isolation boundaries called Projects which can have dedicated users, groups and resources (clusters, namespaces and integrations). Users assigned to specific projects will only have access/visibility into resources in their projects.

See RBAC Enhancements and SSO for related capabilities.

Updated Rafay CLI (RCTL)

Users need to upgrade to v1.2.x of the Rafay CLI (RCTL) to use new functionality such as Projects, RBAC etc.

Single Sign On (SSO)

Org/Tenant admins can now configure and enable seamless user access to the Rafay Console without the burden of local user lifecycle management by leveraging the controller's turnkey support for SSO with Identity Providers (IdPs) such as Okta, Ping Identity or any SAML 2.0 compliant identity provider.

Secrets Management

A turnkey integration with HashiCorp's Vault allows users to dramatically enhance the security posture of their workloads by dynamically retrieving "secrets" from their centralized Vault secrets server.

Rafay provides intuitive workflows that eliminate the operational complexity, burden and learning curve associated with (a) configuration of clusters to connect to Vault and (b) dynamic retrieval of secrets by workloads deployed to clusters.

Cloud Credential Enhancements

Ability to configure and use an AWS IAM role (cross account) as cloud credentials to provision and manage the lifecycle of Amazon EKS Clusters and auto provisioned MKS clusters on AWS infrastructure. This capability eliminates the lifecycle management burden with IAM user based credentials. It also eliminates potential security concerns with handling secrets and impact due to frequent rotation.

Amazon EKS Enhancements

  • Support for provisioning EKS clusters based on k8s 1.15
  • Ability to use a delegated role for provisioning and lifecycle management (see cloud credentials for details)
  • Ability to update and restart provisioning in case of failures due to misconfiguration or errors encountered during control plane and nodegroup provisioning.

v1.1.0

March 25, 2020

Amazon EKS Enhancements

For Amazon EKS Clusters provisioned using the controller

  • Improved UX for Amazon EKS Provisioning and Lifecycle Management
  • Ability to Add/Remove multiple node groups
  • Scale Up, Scale Down, Drain node groups

Preflight Checks

Comprehensive preflight checks are now available for manual cluster provisioning. This will ensure that the customer provided infrastructure is compatible before cluster provisioning is initiated.

Auto Approval of Nodes

Auto approval of nodes can now be optionally enabled for manually provisioned clusters.

New Managed Ingress

The managed Ingress components have been replaced with a standard Nginx Ingress Controller.

Subscriptions for Notifications

Customers can specify recipients that will be sent notifications when alerts are generated.

Annotations for Integrations

Workloads based on Helm and k8s YAML can now use "Annotations" to leverage integrations for DNS based GSLB, Log Aggregation and Container Registries.
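
The sketch below shows the general shape of such annotations on a workload manifest. The keys are invented for illustration only and are not the product's actual annotation names; consult the product documentation for the real keys.

```yaml
# Illustrative only: the annotation keys below are hypothetical.
metadata:
  annotations:
    example.integration/log-aggregation: "enabled"
    example.integration/gslb-fqdn: "app.example.com"
    example.integration/registry: "private-registry"
```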

Multiple Interfaces

During manual provisioning, all available interfaces are auto detected and presented. Users can select the correct/preferred interface during cluster configuration and provisioning.

Customize Cluster Blueprint Addons

Helm chart based addons can now be customized using a separate values.yaml file.
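
The values.yaml follows normal Helm conventions, overriding the chart's defaults. A minimal illustrative example (all values hypothetical):

```yaml
# values.yaml supplied alongside a chart-based addon
replicaCount: 2
image:
  tag: "1.2.3"
resources:
  limits:
    cpu: 200m
    memory: 256Mi
```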


February 04, 2020

Type: Minor

Outbound Port Requirements

The k8s operator and all associated components deployed to newly provisioned/imported Kubernetes clusters only require OUTBOUND port 443 (https) connectivity to the Controller. Existing clusters will continue using the older list of outbound ports.

Multiple IP Addresses

Multiple IP addresses are now supported for manually provisioned clusters. Users can select the interface they would like to use for intra-cluster traffic.


January 15, 2020

Type: Minor

RCTL CLI Enhancements

The RCTL CLI has been enhanced with new functionality specifically for Helm and k8s yaml workloads. In addition, several bug fixes have also been incorporated. All customers are advised to upgrade to this version of the CLI.


January 9, 2020

Type: Major

Amazon EKS Lifecycle Management

Customers can now use the controller to fully manage the lifecycle (Configure, Provision, Operate and Delete) of their Amazon EKS Clusters in all supported regions.


November 27, 2019

Type: Major

Imported Kubernetes Clusters

Customers can now import and manage "existing" Kubernetes clusters. These can be clusters from managed k8s providers (e.g. EKS, GKE, AKS) or DIY Kubernetes clusters.

Helm and YAML Workloads

In addition to wizard based workloads, customers can now also bring their Helm charts and native Kubernetes YAML files as workloads.

Multi Cluster Namespace Management

Users can now manage the lifecycle of namespaces and their resource quotas. The controller will automatically create and delete namespaces on all managed clusters where the workload needs to be deployed.
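
Resource quotas correspond to standard Kubernetes ResourceQuota objects created in each managed namespace. A minimal sketch, with hypothetical names and limits:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-quota               # hypothetical quota name
  namespace: team-a              # hypothetical managed namespace
spec:
  hard:
    requests.cpu: "4"
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi
    pods: "30"
```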

New CLI

A new and updated CLI with additional functionality is available. All customers are required to upgrade. The previous version of the CLI is now deprecated.


November 12, 2019

Type: Minor

Manual Cluster Expansion Optimizations

Customers can now use the Console (via the GUI and APIs) to add/remove worker nodes with the click of a button. This streamlined workflow is supported for both manually provisioned clusters as well as auto provisioned clusters on AWS.

Alerting Framework

An alerting framework has been introduced. In this release, the controller will automatically and proactively generate an email based alert when there is a failure with application/workload deployments. Future releases will leverage the alerting framework for additional scenarios.


October 31, 2019

Type: Minor

Custom Registry Integrations

Users can now directly configure credentials for access to any Docker Compatible Registry that requires authentication and use it seamlessly within their workloads. The controller will securely store the image pull credentials, perform image/tag validation using the credentials and also automatically inject/deprovision the image pull credentials to the configured clusters.

With this release, the controller has been tested with "Docker Hub (Public)", "Docker Hub (Private)", "AWS ECR", "GCP GCR", "Quay by RedHat", "Nexus by Sonatype", "JFrog Artifactory", "Microsoft MCR", "System Container Registry" and "Any Docker Compatible Registry".
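
Mechanically, injecting image pull credentials amounts to creating a dockerconfigjson Secret and referencing it from pods; the controller automates this, but a minimal sketch (with hypothetical names and a placeholder payload) looks like:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: registry-creds           # hypothetical secret name
type: kubernetes.io/dockerconfigjson
data:
  .dockerconfigjson: eyJhdXRocyI6e319   # base64-encoded Docker config (placeholder)
---
# pods (or their service account) then reference it:
# spec:
#   imagePullSecrets:
#     - name: registry-creds
```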

Elasticsearch for Log Aggregation

In addition to AWS S3, users can now configure Elasticsearch as a log aggregation endpoint.

App Console Debug Enhancements

For workloads that are configured to use the managed Layer 7 Ingress (API Gateway), users now have deep visibility into the status and logs of the Rafay Managed API Gateway pods in their namespace.


October 15, 2019

Type: Major

ECR and GCR Integrations

Users can now directly configure credentials for access to ECR and GCR in Rafay and use it within their workloads. Rafay will securely store the image pull credentials, perform image/tag validation using the credentials and also automatically inject/deprovision the image pull credentials to the configured clusters.

Cluster Health Enhancements

The Rafay Ops Console now provides deep real time visibility into the current state of the nodes, pods and namespaces on managed clusters. Operational personnel will have access to information identical to what they would see if they were using "kubectl" to interact with the cluster.

Enhanced Debug for Workloads

The Rafay Application Console now provides real time visibility into the current state of their application's pods across all clusters where their application is deployed.

Developers will have access to information identical to what they would see if they were using "kubectl", without having to deal with the learning curve of kubectl or the security ramifications of opening up configs and RBAC on clusters.

Docker Commands and Arguments

Users can now specify custom arguments and commands that will allow them to customize the behavior of their container at runtime.

Stateful Sets

The guided workload configuration workflow now supports Stateful Sets. This can be used for stateful workloads such as databases.
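
A StatefulSet gives each replica a stable identity and its own PersistentVolumeClaim. A minimal sketch, with hypothetical names, image and sizes:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db                       # hypothetical stateful workload
spec:
  serviceName: db                # headless Service providing stable network identity
  replicas: 3
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
        - name: db
          image: postgres:12
  volumeClaimTemplates:          # one PVC per replica, retained across restarts
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi
```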


August 27, 2019

Type: Major

New URLs

Both the Application Console and the Ops Console (https://ops.rafay.dev/) are now accessible via easy to remember URLs. Users can continue using the older URLs until they are disabled.

Cluster Heartbeat

Managed clusters now maintain a heartbeat with the Rafay Controller providing a near real time view into health, status and availability of the clusters.

Cluster Health

The near real time cluster heartbeat is leveraged to determine the health of the cluster and is presented in both the Ops Console and in the Placement screen of the Application Console.

Cluster Last Checkin

The Operations Console now provides a view into when the cluster last checked in with the Rafay Controller.

Audit Trail

All actions performed by authorized users on the Rafay Platform are audited. A reverse chronological audit trail is available via the Application and Ops Console.

Cluster Auto Provisioning on Google Cloud (GCP)

A highly automated, low touch experience is now available for users that wish to provision Rafay managed clusters in Google Cloud Platform (GCP). Rafay takes care of programmatically creating and configuring the necessary infrastructure on GCP before deploying the required software components.

Cluster Reachability Monitoring

Customers can opt in for continuous cluster reachability monitoring for their Internet facing clusters. The clusters are continuously probed every 60 seconds. If a cluster becomes unreachable, the DNS entries for applications operating on the cluster are automatically updated. This ensures that users can be automatically steered to the nearest cluster, eliminating application availability issues.

AWS Auto Provisioning Enhancements

Auto provisioning support for the newly announced AWS Region (Bahrain).

Dynamic Volume Provisioning for AWS Clusters

Storage volumes for containers on AWS based clusters are now dynamically provisioned as Elastic Block Storage (EBS) volumes.

Docker Hub Container Registry Integration

Container images on Docker Hub Registry (public and private repos) can now be configured directly in the Application Console. The images will be pulled directly from Docker Hub to the managed clusters.

Runtime Config Sync via AWS S3

Customers can point their runtime configuration to a private AWS S3 bucket for runtime data sync updates.

Rafay CLI Support for Canary and Test Upgrades

Canary and Test upgrades can now be performed using the Rafay CLI enabling end to end deployment automation.

Detailed Workload Summary

A detailed workload summary is presented to the user on the App Console, providing a holistic view into the selected configurations and options.


July 15, 2019

Patch Release (Build Number p0619-203)

Rafay System Domain and Certificates

Developers and QA teams no longer have to deal with the operational burden and complexity associated with DNS and certificates for their pre-production workloads.

The Rafay System domain and certificates can now be used for workloads on private clusters.

Enhancements to the Rafay CLI

A number of optimizations and enhancements have been made to the Rafay CLI, making it easier for customers to embed the CLI into their scripted workflows. All customers are advised to upgrade to the latest version.


June 20, 2019

Type: Major

Support for Non HTTPS Application Workloads

Application workloads deployed on private clusters can now be configured to accept/handle non-http(s) ingress traffic i.e. TCP and UDP.


Admin Selection of Canary Cluster

Workload admins can now specify a "canary" cluster for multi cluster, rolling upgrades. By default, the platform will pick a random cluster as a "canary" cluster to attempt the upgrade first before upgrading the rest of the clusters.

This allows application owners to pick a canary cluster that meets the risk profile they find acceptable for the application upgrade. For example, admins can select a cluster that has "low usage".


Test Upgrade Workflows

Application owners can now perform "test upgrades" on a selected canary cluster. The workflow will HOLD the process regardless of the outcome of the upgrade.

In the case of an unsuccessful upgrade, the developer may wish to perform a live diagnosis. In the case of a successful upgrade, the application admin may wish to evaluate the non-functional aspects of new code (performance, stability etc) before deciding to upgrade the remaining clusters.


Auto Provision Cluster on Amazon Web Services (AWS)

A highly automated, low touch experience is now available for users that wish to provision Rafay managed clusters in AWS. Rafay takes care of programmatically creating and configuring the necessary infrastructure on AWS before deploying the required software components.


One Click Setup for Rafay CLI

Developers and Application admins can generate and download a CLI configuration by just clicking a button and downloading the config file.


MFA Support for Application and Ops Console

Support for TOTP based MFA (e.g. Google Authenticator) for secure browser based access to the Application and Ops Console.


Utilization Trends

Infrastructure admins now have visibility into long term utilization trends of critical attributes (Utilization and Saturation trends for CPU, Memory and Disk) of managed clusters and nodes for capacity planning and forecasting decisions.


Download Workload Configuration

Developers and Application admins can download an existing workload's configuration (YAML) file directly from the Application Console.


SSO between Application and Ops Console

Authorized users can seamlessly switch between the Application and Ops Console without having to login again.


Custom Container Sizing Option

Application Admins can now specify custom container sizes for their applications.


Inline Documentation

Product documentation is now available inline right from the Application and Ops Console.


May 2019

Type: Major

Global Key-Value (K-V) Store

Distributed applications may require local access to data to be functional. With Rafay's Global K-V data sync service, applications will have access to a “low latency K-V data store” anywhere in the world.

Developers will need to integrate it into their application (using a lightweight SDK) to be able to use it.


YAML Format Support For Rafay Workload Configuration

In addition to the JSON format, users can now describe their Rafay workload/application configuration in YAML format and utilize it via the Rafay CLI.


Single Node (Non HA) Cluster

Rafay now supports a single node cluster form factor.

In addition to dev/qa type deployments, this can be used for production deployments to tier-2 locations, enabling greater in-country/in-region coverage for the application. For example, a customer can deploy a single node system in Perth, Brisbane and Melbourne backed by an HA cluster in Sydney, Australia to provide comprehensive in-region coverage for users in Australia.


Custom Namespace Sizing

Customers using private clusters can use the Ops Console to dynamically update the default resource allocation for a namespace. This will allow end users to operate containers that are not of the “standard size” we support out of the box.


Support for White Labeling for Partners

A streamlined process is now available to white label the Rafay platform for “Provider Partners”. The transition to the white labeled experience can be performed anytime in the partner lifecycle.

The partner’s customers will see a Partner Branded experience when they login into the “Application or Ops Console”.


Offnet Support for Partners

Rafay's Provider Partners upon request can be configured to leverage Rafay's global network footprint for their customer workloads.