2020

v1.4.0

18 Dec, 2020

New Features

GitOps Pipelines

Users can implement cloud native, GitOps continuous deployment pipelines directly in the Console. This enables users to manage the lifecycle of their workloads with a developer-centric, declarative model built on tools and infrastructure developers are already familiar with, such as Git.

v1.4 GitOps Pipelines

Amazon EKS Upgrades

For Amazon EKS clusters provisioned by the Controller, administrators can now perform seamless upgrades of the cluster's control plane, worker nodes and critical addons directly from the Console in just a few clicks.

v1.4 EKS Upgrades

Backup and Restore

A fully integrated, turnkey cluster backup and restore capability that enables users to centrally configure, automate and operationalize disaster recovery (DR) and/or cluster migration use cases.

Enhancements

Swagger based REST APIs

In addition to the existing REST APIs, authorized users can now also use the newly introduced Swagger based REST APIs that adhere to the OpenAPI specification.

Swagger API
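
For automation, a call to one of these OpenAPI-described endpoints could look like the minimal Python sketch below. The base URL, resource path and header name are illustrative placeholders, not documented Controller endpoints; consult the Swagger specification exposed by your Controller for the real schema.

```python
# Minimal sketch of invoking an OpenAPI-described REST endpoint.
# CONSOLE, the "/v2/projects" path, and the X-API-Key header are
# hypothetical placeholders; the real paths come from the Swagger spec.
import requests

CONSOLE = "https://console.example.com"
API_KEY = "your-api-key"

resp = requests.get(
    f"{CONSOLE}/v2/projects",
    headers={"X-API-Key": API_KEY},
    timeout=30,
)
resp.raise_for_status()
for project in resp.json().get("items", []):
    print(project.get("name"))
```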

Git and Helm Repos for Workloads

In addition to uploading workload artifacts (Helm charts, k8s yaml) to the controller, workloads can now be configured to integrate and pull the artifacts directly from the user's Git or Helm repository.

Git and Helm based Repos

Drift Detection Control Loop

Workloads can now be configured with a configuration drift detection control loop. This proactively detects and reports drift, and can optionally perform automated remediation, when the workload configuration has drifted from the configured workload specification.

Drift Detection
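
Conceptually, such a control loop compares live state against the declared specification on an interval, reports any delta, and optionally restores the spec. A minimal sketch, with hypothetical callables standing in for the Controller's internal fetch/report/remediate machinery:

```python
# Illustrative drift-detection control loop. fetch_live, report and
# remediate are hypothetical callables, not Controller APIs.
import time

def drift_loop(desired: dict, fetch_live, report, remediate,
               auto_remediate: bool = False, interval_sec: int = 60):
    while True:
        live = fetch_live()
        # Any key whose live value differs from the declared value has drifted
        drifted = {k: {"want": v, "have": live.get(k)}
                   for k, v in desired.items() if live.get(k) != v}
        if drifted:
            report(drifted)              # drift is always detected and reported
            if auto_remediate:
                remediate(desired)       # remediation is opt-in
        time.sleep(interval_sec)
```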

Dashboard Enhancements

The Kubernetes resources dashboard has been enhanced to display additional resources on the cluster such as Cluster Roles, Role Bindings, Persistent Volumes, Storage Classes, PSPs and Service Accounts. At the namespace level, Jobs, CronJobs and Config Maps details are displayed in addition to existing resources.

k8s Dashboard

Administrators can now select a specific container in a pod and click through to view a detailed container dashboard with charts for health, CPU, memory and restart trends.

Container Dashboard

Administrators can now filter and view k8s resources by cluster addon, allowing them to zero in on the status of resources for a specific addon.

Cluster administrators are provided visibility to pre-filtered Kubernetes events based on the context of where they are in the dashboard. For example, they can instantly view the events filtered at the "cluster level" or "by namespace" or "by pod".

k8s Events

MKS (Upstream k8s)

RHEL 7 is now supported for both master and worker nodes for bare metal and VM based environments.

Amazon EKS

Single Click upgrades (see above).

Administrators can now provision EKS Clusters with worker nodes based on Ubuntu 20.04 LTS.

Administrators can specify node labels to be automatically assigned to every node in a node group. This helps eliminate the need to manually add labels to every node in the node group.

EKS Node Labels

Organizations that are required to use tags for all AWS resources created by the Controller can now specify tags (key only or key-value pair) during cluster provisioning as well as when they add node groups.

Security

Administrators can now implement an "anti hammering" policy that automatically locks out the user/account after a customer-configurable number of consecutive failed login attempts.
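
The policy boils down to a per-account counter that trips a lock at a configurable threshold. A toy sketch (in-memory only; the class name and unlock flow are illustrative, not the Controller's implementation):

```python
# Sketch of an "anti hammering" lockout policy: lock an account after N
# consecutive failed logins. max_attempts models the customer-configurable
# threshold; persistence and the unlock workflow are omitted.
from collections import defaultdict

class LockoutPolicy:
    def __init__(self, max_attempts: int = 5):
        self.max_attempts = max_attempts
        self.failures = defaultdict(int)
        self.locked = set()

    def record_login(self, user: str, success: bool) -> bool:
        """Returns True if the account is (now) locked."""
        if user in self.locked:
            return True
        if success:
            self.failures[user] = 0          # a success resets the counter
            return False
        self.failures[user] += 1
        if self.failures[user] >= self.max_attempts:
            self.locked.add(user)
        return user in self.locked
```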

Audit Logs

Org Admins are now provided with a filter for "projects" allowing them to quickly zero in on audit logs for a selected project. In addition to Org Admins, users in a project are also provided with a view to the audit logs relevant to the projects they have access to.

Audit Logs Project Filter

Users of audit logs are now provided with the ability to perform free text search to help reduce the operational burden. They can now quickly zero in on the audit logs that match the provided search criteria.

Free Text Search of Audit
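
Together, the project filter and free text search amount to a two-stage narrowing of the log stream. A toy illustration with hypothetical record fields:

```python
# Toy illustration of the two audit-log refinements described above: scope
# to a project, then apply a free-text match. Record fields are hypothetical.
def search_audit_logs(entries, project=None, query=None):
    for e in entries:
        if project and e.get("project") != project:
            continue
        if query and query.lower() not in str(e).lower():
            continue
        yield e

logs = [
    {"project": "payments", "actor": "alice", "action": "kubectl delete pod"},
    {"project": "web", "actor": "bob", "action": "workload publish"},
]
print(list(search_audit_logs(logs, project="payments", query="delete")))
```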

Notifications

Users can now select specific clusters for which they would like to receive notifications when alerts are generated.

Per Cluster Notifications

Cluster Blueprint Debugging

An integrated debugging facility is now available for administrators to perform their diagnostics directly from the Console where they manage the lifecycle of cluster blueprints.

Debugging of Blueprints

RCTL CLI

Users can create and manage version controlled, declarative specifications for their GitOps pipelines using the RCTL CLI. Administrators can use version controlled, declarative cluster specs to automatically provision and deprovision Amazon EKS, Azure AKS and Google GKE based clusters.
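
As an illustration of the declarative model (not the actual RCTL spec schema, which is documented separately), a version-controlled cluster spec and an idempotent apply might be sketched as:

```python
# Hypothetical cluster spec and idempotent apply. The field names and the
# endpoint are illustrative only; the real spec format is defined by RCTL.
import requests

cluster_spec = {
    "kind": "Cluster",
    "metadata": {"name": "prod-eks-1", "project": "default"},
    "spec": {"type": "eks", "region": "us-west-2", "k8sVersion": "1.18"},
}

def apply_spec(spec: dict, console: str, api_key: str) -> None:
    # PUT gives create-or-update semantics: re-applying the same spec is a no-op
    resp = requests.put(
        f"{console}/v2/clusters/{spec['metadata']['name']}",  # hypothetical path
        headers={"X-API-Key": api_key},
        json=spec,
        timeout=60,
    )
    resp.raise_for_status()
```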

Partner/Provider Ops Console

Partners/Providers that use the Kubernetes as a Service (KaaS) Ops Console can now remotely debug/diagnose issues on customer clusters spanning multiple Orgs using the Zero Trust KubeCTL Access (ZTKA) channel.

Bug Fixes

Bug ID Description
RC-9150 Node is not able to check in to Controller when allowing only outbound 443
RC-9139 Error message when creating namespace starting with number like "3dsoftware-abc"
RC-8925 Console should only enable the KUBECTL button when the cluster is ready
RC-8869 Cluster resources numbers flicker when reloading the cluster overview page
RC-8619 Add new master nodes to the existing cluster did not enable the PSP admission controller in the newly added nodes
RC-8468 Hyperlink in OTP screen takes user to the empty cluster page with user and org information
RC-8467 The Company icon title on the OTP screen is wrong; hovering over this icon displays "jumbo" as the text
RC-8466 Company logo from login screen is not redirecting the user to the correct page
RC-8408 Relay and Prometheus pods are in running state but not operational after cluster reboot
RC-8348 Absolute Server Paths appearing in the DevTools Console & generating warnings
RC-7972 Session data residue in the browser cache after logging out
RC-7703 Controller PenTest Scan - Sensitive Data Disclosure - Audit Logs

v1.3.8

14 Nov, 2020

RCTL CLI Enhancements

RCTL has been updated to support automation for new functionality introduced in the 1.3.6 release. Specifically, the following capabilities can now be automated and embedded in a pipeline.

  • Native Helm 3 support for workloads and addons
  • Manage versions for addons and blueprints
  • Manage configuration for the Alert Manager managed addon
  • Set and list PSPs in cluster blueprints

v1.3.7

01 Nov, 2020

White Labeling Enhancements

  • White labeled partner's Copyright and Terms of Service URL can now be displayed to partner's end customers

v1.3.6.1

23 Oct, 2020

Amazon EKS

  • Support for k8s 1.18

RCTL CLI

  • Bug fixes and updates.

v1.3.6

17 Oct, 2020

MKS (Upstream Kubernetes)

  • K8s version updates: Support for k8s v1.18.8, 1.17.11 and 1.16.14
  • Ability to add/remove master nodes from a provisioned cluster e.g. single master to multi-master configuration
  • Ability to expand storage in a cluster by adding storage nodes
  • Ability to add/expand storage on an existing storage node
  • Updated self-service, cluster provisioning workflows for on-prem, OVA and QCOW pre-packaged image based clusters

Kubernetes Upgrades

  • Kubernetes fleet upgrade workflows
  • Ability to configure upgrade protection

Cloud Credentials for AWS

  • Ability to edit and update credentials.
  • IAM details associated with a cloud credential are presented partially masked so that admins can still identify them after initial creation.
  • Ability to use/specify a session token for authentication

Amazon EKS

  • Support for Bottlerocket AMI for worker nodes.
  • Support for k8s 1.17.
  • Support for EKS lifecycle management on AWS Outposts

Namespace Admin

  • Users and Groups in a Project can be locked down to identified namespaces, providing another level of multi-tenancy inside a Project.
  • New namespace scoped roles: Namespace Admin, Namespace Read Only
  • Automated RBAC access for Zero Trust KubeCTL (see details below)

Zero Trust KubeCTL

  • Admins can now implement a break-glass process for KubeCTL access, especially in higher-level environments, by selectively enabling/disabling Zero Trust KubeCTL access to production clusters
  • Ability to limit/control access via "KubeCTL CLI" AND/OR "Browser based KubeCTL" via Console
  • Admins can require users to successfully authenticate and have an active session with the Console (direct or SSO) before allowing access using the Zero Trust KubeCTL CLI channel. This can be implemented org-wide with support for user-specific overrides.
  • Enhanced KubeCTL Audit Logs now display the "access method": Virtual Terminal in Console or KubeCTL CLI

Helm 3 (Native Helm)

  • The Controller can now act as a Helm client, with all existing integrations and multi-cluster, policy-based deployments, providing a native Helm experience for both workloads and blueprints
  • Cluster admins can now view k8s resources organized by Helm releases

Improved Addon and Blueprint Lifecycle Management

  • Support for Native Helm (Helm 3 client in Controller)
  • Admins can now view entire history (versions) of addons and blueprints
  • Enhanced user experience for custom blueprint creation and management

Cluster Label Membership

  • Admins can now perform bulk operations to associate labels to clusters in a project

Customizable System Addons

  • Customers can now customize the "Alert Manager" system addon in the default blueprint, for example to send notifications to a Slack channel.

User Management

  • Support for a machine user that can access the Controller only using APIs for automation

Single Sign On (SSO)

  • A separate IdP users list is now available, showing all users that have accessed the system using SSO
  • Ability to set Kubeconfig validity overrides for SSO Users
  • Ability to revoke Kubeconfig for SSO users
  • For IdPs that do not support a metadata URL for automated configuration, admins can now download and upload the IdP metadata file during SSO configuration

k8s Native Security - Pod Security Policy (PSP)

  • Partners can use the Partner Operations Console to centrally create, manage and enforce PSPs and PSP policies on orgs/tenants under management.
  • Partners can configure and manage "org" specific PSP overrides
  • Org Admins can create, manage and enforce organization wide PSPs via cluster blueprints
  • Infrastructure admins can select and enforce PSPs for specified namespaces

Partner Ops Console Enhancements

  • Partner Admins now have the ability to view detailed dashboards of an end customer's clusters, nodes, k8s resources and pods, allowing them to support their customers better.

Continuous Workload Placement

  • Workloads configured with "cluster label" or "location" based placement policy will be automatically deployed to newly provisioned clusters that match the placement policy. No administrative intervention is required.
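
The placement decision reduces to a label-subset match evaluated whenever a cluster joins. A minimal sketch (function names are illustrative, not Controller internals):

```python
# Sketch of label-based placement matching: a workload's placement policy is
# a set of required cluster labels; any cluster (including one that has just
# finished provisioning) carrying all of them receives the workload.
def matches(policy_labels: dict, cluster_labels: dict) -> bool:
    return all(cluster_labels.get(k) == v for k, v in policy_labels.items())

def on_cluster_provisioned(cluster: dict, workloads: list, deploy) -> None:
    # Called when a new cluster checks in; no administrator action needed.
    for wl in workloads:
        if matches(wl["placement"], cluster["labels"]):
            deploy(wl, cluster)
```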

Detect and Report k8s Version

  • The Controller now detects and reports Kubernetes version for all cluster types including imported clusters.

Bug Fixes

Bug ID Description
RC-8410 The Web Console UI does not honor browser BACK button
RC-8370 When there are more than 10 groups, the project/role assignment does not display properly
RC-8347 Workload Debug screen sometimes shows empty cluster selection and no pod displayed with spinning "connecting"
RC-8265 UX: Inviting User already in Another Org
RC-8229 Show resource name in kubectl audit log when using "kubectl delete namespace"
RC-8194 Browser freezes when adding a registry with a long registry endpoint
RC-8172 Remove the mandatory "50GB" size for the raw, unformatted disk to use for glusterfs and use "1GB" instead
RC-8128 Provide "DELETE" button for nodes of an on-prem cluster in all cases
RC-8127 Should provide option to delete node with storage role with warning on data migration before deleting the node
RC-8010 Kubectl access failed with connection error if there are special characters in the username
RC-7953 UI should validate invalid characters when creating Addon name instead of throwing error when trying to publish addon
RC-7873 Add pre-test for duplicated hostname of nodes during MKS HA provisioning
RC-7793 Workload publish failed when using ECR registry annotation in the workload with a long name due to "failed to create cronjob"
RC-7780 Not showing nginx-ingress-controller in Default Blueprint service list
RC-7741 Node discovery failed when deleting a cluster during metadata fetch of the node
RC-7726 SSL Medium Strength Cipher Suites Supported (SWEET32)
RC-7701 Even though HSTS is properly configured, the controller does not redirect HTTP traffic to HTTPS, which prevents the session cookie from ever being set
RC-7698 Latest Jenkins Helm Workload status is "Published" even though no pods/deployments are created
RC-7697 Loading the cluster resources screen takes a very long time, and some of the requests time out with a 504 error
RC-7586 core-dns pods are in CrashLoopBackOff state after rebooting the Rafay VM
RC-7554 qcow node stuck at "Approving" state for a long time
RC-7551 Passphrase and credentials files are empty for the newly created cluster
RC-7453 Unpublishing a cronjob workload does not delete the job and pod

v1.3.5.1

5 Oct, 2020

Amazon EKS

  • Support for k8s 1.17
  • The nodegroup scale count needs to be specified in the configured min-max range
  • For clusters with private (cloaked) control plane, validation of at least 1 healthy nodegroup is performed before nodegroups can be added/deleted

v1.3.5

22 Sep, 2020

RCTL CLI Enhancements

The RCTL CLI Utility has been enhanced to support additional automation options. Customers can now create and embed end-to-end workflows in their automation pipelines. The RCTL binary for macOS is now digitally signed by an Apple issued certificate to verify authenticity.


v1.3.4

20 Sep, 2020

Workload and Addon Enhancements

Workloads and Addons can now be configured and deployed into the kube-system namespace on target clusters.


v1.3.3

30 Aug, 2020

Blueprint Sync Timeout

Blueprint sync timeout windows have been tuned and optimized for low bandwidth, edge type provisioning of clusters.

Pod Waiting Alert

Alerts are now generated when pods are stuck in a waiting state for 5 minutes. This can occur due to several container issues such as ContainerCreating, CrashLoopBackOff, ErrImagePull, ImagePullBackOff, CreateContainerConfigError, InvalidImageName and CreateContainerError.
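
An equivalent check can be expressed with the official Kubernetes Python client, flagging containers sitting in one of those waiting reasons. This is an illustrative approximation (it uses pod age as a stand-in for time spent waiting), not the Controller's implementation:

```python
# Flag pods whose containers are stuck in a waiting state past a threshold.
from datetime import datetime, timedelta, timezone
from kubernetes import client, config

WAITING_REASONS = {"ContainerCreating", "CrashLoopBackOff", "ErrImagePull",
                   "ImagePullBackOff", "CreateContainerConfigError",
                   "InvalidImageName", "CreateContainerError"}

config.load_kube_config()
v1 = client.CoreV1Api()
cutoff = datetime.now(timezone.utc) - timedelta(minutes=5)

for pod in v1.list_pod_for_all_namespaces().items:
    for cs in (pod.status.container_statuses or []):
        waiting = cs.state.waiting if cs.state else None
        # Pod age approximates "time stuck waiting" for this sketch
        if (waiting and waiting.reason in WAITING_REASONS
                and pod.metadata.creation_timestamp < cutoff):
            print(f"ALERT {pod.metadata.namespace}/{pod.metadata.name}: "
                  f"container {cs.name} waiting ({waiting.reason})")
```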

Bug Fixes

Bug ID Description
RC-7684 SSO based Org Admin user not able to generate APIKey for user and delete it
RC-7647 Edgesrv connection pool timeout resolution
RC-7260 For Non-HA, non-AWS, on-prem cluster, although node is shutdown, cluster status shows Healthy

v1.3.0

01 Aug, 2020

Multiple k8s Versions

Users can now specify a Kubernetes version (1.16.x, 1.17.x and 1.18.x) during cluster provisioning.

In-place k8s Upgrades

Administrators can now schedule and perform Kubernetes upgrades of provisioned clusters with the click of a button. As we qualify new Kubernetes versions (major and minor), customers will be provided notifications.

Support for Ubuntu 16.04 LTS

In addition to Ubuntu 18.04 LTS and CentOS 7, users can now also use Ubuntu 16.04 LTS for bare metal and VM based provisioning of clusters.

Cluster Labels

You can now create and assign labels to clusters, providing the ability to organize and manage a fleet of clusters effectively and efficiently. Clusters can be sorted/filtered by labels.

Cluster Label based Placement

In addition to "specific cluster" and "specific location" policies for workload placement (deployment), users can now drive placement based on "cluster label" based policies. This allows users to implement custom logic to drive workload deployments across a fleet of clusters.

Node Labels and Taints

Users can now view and set node level labels and taints directly from the Console/Controller.

Zero Trust Kube API Access Proxy

Secure access to a managed cluster's API server via a proxy providing centralized authentication, authorization and auditing. Instant provisioning and de-provisioning of user access.

Monitoring & Alerts

Enhanced monitoring with proactive alerts and notifications for a number of common scenarios related to clusters, nodes, workloads, pods and storage.

Alerts will be automatically opened when a condition is observed and closed when the underlying issue is resolved. Users will have centralized access to all alerts across the fleet of clusters.

Cluster Blueprints and Addons

Users can now view, download and update existing add-ons. Cluster blueprints are now version controlled. Clusters can be sorted/filtered by Blueprints. The blueprint version active on a cluster is clearly presented for each cluster.

Cluster Sharing across Projects

Administrators can now enable sharing of clusters across multiple projects. This enables workloads from different projects to be deployed on a shared fleet of clusters.

AWS Node Termination Handler

The default cluster blueprint for provisioned Amazon EKS Clusters has been updated to automatically deploy the AWS Node Termination Handler. This is a daemonset that allows the cluster to respond appropriately when unforeseen EC2 maintenance events occur, as well as handle Spot interruptions.

Default Private API Server for EKS

By default, provisioned Amazon EKS Clusters are configured as private, ensuring the cluster's control plane is not visible or accessible over the Internet.

Spot Instances on EKS

In addition to On-Demand EC2 instances, the controller can also provision worker nodes using Spot instances, which can provide 70-90% savings over On-Demand prices. The Controller will also automatically deploy AWS's node termination handler to ensure Spot instance interruptions are handled gracefully.

Vault for Workload Wizard

The workload wizard has been enhanced to leverage the controller's turnkey integration with HashiCorp's Vault. Workload admins can enable secure, dynamic retrieval of secrets from their Vault server in just a few clicks.
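
What "dynamic retrieval" amounts to can be seen with HashiCorp's hvac client: the secret is read from Vault at deploy time rather than baked into the workload spec. The wizard automates this wiring; the address, token and path below are placeholders:

```python
# Read a secret from a Vault KV v2 mount with hvac. The URL, token and
# path are placeholders; the workload wizard handles this for you.
import hvac

client = hvac.Client(url="https://vault.example.com", token="s.xxxxxxxx")
resp = client.secrets.kv.v2.read_secret_version(path="myapp/db")
db_password = resp["data"]["data"]["password"]  # KV v2 nests payload under data.data
```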

Custom Metrics based HPA

The workload wizard has been enhanced to leverage the controller's turnkey integration with Prometheus resident on the managed clusters. Workload admins can configure and enable the use of custom, application-specific metrics for horizontal pod autoscaling (HPA) in just a few clicks.

Bug Fixes

Bug ID Description
RC-6738 Blank page appears when republishing a workload after a failed workload publish
RC-7247 Doc URL in the Node Installation Instructions is not working
RC-7085 Unpublished add-ons should not be listed in the blueprint > addon dropdown list
RC-7005 Clusters list API is not working with API key authentication if the user is present in multiple Organizations
RC-6894 Tooltips for the white labelled partner should be displayed as the partner name instead of Rafay
RC-6876 RCTL workload update command doesn't work if workload name is given as an argument
RC-6834 UI shows "invalid domain name" for IdP Configuration > Domain even though valid domain is set
RC-6737 Cannot edit the docker hub registry configuration after failed validation
RC-6281 Missing CPU Units on Cluster Dashboard
RC-6279 Node View still shows incorrect number of Cores (i.e. data is millicores, but it is shown as Cores)
RC-6056 fav_icon whitelabeling for partner does not work in ops-console page
RC-6033 Make Ingress host name validation error message similar to K8s DNS hostname validation error message
RC-6019 System does not retry to add the DNS record when DNS creation fails during workload publish
RC-5877 Error reason is not shown in the UI when the AWS EC2 cluster creation fails due to insufficient role permissions
RC-5862 Addon in a blueprint shows empty, as addons are allowed to be deleted irrespective of being associated with the blueprint

v1.2.8

Jun 28, 2020

CentOS Support

In addition to Ubuntu 18.04, MKS clusters for Bare Metal and VMs now support CentOS 7. Watch a demo for additional details.

Additional Storage Integrations

In addition to the existing turnkey integration of GlusterFS for distributed storage, MKS for Bare Metal and VMs now provides a turnkey integration with OpenEBS for Local Persistent Volumes.

Note that both storage classes can be active concurrently. Customers can select one of them as the default storage class. Watch a demo for additional details.

Bug Fixes

Bug ID Description
RC-7021 [On-prem cluster]:1.2.x:UI: When Storage class GlusterFS is deselected, default local storage is still shown as GlusterFS
RC-7000 EKS cluster provisioning fails when creating in an existing subnet in the us-east-1 region
RC-6927 No memory and cpu metrics in Resources dashboard for Pods
RC-6914 User is not getting deleted upon deletion of the Organization
RC-6913 Custom location name doesn't take underscore; POST of /edge/v1/metros returns 400 if it has an underscore

v1.2.5

Jun 7, 2020

Amazon EKS Enhancements

For Amazon EKS Clusters provisioned using the controller:

  • Support for provisioning EKS clusters based on Kubernetes 1.16
  • Save time and costs by scaling an EKS cluster's worker nodes down to zero for extended periods of time. Scale worker nodes back up with workloads intact in minutes.

RCTL Enhancements

  • Ability to specify Project name in workload meta file

v1.2.0

May 16, 2020

Unified, Integrated Console

The separate App and Ops Consoles have been integrated and unified. Users now have seamless access to all the information at a glance and can perform actions quickly and efficiently. Customers using existing URLs for the App and Ops Consoles will be automatically redirected to the URL for the integrated console.

Cluster Dashboard

Users now have deep visibility into metrics, health and utilization trends for clusters, nodes, workloads and all Kubernetes resources.

RBAC Enhancements

Role based Access Control (RBAC) has been substantially enhanced, allowing customers to grant fine-grained access to resources on the Controller.

See Projects, SSO for related capabilities.

Projects

Org/Tenant admins can now create and manage dedicated isolation boundaries called Projects which can have dedicated users, groups and resources (clusters, namespaces and integrations). Users assigned to specific projects will only have access/visibility into resources in their projects.

See RBAC Enhancements and SSO for related capabilities.

Updated CLI (RCTL)

Users need to upgrade to v1.2.x of the RCTL CLI to use the new functionality such as Projects, RBAC etc.

Single Sign On (SSO)

Org/Tenant admins can now configure and enable seamless user access to the Web Console without the burden of local user lifecycle management by leveraging the controller's turnkey support for SSO with Identity Providers (IdPs) such as Okta, Ping Identity or any SAML 2.0 compliant identity provider.

Secrets Management

A turnkey integration with HashiCorp's Vault that allows users to dramatically enhance the security posture for their workloads by dynamically retrieving "secrets" from their centralized Vault secrets server.

The controller provides intuitive workflows that eliminate the operational complexity, burden and learning curve associated with (a) configuring clusters to connect to Vault and (b) dynamic retrieval of secrets by workloads deployed to clusters.

Cloud Credential Enhancements

Ability to configure and use an AWS IAM role (cross account) as cloud credentials to provision and manage the lifecycle of Amazon EKS Clusters and auto provisioned MKS clusters on AWS infrastructure. This capability eliminates the lifecycle management burden with IAM user based credentials. It also eliminates potential security concerns with handling secrets and the impact of frequent rotation.

Amazon EKS Enhancements

  • Support for provisioning EKS clusters based on k8s 1.15
  • Ability to use a delegated role for provisioning and lifecycle management (see cloud credentials for details)
  • Ability to update and restart provisioning in case of failures due to misconfiguration or errors encountered during control plane and nodegroup provisioning.

Bug Fixes

Bug ID Description
RC-6281 Missing CPU Units on Cluster Dashboard
RC-6279 Node View shows incorrect number of Cores (i.e. data is millicores, but it is shown as Cores)
RC-6187 UI should remove "Optionally" for version name in Blueprint and Addon publish pages as version is required for publishing
RC-6057 Page titles still reference Rafay Systems instead of the partner names
RC-6056 fav_icon whitelabeling for partner does not work in ops-console page
RC-6029 Workload name conflict with addon name makes workload publish stuck in publishing state
RC-6015 Error message when creating custom container registry should be improved to point out that only lowercase and numbers are allowed in the name
RC-6004 Conjurer run fails at "Unable to locate package chisel" when provisioning on-prem cluster
RC-5877 Cluster provisioning failure error does not bubble up in UI even though API has this in comment for AWS auto provisioned clusters
RC-5862 Addon in a blueprint shows empty, as addons are allowed to be deleted irrespective of being associated with the blueprint

v1.1.0

March 25, 2020

Amazon EKS Enhancements

For Amazon EKS Clusters provisioned using the controller:

  • Improved UX for Amazon EKS Provisioning and Lifecycle Management
  • Ability to Add/Remove multiple node groups
  • Scale Up, Scale Down, Drain node groups

Preflight Checks

Comprehensive preflight checks are now available for manual cluster provisioning. This will ensure that the customer provided infrastructure is compatible before cluster provisioning is initiated.

Auto Approval of Nodes

Auto approval of nodes can now be optionally enabled for manually provisioned clusters.

New Managed Ingress

The managed Ingress components have been replaced with a standard Nginx Ingress Controller.

Subscriptions for Notifications

Customers can specify recipients that will be sent notifications when alerts are generated.

Annotations for Integrations

Workloads based on Helm and k8s YAML can now use "Annotations" to leverage integrations for DNS based GSLB, Log Aggregation and Container Registries.

Multiple Interfaces

During manual provisioning, all available interfaces are auto detected and presented. Users can select the correct/preferred interface during cluster configuration and provisioning.

Customize Cluster Blueprint Addons

Helm chart based addons can now be customized using a separate values.yaml file.


February 04, 2020

Type: Minor

Outbound Port Requirements

The k8s operator and all associated components deployed to newly provisioned/imported Kubernetes clusters only require OUTBOUND port 443 (https) connectivity to the Controller. Existing clusters will continue using the older list of outbound ports.

Multiple IP Addresses

Multiple IP addresses are now supported for manually provisioned clusters. Users can select the interface they would like to use for intra-cluster traffic.


January 15, 2020

Type: Minor

RCTL CLI Enhancements

The RCTL CLI has been enhanced with new functionality specifically for Helm and k8s yaml workloads. In addition, several bug fixes have also been incorporated. All customers are encouraged to upgrade to this version of the CLI.


January 9, 2020

Type: Major

Amazon EKS Lifecycle Management

Customers can now use the controller to fully manage the lifecycle (Configure, Provision, Operate and Delete) of their Amazon EKS Clusters in all supported regions.