November 27, 2019¶
Imported Kubernetes Clusters¶
Customers can now import and manage existing Kubernetes clusters. These can be clusters from managed Kubernetes providers (EKS, GKE, AKS, etc.) or DIY Kubernetes clusters.
Helm and YAML Workloads¶
In addition to the wizard-based workloads, customers can now also bring their own Helm charts and native Kubernetes YAML files as workloads.
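For example, a plain Kubernetes YAML file such as the following minimal Deployment could be brought in as a workload (names, image and replica count are illustrative, not taken from the product):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-web            # hypothetical workload name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: hello-web
  template:
    metadata:
      labels:
        app: hello-web
    spec:
      containers:
        - name: web
          image: nginx:1.17  # any image reachable from the managed clusters
          ports:
            - containerPort: 80
```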
Multi Cluster Namespace Management¶
Users can now manage the lifecycle of namespaces and their resource quotas. The controller will automatically create and delete namespaces on all managed clusters where the workload needs to be deployed.
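Under the hood, this maps to standard Kubernetes objects: a Namespace paired with a ResourceQuota. A sketch of what the controller would create on each cluster (names and limits are illustrative):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: team-a               # hypothetical namespace
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-a-quota
  namespace: team-a
spec:
  hard:
    requests.cpu: "4"        # total CPU requests allowed in the namespace
    requests.memory: 8Gi
    limits.cpu: "8"          # total CPU limits allowed in the namespace
    limits.memory: 16Gi
```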
An updated CLI with additional functionality is available. All customers are required to upgrade; the previous version of the CLI is now deprecated.
November 12, 2019¶
Manual Cluster Expansion Optimizations¶
Customers can now use the Console (via the GUI and APIs) to add/remove worker nodes with the click of a button. This streamlined workflow is supported for both manually provisioned clusters as well as auto provisioned clusters on AWS.
An alerting framework has been introduced. In this release, the controller will automatically and proactively generate an email based alert when there is a failure with application/workload deployments. Future releases will leverage the alerting framework for additional scenarios.
October 31, 2019¶
Custom Registry Integrations¶
Users can now directly configure credentials for access to any Docker Compatible Registry that requires authentication and use it seamlessly within their workloads. The controller will securely store the image pull credentials, perform image/tag validation using them, and automatically inject the credentials into (and deprovision them from) the configured clusters.
With this release, the controller has been tested with "Docker Hub (Public)", "Docker Hub (Private)", "AWS ECR", "GCP GCR", "Quay by RedHat", "Nexus by Sonatype", "JFrog Artifactory", "Microsoft MCR", "System Container Registry" and "Any Docker Compatible Registry".
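The injected image pull credentials correspond to a standard Kubernetes docker-registry Secret referenced from the pod spec via `imagePullSecrets`. A sketch of the underlying objects (secret name, registry and image are illustrative placeholders):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: regcred                  # hypothetical pull-secret name
  namespace: team-a
type: kubernetes.io/dockerconfigjson
data:
  .dockerconfigjson: <base64-encoded Docker config>   # placeholder, not a real value
---
apiVersion: v1
kind: Pod
metadata:
  name: private-image-pod
  namespace: team-a
spec:
  imagePullSecrets:
    - name: regcred              # pods reference the injected secret by name
  containers:
    - name: app
      image: registry.example.com/team/app:1.0        # hypothetical private image
```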
Elastic Search for Log Aggregation¶
In addition to AWS S3, users can now configure Elastic Search as a log aggregation endpoint.
App Console Debug Enhancements¶
For workloads that are configured to use the managed Layer 7 Ingress (API Gateway), users now have deep visibility into the status and logs of the Managed API Gateway pods in their namespace.
October 15, 2019¶
ECR and GCR Integrations¶
Users can now directly configure credentials for access to ECR and GCR in the controller and use them within their workloads. The controller will securely store the image pull credentials, perform image/tag validation using them, and automatically inject the credentials into (and deprovision them from) the configured clusters.
Cluster Health Enhancements¶
The Web Console now provides deep, real-time visibility into the current state of the nodes, pods and namespaces on managed clusters. Operational personnel will have access to information identical to what they would see if they were using "kubectl" to interact with the cluster.
Enhanced Debug for Workloads¶
The Web Console now provides real-time visibility into the current state of an application's pods across all clusters where the application is deployed.
Developers will have access to information identical to what they would see if they were using "kubectl", without having to deal with the learning curve of kubectl or the security ramifications of opening up kubeconfigs and RBAC on clusters.
Docker Commands and Arguments¶
Users can now specify custom arguments and commands that will allow them to customize the behavior of their container at runtime.
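In Kubernetes terms, these map to the container's `command` and `args` fields, which override the image's ENTRYPOINT and CMD respectively. A minimal illustration (image and commands are examples only):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: custom-entrypoint
spec:
  containers:
    - name: app
      image: busybox:1.31
      command: ["/bin/sh", "-c"]           # overrides the image ENTRYPOINT
      args: ["echo hello && sleep 3600"]   # overrides the image CMD
```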
The guided workload configuration workflow now supports Stateful Sets. This can be used for stateful workloads such as databases.
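A StatefulSet generated for such a workload would resemble the following sketch; names, image and volume sizes are illustrative, not the product's actual output:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db
spec:
  serviceName: db              # headless service providing stable network identities
  replicas: 3
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
        - name: db
          image: postgres:11   # hypothetical stateful workload
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:        # each replica gets its own persistent volume
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi
```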
August 27, 2019¶
Both the Application Console and Ops Console are now accessible via easy to remember URLs. Users can continue using the older URLs until they are disabled.
Managed clusters now maintain a heartbeat with the Controller providing a near real time view into health, status and availability of the clusters.
The near real time cluster heartbeat is leveraged to determine the health of the cluster and is presented in both the Ops Console and in the Placement screen of the Application Console.
Cluster Last Checkin¶
The Operations Console now provides a view into when the cluster last checked in with the Controller.
All actions performed by authorized users on the Platform are audited. A reverse chronological audit trail is available via the Application and Ops Console.
Cluster Auto Provisioning on Google Cloud (GCP)¶
A highly automated, low touch experience is now available for users that wish to provision managed clusters in Google Cloud Platform (GCP). The controller takes care of programmatically creating and configuring the necessary infrastructure on GCP before deploying the necessary software components.
Cluster Reachability Monitoring¶
Customers can opt in to continuous cluster reachability monitoring for their Internet facing clusters. The clusters are probed every 60 seconds. If a cluster becomes unreachable, the DNS entries for applications operating on it are automatically updated. This ensures that users are automatically steered to the nearest healthy cluster, eliminating application availability issues.
AWS Auto Provisioning Enhancements¶
Auto provisioning support for the newly announced AWS Region in Bahrain.
Dynamic Volume Provisioning for AWS Clusters¶
Storage volumes for containers on AWS based clusters are now dynamically provisioned as Elastic Block Storage (EBS) volumes.
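Dynamic provisioning of EBS volumes in Kubernetes is typically driven by a StorageClass backed by the AWS EBS provisioner, with workloads requesting storage through a PersistentVolumeClaim. A sketch of the underlying objects (names, volume type and size are illustrative):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ebs-gp2
provisioner: kubernetes.io/aws-ebs   # in-tree AWS EBS provisioner
parameters:
  type: gp2                          # general purpose SSD
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data
spec:
  storageClassName: ebs-gp2
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 20Gi                  # an EBS volume of this size is created on demand
```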
Docker Hub Container Registry Integration¶
Container images on Docker Hub Registry (public and private repos) can now be configured directly in the Application Console. The images will be pulled directly from Docker Hub to the managed clusters.
Runtime Config Sync via AWS S3¶
Customers can configure runtime configuration to point to their private AWS S3 bucket for runtime data sync updates.
CLI Support for Canary and Test Upgrades¶
Canary and Test upgrades can now be performed using the RCTL CLI enabling end to end deployment automation.
Detailed Workload Summary¶
Detailed workload summary is presented to the user on the App Console providing a holistic view into the selected configurations and options.
July 15, 2019¶
Patch Release (Build Number p0619-203)
System Domain and Certificates¶
Developers and QA teams no longer have to deal with the operational burden and complexity associated with DNS and certificates for their pre-production workloads.
The System domain and certificates can now be used for workloads on private clusters.
Enhancements to the RCTL CLI¶
A number of optimizations and enhancements have been made to the CLI making it easier for customers to embed the CLI into their scripted workflows. All customers are recommended to upgrade to the latest version.
June 20, 2019¶
Support for Non HTTPS Application Workloads¶
Application workloads deployed on private clusters can now be configured to accept and handle non-HTTP(S) ingress traffic, i.e. TCP and UDP.
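At the Kubernetes level, non-HTTP(S) traffic is exposed through Service ports with explicit protocols. An illustrative Service carrying both TCP and UDP traffic (names and port numbers are examples only):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: game-server            # hypothetical non-HTTP workload
spec:
  selector:
    app: game-server
  ports:
    - name: control
      protocol: TCP            # raw TCP, not terminated by the L7 ingress
      port: 9000
      targetPort: 9000
    - name: telemetry
      protocol: UDP
      port: 9001
      targetPort: 9001
```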
Admin Selection of Canary Cluster¶
Workload admins can now specify a "canary" cluster for multi cluster, rolling upgrades. By default, the platform will pick a random cluster as the "canary" to attempt the upgrade first before upgrading the rest of the clusters.
This allows application owners to pick a canary cluster that meets the risk profile they find acceptable for the application upgrade. For example, admins can select a cluster that has "low usage".
Test Upgrade Workflows¶
Application owners can now perform "test upgrades" on a selected canary cluster. The workflow will HOLD the process regardless of the outcome of the upgrade.
In the case of an unsuccessful upgrade, the developer may wish to perform a live diagnosis. In the case of a successful upgrade, the application admin may wish to evaluate the non-functional aspects of the new code (performance, stability, etc.) before deciding to upgrade the remaining clusters.
Auto Provision Cluster on Amazon Web Services (AWS)¶
A highly automated, low touch experience is now available for users that wish to provision managed clusters in AWS. The controller takes care of programmatically creating and configuring the necessary infrastructure on AWS before deploying the necessary software components.
One Click Setup for RCTL CLI¶
Developers and Application admins can generate and download a CLI configuration file with a single click.
MFA Support for Application and Ops Console¶
Support for TOTP based MFA (e.g. Google Authenticator) for secure browser based access to the Application and Ops Console.
Cluster Utilization & Saturation Trends¶
Infrastructure admins now have visibility into long term utilization trends of critical attributes (Utilization and Saturation trends for CPU, Memory and Disk) of managed clusters and nodes for capacity planning and forecasting decisions.
Download Workload Configuration¶
Developers and Application admins can download an existing workload's configuration (YAML) file directly from the Application Console.
SSO between Application and Ops Console¶
Authorized users can seamlessly switch between the Application and Ops Console without having to login again.
Custom Container Sizing Option¶
Application Admins can now specify custom container sizes for their applications.
Product documentation is now available inline right from the Application and Ops Console.
Global Key-Value (K-V) Store¶
Distributed applications may require local access to data to be functional. With the Global K-V data sync service, applications have access to a low latency K-V data store anywhere in the world.
Developers integrate a lightweight SDK into their application to use it.
YAML Format Support For Workload Configuration¶
In addition to the JSON format, users can now describe their workload/application configuration in YAML format and utilize it via the CLI.
Single Node (Non HA) Cluster¶
The controller now supports a single node cluster form factor.
In addition to dev/qa type deployments, this can be used for production deployments to tier-2 locations, enabling greater in-country/in-region coverage for the application. For example, a customer can deploy single node systems in Perth, Brisbane and Melbourne backed by an HA cluster in Sydney, Australia to provide comprehensive in-region coverage for users in Australia.
Custom Namespace Sizing¶
Customers using private clusters can use the Ops Console to dynamically update the default resource allocation for a namespace. This will allow end users to operate containers that are not of the “standard size” we support out of the box.
Support for White Labeling for Partners¶
A streamlined process to white label the controller for “Provider Partners”. The transition to the white labeled experience can be performed at any time in the partner lifecycle.
The partner’s customers will see a partner-branded experience when they log in to the “Application or Ops Console”.
Offnet Support for Partners¶
Upon request, Provider Partners can be configured to leverage the global network footprint for their customers' workloads.