v1.1.34 - Terraform Provider

13 August, 2024

An updated version of the Terraform provider is now available.

This release includes enhancements to the resources listed below:

Existing Resources

  • rafay_eks_cluster: Added Day 2 support for secret encryption in the EKS cluster resource.
  • rafay_environment: Added support for environment variable and file overrides in this resource.
  • rafay_resource_template: Added HCP Terraform provider support, including HCP Terraform-specific options, in this resource.

In certain scenarios, a diff is shown even when there are no changes. A caching mechanism has been implemented for the Terraform schema to alleviate this issue for the following resources:

  • rafay_eks_cluster
  • rafay_aks_cluster

v1.1.34 Terraform Provider Bug Fixes

Bug ID     Description
RC-36073   Incorrect kubeconfig returned when downloading from Terraform
RC-35832   Unable to fetch a V3 workload created via Terraform code in the Rafay UI during pipeline creation for workload deployment
RC-32044   maxPodsPerNode field set to 0 in the cluster config of an EKS managed node group created using Terraform
RC-35649   Unable to add timeouts in resource templates using Terraform
RC-35400   Terraform plan/apply detecting a redundant diff in the resource template's provider block

v2.8 - SaaS

09 August, 2024

Note

With this release, Environment Manager and Fleet Plan capabilities are available to all Orgs and are no longer feature flagged.


Amazon EKS

Enabling secret encryption on an existing cluster

Previously, secret encryption with a KMS key could only be configured during cluster creation (Day 0) using the 'save and customize' option. Support has now been added to enable secret encryption on existing clusters (Day 2), along with the option to configure it during cluster creation (Day 0) using the cluster form.

For more information on this feature, please refer here.

Day 0 Configuration

For new clusters, you can find this configuration in the cluster settings. If valid cloud credentials are provided, you will see the option to enable secret encryption during Day 0 configuration.

Existing EKS Cluster

Day 2 Configuration

For existing clusters, navigate to the cluster view, then go to Configuration, where you will find the option to enable Secret Encryption.

Existing EKS Cluster

RCTL Cluster Configuration with Secret Encryption

kind: Cluster
metadata:
  name: cluster-config
  project: defaultproject
spec:
  blueprint: minimal
  blueprintversion: 2.7.0
  cloudprovider: eks-cloud
  cniprovider: aws-cni
  proxyconfig: {}
  type: eks
---
addons:
- name: coredns
  version: v1.10.1-eksbuild.4
- name: vpc-cni
  version: v1.15.1-eksbuild.1
- name: kube-proxy
  version: v1.28.2-eksbuild.2
- name: aws-ebs-csi-driver
  version: latest
apiVersion: rafay.io/v1alpha5
kind: ClusterConfig
managedNodeGroups:
- amiFamily: AmazonLinux2
  desiredCapacity: 2
  iam:
    withAddonPolicies:
      autoScaler: true
  instanceTypes:
  - t3.xlarge
  maxSize: 2
  minSize: 0
  name: ng-b476082a
  version: "1.28"
  volumeSize: 80
  volumeType: gp3
metadata:
  name: cluster-config
  region: us-west-2
  tags:
    email: demo@rafay.co
    env: dev
  version: "1.28"
secretsEncryption:
  keyARN: arn:aws:kms:us-west-2:xxxxxxxxxxxxx:key/xxxxxxxxxxxxxxx
vpc:
  cidr: 192.168.0.0/16
  clusterEndpoints:
    privateAccess: true
    publicAccess: false
  nat:
    gateway: Single

IAM Permissions Required

When enabling secret encryption with a KMS key on EKS clusters, ensure that the IAM roles or users performing these actions have the following permissions assigned:

  • For key listing: kms:ListKeys
  • For secret encryption: kms:DescribeKey, kms:CreateGrant
  • For enabling encryption on an existing EKS cluster: eks:AssociateEncryptionConfig

These permissions are necessary for listing KMS keys, managing encryption grants, and associating an encryption configuration with an existing cluster that does not already have encryption enabled.
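
As a sketch, these permissions could be granted with an IAM policy along the following lines (shown in the YAML form accepted by CloudFormation templates; the broad Resource is for illustration only and should be scoped to specific key ARNs in practice):

# Illustrative IAM policy sketch; scope Resource to specific key ARNs in practice
Version: "2012-10-17"
Statement:
  - Sid: SecretEncryptionPermissions
    Effect: Allow
    Action:
      - kms:ListKeys
      - kms:DescribeKey
      - kms:CreateGrant
      - eks:AssociateEncryptionConfig
    Resource: "*"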

Irreversible Action: Secrets Encryption

Once enabled, secrets encryption cannot be disabled. This action is irreversible.

Amazon Linux 2023 Support

This release introduces support for AmazonLinux2023 in node groups. Users can now leverage the benefits of AL2023 by creating node groups that use this Amazon Linux version.

Node Group Configuration with AL2023

AL2023 Node Setting

Example: Creating an EKS Node Group with an AL2023-based Custom AMI

managedNodeGroups:
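# Replace <custom_ami> with the ID of an AL2023-based custom AMI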
- ami: <custom_ami>
  desiredCapacity: 1
  iam:
    withAddonPolicies:
      autoScaler: true
  instanceTypes:
  - t3.xlarge
  maxSize: 1
  minSize: 1
  name: al2023_customami
  overrideBootstrapCommand: |
    [AmazonLinux2023]
  ssh:
    allow: true
    publicKeyName: <awskey>
  volumeSize: 80
  volumeType: gp3

Amazon EKS Cluster with AL2023 node

AL2023 Node


GKE Clusters

Network Policy and Dataplane V2 for GKE Clusters

This enhancement provides users with advanced networking capabilities, ensuring improved security and performance for their applications. Dataplane V2 leverages the power of eBPF, providing enhanced observability, scalability, and resilience, enabling seamless traffic management across clusters. Additionally, Network Policy support allows fine-grained control over network traffic, ensuring that only authorized communications occur between services.

For more information on this feature, please refer here.

GKE Dataplane
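
With Network Policy enforcement enabled, standard Kubernetes NetworkPolicy objects apply to workloads on the cluster. As a minimal sketch (the namespace and labels are hypothetical), the following policy allows backend pods to receive traffic only from frontend pods on port 8080:

# Hypothetical example: allow ingress to backend pods only from frontend pods
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend
  namespace: demo
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend
    ports:
    - protocol: TCP
      port: 8080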


Upstream Kubernetes for Bare Metal and VMs

GitOps SystemSync with Write Back to Git

With this feature, users will be able to configure the platform to continuously sync cluster specifications for upstream Kubernetes clusters with a Git repository. Changes can be made in a bidirectional manner.

For more information on this, please refer here.

  • If the cluster specification is updated in the Git repository, the platform will update the corresponding upstream Kubernetes cluster to bring it to the desired state
  • If the upstream Kubernetes cluster's state is modified by an authorized user using the UI or CLI, the changes are automatically written back to the configured Git repository
  • To enable GitOps SystemSync for Upstream Kubernetes (MKS) on Bare Metal and VMs, users must create cloud credentials. These credentials are necessary for the GitOps Agent to facilitate GitOps synchronization.

MKS Git Sync

GitOps Agent Update Required

To use GitOps for upstream cluster types, you must update the GitOps agent to version r2.8.0+.

Cloud Credential Support for Upstream Kubernetes Clusters (MKS)

This release introduces support for managing cloud credentials for upstream Kubernetes clusters (MKS) within the platform. These credentials are essential for enabling GitOps SystemSync functionality and have been integrated into the UI, RCTL, and SystemSync interfaces.

Cloud creds

SystemComponentsPlacement Support

SystemComponentsPlacement allows configuring the scheduling of system components on dedicated nodes. This release introduces support for systemComponentsPlacement as part of the new extended configuration schema. This functionality is currently supported in RCTL, V3 APIs, and GitOps SystemSync.

For more information on this, please refer here.

Using systemComponentsPlacement in RCTL

To utilize systemComponentsPlacement in RCTL, you need to pass the --v3 flag when applying your cluster configuration. Here's an example:

./rctl apply -f <cluster configuration> --v3

apiVersion: infra.k8smgmt.io/v3
kind: Cluster
metadata:
  name: demo-mks
  project: defaultproject
spec:
  blueprint:
    name: minimal
    version: latest
  cloudCredentials: demo-mks-creds
  config:
    autoApproveNodes: true
    dedicatedControlPlane: true
    highAvailability: true
    kubernetesVersion: v1.28.9
    location: sanjose-us
    network:
      cni:
        name: Calico
        version: 3.26.1
      podSubnet: 10.244.0.0/16
      serviceSubnet: 10.96.0.0/12
    nodes:
    - arch: amd64
      hostname: demo-mks-scale-w-tb-2
      operatingSystem: Ubuntu20.04
      privateIP: 10.12.105.227
      roles:
      - Worker
    - arch: amd64
      hostname: demo-mks-scale-w-tb-7
      labels:
        app: infra
      operatingSystem: Ubuntu20.04
      privateIP: 10.12.110.50
      roles:
      - Worker
      taints:
      - effect: NoSchedule
        key: app
        value: infra
    - arch: amd64
      hostname: demo-mks-scale-w-tb-3
      operatingSystem: Ubuntu20.04
      privateIP: 10.12.29.164
      roles:
      - Worker
    - arch: amd64
      hostname: demo-mks-scale-w-tb-6
      labels:
        app: infra
      operatingSystem: Ubuntu20.04
      privateIP: 10.12.101.223
      roles:
      - Worker
      taints:
      - effect: NoSchedule
        key: app
        value: infra
    - arch: amd64
      hostname: demo-mks-scale-c-tb-3
      operatingSystem: Ubuntu20.04
      privateIP: 10.12.118.110
      roles:
      - ControlPlane
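  # Place Rafay system components on the nodes labeled app=infra above,
  # tolerating their NoSchedule taints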
  systemComponentsPlacement:
    nodeSelector:
      app: infra
    tolerations:
    - effect: NoSchedule
      key: app
      operator: Equal
      value: infra
    - effect: NoSchedule
      key: infra
      operator: Equal
      value: rafay
  type: mks

Kubernetes Certificate Rotation

Support has now been added to rotate Kubernetes certificates for upstream Kubernetes clusters via the UI and API. This can be done either manually or automatically based on certificate expiry. This capability streamlines the entire certificate rotation process.

For more information on this feature, please refer here.

Cert Rotation

Note

Please note that the Kubernetes certificate rotation feature described above is not supported on Windows-based upstream clusters.

Conjurer Location Change for rctl apply Path

The following changes have been made:

  • When provisioning a cluster using the rctl apply path, the Conjurer binary was previously stored in the user's home directory. The Conjurer binaries are now stored in /usr/bin instead. This update ensures that concurrent writes by multiple nodes do not cause corruption when sharing a common NFS folder path.

  • The passphrase (txt) and certificate (PEM file), which were pulled down during provisioning, are now stored in /tmp. These files are no longer needed after the node has been provisioned.

CentOS 7 EOL

Due to CentOS 7 reaching its end-of-life (EOL) on June 30, 2024, this release no longer supports creating new clusters using CentOS 7.

We recommend transitioning to alternative supported operating systems like Rocky Linux, AlmaLinux, or RHEL as replacements for CentOS 7.

Read more about CentOS 7 End of Life.


Terraform Provider

Caching

In certain scenarios, a diff is shown even when there are no changes. A caching mechanism has been implemented for the Terraform schema to alleviate this issue for the following resources:

  • rafay_eks_cluster
  • rafay_aks_cluster

Important

Limited Access - This capability is selectively enabled for customer orgs. Please reach out to support if you want to get this enabled for your organization. Support for more resources will be added with future releases.

Note

If users reorder elements in a list within the Terraform configuration, Terraform sees this as a difference during re-apply. This doesn't necessarily mean that the infrastructure needs updating; it indicates that the configuration has changed. Issues related to Terraform behavior like this are not resolved by caching.


Deprecation

Cluster Templates

The Cluster Templates feature set is deprecated with this release and support for this feature will be removed in a future release. This means that no enhancements or bug fixes will be added to this feature. Users can leverage Environment Manager to achieve the same use case.

vSphere LCM

The vSphere Cluster Lifecycle Management capability is deprecated with this release, and support for this feature will be removed in a future release. Users are encouraged to migrate to the Upstream Kubernetes for Bare Metal and VMs (MKS) offering. The infrastructure for Upstream Kubernetes on vSphere environment can be deployed leveraging Environment Manager.


Environment Manager

RCTL support

With the introduction of this capability, it is possible to execute Environment Manager workflows using the RCTL command line utility. The RCTL CLI can be embedded into the preferred workflow automation (e.g., a CI/CD pipeline) to perform operations such as creating templates and deploying or shutting down environments.

For more information on this feature, please refer here.

Support for HCP Terraform Provider

Platform teams will be able to seamlessly integrate and leverage existing investments in HCP Terraform with the introduction of this provider option. With this integration, Platform teams can define templates and enable self-service for developers and data scientists within Rafay, while infrastructure provisioning and state file management are handled by HCP Terraform.

For more information on this, please refer here.

Note: This provider option is only available to HashiCorp-licensed customers.

HCP Terraform

Support for OpenTofu Provider

Support for the OpenTofu provider option is now added as part of the resource template configuration. This gives customers the flexibility to choose their preferred provider for infrastructure provisioning and state management.

For more information on this and the steps for migrating from the Terraform provider to the OpenTofu provider, please refer here.

Note: The Terraform provider option is being deprecated with the introduction of this option.


Clusters

Resources page

Sorting support is now available for all columns.


Cost Management

Cost Explorer UI improvements

A number of improvements have been implemented with this release including:

  • Tooltips that detail how the efficiency scores are calculated
  • Additional columns (Resource Efficiency - CPU, Resource Efficiency - Memory, Utilized Cost, Idle Cost) on export of data
  • Availability of secondary filters based on primary filter selection
  • Sorting of data in the Cost Explorer table based on Efficiency Score

Secrets Management

Vault Integration

The Vault Agent Injector version deployed to the clusters has been updated to v1.17.2 with this release.


2.8 Release Bug Fixes

Bug ID     Description
RC-35542   MKS: containerd version is not updated on all nodes during a Kubernetes upgrade
RC-34562   OpenCost agent is not deployed with a priorityClassName
RC-34492   AKS: Unable to update cloud credentials associated with a cluster using the RCTL v3 spec
RC-32012   GKE: Cluster creation fails when the name includes "rafay"
RC-36073   Incorrect kubeconfig when downloaded through Terraform
RC-33947   Unintuitive error message on a blueprint sync failure when the add-on name is the same as a Rafay system add-on
RC-36557   UI: Network policies do not persist when creating a new blueprint version