- Improved error messaging for the rafay_download_kubeconfig Terraform resource (RC-41733)
- Resolved an issue where terraform apply would display a diff in spec.variables for static resources, even when there were no actual changes (RC-41568)
- Fixed an issue where re-applying Terraform for resource templates with destroy OpenTofu hooks resulted in an error (RC-41128)
- Resolved issues with Terraform flatteners where changes to the working directory path and service account name in workflow handler specs were not detected during terraform apply --refresh-only after manual updates via the UI (RC-41035)
- Added validation to prevent the creation of environment templates with invalid schedule task types or agent override types (RC-40878)
- Enhanced clarity of error messages related to hooks in resource and environment templates (RC-40465)
- Resolved an issue where Terraform re-apply showed diffs in certain sensitive values for the Workflow Handler (RC-40556)
- Fixed incorrect diffs appearing during terraform plan for Cloud Credentials v3 resources (RC-35451)
- Resolved an issue preventing simultaneous blueprint updates and label additions to imported clusters in a single terraform apply (RC-40669)
- AKS: Fixed an issue where Terraform failed with the error 'Day 2 operation SystemPlacementUpdation is not allowed because Cluster is not in running state' when attempting to start the cluster.
A new doc, Cluster Sharing Best Practices, is now available to help users correctly configure and manage cluster sharing across projects using the Terraform provider.
This documentation outlines supported patterns, best practices, and how to avoid configuration conflicts when using rafay_cluster_sharing, rafay_cluster_sharing_single, or embedded sharing blocks.
Enhancements have been made to the RCTL experience when adding or deleting nodes in Upstream Kubernetes clusters.
Previously, when performing bulk node operations (add/delete), the response from RCTL did not clearly indicate which nodes were impacted. With this enhancement, the response message now includes the names of the affected nodes, providing users with clearer visibility and traceability.
This helps users confirm exactly which nodes were successfully added or removed from their upstream cluster.
Below is an example of the enhanced response returned when performing a bulk operation using RCTL:
{"taskset_id":"9dk3emn","operations":[{"operation":"NodeAddition","resource_name":"test-21","status":"PROVISION_TASK_STATUS_PENDING"},{"operation":"BulkNodeUpdate","resource_name":"test-21","status":"PROVISION_TASK_STATUS_PENDING"},{"operation":"BulkNodeDelete","resource_name":"test-21","status":"PROVISION_TASK_STATUS_PENDING"}],"comments":"Node add operation will be performed on: test-44, test-45. Node delete operation will be performed on: test-43. Node update operation will be performed on: test-41. The status of the operations can be fetched using taskset_id","status":"PROVISION_TASKSET_STATUS_PENDING"}
As part of provisioning clusters using RCTL with a cluster configuration file, users can now initiate MKS-specific Conjurer preflight checks directly using a new command-line flag.
Use the following command to invoke Conjurer preflight checks during cluster provisioning:
Handling of rotate Flag in resolv.conf
When the rotate option is enabled in resolv.conf, DNS requests are round-robined across available servers, which can lead to intermittent discovery failures for Consul.
- The conjurer binary now warns users when the rotate flag is detected.
- During day-2 operations, the system automatically removes the rotate flag to ensure consistent DNS resolution and avoid cluster issues.
These enhancements help avoid DNS-related instability and ensure reliable and consistent service discovery in upstream Kubernetes clusters, even in dynamic or custom network environments.
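The detection and removal described above can be sketched in a few lines. This is an illustrative example of recognizing and stripping the rotate option from resolv.conf contents, not the actual conjurer implementation.

```python
def has_rotate_option(resolv_conf_text: str) -> bool:
    """Return True if any 'options' line in resolv.conf enables rotate."""
    for line in resolv_conf_text.splitlines():
        tokens = line.strip().split()
        if tokens and tokens[0] == "options" and "rotate" in tokens[1:]:
            return True
    return False

def remove_rotate_option(resolv_conf_text: str) -> str:
    """Return the file contents with the rotate flag stripped from options lines."""
    out = []
    for line in resolv_conf_text.splitlines():
        tokens = line.strip().split()
        if tokens and tokens[0] == "options":
            kept = [t for t in tokens if t != "rotate"]
            if len(kept) > 1:  # keep the options line only if other options remain
                out.append(" ".join(kept))
        else:
            out.append(line)
    return "\n".join(out)

sample = "nameserver 10.0.0.2\noptions rotate timeout:2"
detected = has_rotate_option(sample)
cleaned = remove_rotate_option(sample)
print(detected, cleaned)
```

With the sample contents above, `detected` is True and `cleaned` retains the nameserver line and the remaining `timeout:2` option while dropping `rotate`.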
Note
This enhancement is applicable only for newly created upstream MKS clusters. Support for existing clusters will be included in the next release.
This enhancement enables users to upgrade an existing Data Agent from version v1.9.3 to v1.15.1.
The upgrade can be performed using either the UI or RCTL.
As part of the upgrade to v1.15.1, users can optionally enable new capabilities introduced in this version, including:
Enable CSI: Enables support for volume snapshots using the Container Storage Interface (CSI).
SSE-C Encryption Key: Allows users to configure server-side encryption with customer-provided keys for enhanced data security.
These features are available only in version v1.15.1 and are not supported in earlier versions.
For more information about this feature, click here.
Previously, updating tags on self-managed node groups during Day 2 operations was not supported. Attempting to modify or add tags would result in a validation error, preventing the update from being applied.
With this release, users can now update tags on self-managed node groups as part of Day 2 operations, offering improved flexibility and lifecycle management for EKS clusters.
Note
As part of this Day 2 tag update, the nodes will be recycled: a new launch template with the updated tags is created and applied internally to all resources, resulting in the replacement of existing nodes without requiring manual intervention.
Partner-level dashboards will be available, offering insights into various usage metrics with the ability to filter data by organization. These dashboards can help answer questions such as:
How many SKUs or profiles have been created?
How many instances have been launched, by whom, and under which organizations?
Which profiles are most frequently used for instance creation?
Are there any instances currently in a failed or unhealthy state?
When were instances created, and how long have they been running?
What are the usage trends over time for instance creation?
Who are the most active users across organizations?
For more information about this feature, click here.
The platform currently provides utilization metrics for instances based on Kubernetes clusters. Similar metrics, such as GPU and memory utilization, are being extended to support VM-based instances as well. Users can filter by time range and view historical metrics going back up to one week.
Several enhancements are being implemented to strengthen authentication, including stricter policies that prevent local users from reusing previously used passwords.
A previous release introduced support for configuring namespace labels at the project level. Upcoming enhancements focus on performance optimization and improvements to the reconciliation loop for more efficient and reliable label management.
This enhancement enables selective execution of specific resources during an environment deployment. It is particularly useful in cases where only certain resources, such as DNS updates for failover, require changes, allowing all other unchanged resources to be skipped from execution.
Note
This feature will initially be supported through non-UI interfaces. UI support will be added in a subsequent release.
For more information about this feature, click here.