
Naveen Chakrapani

Enforcing ServiceNow-Based Approvals with Rafay

Enterprises often require explicit approvals before critical actions can proceed, especially when provisioning infrastructure or making configuration changes. With Rafay’s out-of-the-box (OOB) workflow handlers, customers can easily integrate with popular ITSM systems such as ServiceNow (SNOW).


This post explains how to configure and use Rafay’s ServiceNow Workflow Handler to enforce approval gates.


Workflow Handlers in Rafay

Rafay enables platform teams to attach Workflow Handlers to key actions as pre-hooks or post-hooks:

  • Pre-hook Handlers: Triggered before an action (e.g., pause provisioning until approval is received)
  • Post-hook Handlers: Triggered after an action (e.g., notify stakeholders after infrastructure (environment) creation)
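To make the pre-hook pattern concrete, here is a minimal Python sketch of a ServiceNow-backed approval gate. It opens a change request via the ServiceNow Table API and polls it until it is approved or rejected before letting the guarded action proceed. The instance URL, credentials, and ticket summary are placeholders, and this is only an illustration of the gate-then-proceed behavior, not Rafay's handler implementation.

```python
# Minimal sketch of a ServiceNow-backed approval gate (not Rafay's handler code).
# Creates a change request via the ServiceNow Table API, then polls its approval
# state before allowing the guarded action to proceed. Instance URL, credentials,
# and field values are illustrative placeholders.
import time
import requests

SNOW_INSTANCE = "https://example.service-now.com"   # placeholder instance
AUTH = ("api_user", "api_password")                 # placeholder credentials


def request_approval(summary: str) -> str:
    """Open a change request and return its sys_id."""
    resp = requests.post(
        f"{SNOW_INSTANCE}/api/now/table/change_request",
        auth=AUTH,
        json={"short_description": summary},
        headers={"Accept": "application/json"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["result"]["sys_id"]


def wait_for_approval(sys_id: str, poll_seconds: int = 60) -> bool:
    """Block until the change request is approved or rejected."""
    while True:
        resp = requests.get(
            f"{SNOW_INSTANCE}/api/now/table/change_request/{sys_id}",
            auth=AUTH,
            headers={"Accept": "application/json"},
            timeout=30,
        )
        resp.raise_for_status()
        state = resp.json()["result"].get("approval", "")
        if state == "approved":
            return True
        if state == "rejected":
            return False
        time.sleep(poll_seconds)


if __name__ == "__main__":
    ticket = request_approval("Provision vCluster for feature branch testing")
    if wait_for_approval(ticket):
        print("Approval received; proceeding with provisioning")
    else:
        print("Request rejected; aborting")
```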

Typical Scenarios

Here are a few use cases where ServiceNow-based approvals come into play:

  • Developers request a vCluster to test their app before raising a PR
  • Platform admins initiate a Kubernetes upgrade for a fleet of clusters that requires approval

Support for Parallel Execution with Rafay's Integrated GitOps Pipeline

At Rafay, we are continuously evolving our platform to deliver powerful capabilities that streamline and accelerate the software delivery lifecycle. One such enhancement is the recent update to our GitOps pipeline engine, designed to reduce execution time and increase flexibility, enabling a better experience for platform teams and developers alike.

Integrated Pipeline for Diverse Use Cases

Rafay provides a tightly integrated pipeline framework that supports a range of common operational use cases, including:

  • System Synchronization: Use Git as the single source of truth to orchestrate controller configurations
  • Application Deployment: Define and automate your app deployment process directly from version-controlled pipelines
  • Approval Workflows: Insert optional approval gates to control when and how specific pipeline stages are triggered, offering an added layer of governance and compliance

This comprehensive design empowers platform teams to standardize delivery patterns while still accommodating organization-specific controls and policies.

From Sequential to Parallel Execution with DAG Support

Historically, Rafay’s GitOps pipeline executed all stages sequentially, regardless of interdependencies. While effective for simpler workflows, this model added unnecessary execution time to more complex operations whose stages do not depend on one another.

With our latest update, the pipeline engine now supports Directed Acyclic Graphs (DAGs) — allowing stages to execute in parallel, wherever dependencies allow.
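To illustrate the difference, the sketch below runs a small set of hypothetical pipeline stages as a DAG using only the Python standard library: stages whose dependencies have all completed run concurrently, while dependent stages wait. The stage names and dependency graph are made up, and this is a conceptual model rather than Rafay's pipeline engine.

```python
# Conceptual sketch of DAG-based stage execution (illustrative, not Rafay's engine).
# Stages with no unmet dependencies run in parallel; dependents wait for parents.
from concurrent.futures import ThreadPoolExecutor, wait, FIRST_COMPLETED
import time

# Hypothetical pipeline: build -> (deploy-dev, scan) -> deploy-prod
STAGES = {
    "build": [],
    "deploy-dev": ["build"],
    "scan": ["build"],
    "deploy-prod": ["deploy-dev", "scan"],
}


def run_stage(name: str) -> str:
    print(f"starting {name}")
    time.sleep(1)  # stand-in for the real stage work
    print(f"finished {name}")
    return name


def run_dag(stages: dict[str, list[str]]) -> None:
    done: set[str] = set()
    running: dict = {}  # future -> stage name
    with ThreadPoolExecutor() as pool:
        while len(done) < len(stages):
            # Schedule every stage whose dependencies are all satisfied.
            for name, deps in stages.items():
                if (name not in done and name not in running.values()
                        and all(dep in done for dep in deps)):
                    running[pool.submit(run_stage, name)] = name
            finished, _ = wait(running, return_when=FIRST_COMPLETED)
            for future in finished:
                done.add(running.pop(future))


if __name__ == "__main__":
    run_dag(STAGES)
```

With the sequential model, the four hypothetical stages above would take four time units; with the DAG, deploy-dev and scan overlap and the run completes in three.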

Simplifying Blueprint and Add-on Management with Draft Versions

Managing infrastructure at scale demands both agility and precision—especially when it comes to version control. At Rafay, we have long supported versioning for key configuration objects such as Blueprints and Add-ons, enabling platform teams to roll out changes systematically and maintain operational consistency.

However, as many teams have discovered, managing these versions during testing and validation phases can introduce unnecessary complexity. We are excited to announce a major usability enhancement: Support for Draft Versions.

Why Versioning Matters

Versioning in Rafay’s platform delivers several key advantages:

  • Change Tracking: Keep a historical record of changes made to Blueprints and Add-ons over time
  • Staged Rollouts: Gradually deploy updates across environments and clusters to minimize risk
  • Compliance Assurance: Demonstrate adherence to organizational policies and track Day-2 changes in a controlled way

These capabilities are especially crucial for teams responsible for maintaining secure, production-grade Kubernetes environments.

The Challenge: Version Sprawl During Testing

While versioning is powerful, it has traditionally introduced friction during the testing and validation phase. Each time a platform engineer made a minor change to an Add-on or Blueprint, a new version needed to be created—even if the version wasn’t production-ready.

This led to:

  • Version fatigue, with large volumes of partially validated versions cluttering the system
  • Increased manual overhead and inefficiency for platform teams
  • Risk of accidental usage of incomplete configurations in downstream projects

Introducing "Schedules" on the Rafay Platform: Simplifying Cost Optimization and Compliance for Platform Teams

Platform teams today are increasingly tasked with balancing cost efficiency, compliance, and operational agility across complex cloud environments. Actions such as cost-optimization measures and compliance-related tasks are critical, yet executing these tasks consistently and effectively can be challenging.

With the recent introduction of the “Schedules” capability on the Rafay Platform, platform teams can now orchestrate one-time or recurring actions across environments in a standardized, centralized manner. This new feature enables teams to implement cost-saving policies, manage compliance actions, and ensure operational efficiency—all from a single interface. Here’s a closer look at how this feature can streamline your workflows and add value to your platform operations.


Enhancing Security and Compliance in Break Glass Workflows with Rafay

Maintaining stringent security and compliance standards is more critical than ever today. Implementing break glass workflows for developers presents unique challenges that require careful consideration to prevent unauthorized access and ensure regulatory compliance.

In the previous blog, we introduced the concept of break glass workflows and why organizations require them. This blog post delves into how Rafay enables Platform teams to orchestrate secure and compliant break glass workflows within their organizations. Watch a video recording of this feature in Rafay.

Declarative configuration for Cluster Overrides


By default, K8s objects require certain values to be set inside their specs that match the cluster's configuration. If this were done within the add-on (or workload) manifest, many duplicate add-ons (or workloads) would need to be created for a fleet of clusters. To mitigate this, the platform supports cluster overrides. These allow the customer to use a single add-on (or workload) org-wide and dynamically inject values into a manifest as it is being deployed to the cluster, as sketched after the examples below.

Examples include:

  • Use of a different license key for a security tool based on the business unit

  • Configuration of different resource requests for a monitoring tool based on environment type (test or prod)

  • Dynamic configuration of cluster name during deployment of a load balancer (e.g. AWS Load Balancer)
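The following Python sketch captures the idea behind the third example: a single generic manifest is combined with per-cluster values at deploy time, so one add-on definition serves the whole fleet. The manifest shape, placeholder syntax, and override values are hypothetical and chosen only to illustrate the substitution; they are not Rafay's cluster override implementation.

```python
# Minimal sketch of the cluster override idea (illustrative, not Rafay's code):
# a single generic add-on manifest is combined with per-cluster values at deploy
# time, so one add-on definition can serve an entire fleet.

# Generic manifest shared org-wide; the {{placeholders}} are filled per cluster.
BASE_MANIFEST = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "aws-load-balancer-controller"},
    "spec": {
        "template": {
            "spec": {
                "containers": [{
                    "name": "controller",
                    "args": ["--cluster-name={{clusterName}}"],
                    "resources": {"requests": {"cpu": "{{cpuRequest}}"}},
                }]
            }
        }
    },
}

# Per-cluster override values, e.g. derived from cluster labels or environment type.
OVERRIDES = {
    "prod-us-east-1": {"clusterName": "prod-us-east-1", "cpuRequest": "500m"},
    "test-us-west-2": {"clusterName": "test-us-west-2", "cpuRequest": "100m"},
}


def render(manifest: dict, values: dict) -> dict:
    """Recursively substitute {{key}} placeholders with cluster-specific values."""
    def walk(node):
        if isinstance(node, dict):
            return {key: walk(val) for key, val in node.items()}
        if isinstance(node, list):
            return [walk(item) for item in node]
        if isinstance(node, str):
            for key, val in values.items():
                node = node.replace("{{" + key + "}}", val)
            return node
        return node
    return walk(manifest)


if __name__ == "__main__":
    for cluster, values in OVERRIDES.items():
        rendered = render(BASE_MANIFEST, values)
        print(cluster, "->",
              rendered["spec"]["template"]["spec"]["containers"][0]["args"])
```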

In-place Upgrades to Amazon EKS v1.28 Clusters using Rafay

In our recent release, we added support for in-place upgrades of EKS clusters based on Kubernetes v1.28.

Our customers have shared with us that they would like to provision new EKS clusters using new Kubernetes versions so that they do not have to plan/schedule for Kubernetes upgrades for these clusters right away. As a result, we generally introduce support for new cluster provisioning for the new Kubernetes version first and then follow up with support for zero touch in-place upgrades.

Note

Organizations that wish to perform sophisticated checks (e.g., for API deprecation) are strongly recommended to use Rafay's Fleet Operations for Amazon EKS.
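For a sense of what such a pre-upgrade check involves, here is an illustrative Python sketch that scans rendered manifests for API versions that are no longer served at or before a target Kubernetes minor version. The removal table contains only two example entries; the authoritative list is the Kubernetes deprecated-API migration guide, and this sketch is not Rafay's Fleet Operations.

```python
# Illustrative pre-upgrade check (not Rafay's Fleet Operations): scan rendered
# manifests for API versions that are removed at or before the target Kubernetes
# minor version. REMOVED_APIS is a small example excerpt; consult the official
# Kubernetes deprecation guide for real upgrades.
import yaml  # third-party PyYAML

# (apiVersion, kind) -> Kubernetes minor version in which the API stops being served.
REMOVED_APIS = {
    ("policy/v1beta1", "PodSecurityPolicy"): 25,
    ("autoscaling/v2beta2", "HorizontalPodAutoscaler"): 26,
}


def check_manifests(manifest_yaml: str, target_minor: int) -> list[str]:
    """Return warnings for objects whose API is removed by the target version."""
    warnings = []
    for doc in yaml.safe_load_all(manifest_yaml):
        if not doc:
            continue
        key = (doc.get("apiVersion"), doc.get("kind"))
        removed_in = REMOVED_APIS.get(key)
        if removed_in is not None and target_minor >= removed_in:
            name = doc.get("metadata", {}).get("name", "<unnamed>")
            warnings.append(
                f"{doc['kind']} {name} uses {doc['apiVersion']}, "
                f"removed in Kubernetes 1.{removed_in}"
            )
    return warnings


if __name__ == "__main__":
    sample = """
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: legacy-psp
"""
    for warning in check_manifests(sample, target_minor=28):
        print(warning)
```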

Rightsizing exercises with Cost Explorer

As organizations increase their K8s footprint and onboard more applications, it becomes extremely critical to have a unified (cross-account, cross-cloud) view of resource utilization metrics across clusters. Without this, organizations will be running blind to their K8s cost structure, and it will be impossible to operate their infrastructure in a cost-effective manner.

A recent release introduced a new integrated capability within the platform referred to as "Cost Explorer". This capability provides organizations with necessary information to effectively undertake "cluster rightsizing" and "application rightsizing" exercises.

Implementing Chargeback/Showback for multi-tenant clusters

As organizations embrace multi-tenancy, i.e., sharing clusters among applications/teams to reduce cluster sprawl and spend, it is imperative that granular resource utilization metrics are collected and aggregated from their clusters. Tracking and reporting costs on a per-application/team basis (referred to as chargeback/showback) is essential for a number of reasons, including:

  • Billing internal teams/applications (their cost center IDs) based on their consumption
  • Gaining visibility into the cost structure to determine inefficiencies and drive cost optimization exercises
  • Forecasting future spend

Rafay's integrated Cost Management solution makes it extremely simple for customers to standardize collection of metrics in a consistent manner across clusters (cloud, on-premise) and implement chargeback/showback models.
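At its core, a showback report allocates a cluster's cost to teams in proportion to the resources their workloads consumed. The Python sketch below shows that allocation with hypothetical namespaces, cost-center names, and CPU-hour figures; it illustrates the calculation only and is not Rafay's Cost Management implementation.

```python
# Minimal showback/chargeback sketch (illustrative; figures are hypothetical).
# Allocate a cluster's monthly cost to teams in proportion to the CPU-hours
# their namespaces consumed.

CLUSTER_MONTHLY_COST = 12_000.00  # USD, e.g. from the cloud provider bill

# namespace -> (owning team / cost center, CPU-hours consumed this month)
USAGE = {
    "payments": ("team-payments", 4_500),
    "search": ("team-search", 3_000),
    "analytics": ("team-data", 2_500),
}


def allocate_costs(total_cost: float, usage: dict) -> dict[str, float]:
    """Split the total cost across cost centers proportionally to CPU-hours."""
    total_hours = sum(hours for _, hours in usage.values())
    allocation: dict[str, float] = {}
    for team, hours in usage.values():
        allocation[team] = allocation.get(team, 0.0) + total_cost * hours / total_hours
    return allocation


if __name__ == "__main__":
    for team, cost in allocate_costs(CLUSTER_MONTHLY_COST, USAGE).items():
        print(f"{team}: ${cost:,.2f}")
```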