
Architecture

The Controller itself is a containerized, microservices-based application that is packaged and distributed as a Helm chart. In addition to the controller Helm chart, an installer is provided to help provision and operate the Kubernetes and storage infrastructure layer for the controller software. All software components, including those that persist state in storage, operate in the Kubernetes cluster.
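Because the controller ships as a Helm chart, site-specific settings are typically supplied through a values override at install time. The fragment below is only a sketch: the key names (`controllerDomain`, `highAvailability`, and so on) are illustrative assumptions, not the chart's actual schema, which should be taken from the shipped chart itself.

```yaml
# values-override.yaml -- hypothetical example; consult the shipped chart
# for the real value names and defaults.
controllerDomain: controller.example.com   # wildcard domain for the controller
highAvailability: true                     # HA vs. single-node profile
storage:
  dataVolume: /data                        # persistent data location
```

An override like this would then be passed to the install with `helm install -f values-override.yaml`.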



Multi-Tenancy

The controller supports full multi-tenancy, allowing customers to provision and manage fully isolated organizations (tenants) for different teams, business units, end customers, or operating environments.

Deployment Options

For the air-gapped, self-hosted controller, users have two deployment options:

High Availability Option

Optimized for production, this option deploys a highly available configuration consisting of three Kubernetes master nodes and at least one worker node. The deployment can be expanded at any time by adding worker nodes.


Single Node Option

Designed for non-production use, testing, and demos, this option allows the entire controller software stack to be provisioned on a single-node cluster, functioning as both master and worker.



High Availability Controller Deployment in AWS

Pre-requisites

  • Minimum of three EC2 instances in different AZs, each with 16 vCPUs and 64 GB memory (m5.4xlarge), a 100 GB root disk, and 500 GB of additional EBS storage.
  • Wildcard domain with DNS records pointing to load balancers.
  • ACM-signed wildcard certificate for the controller domain.
  • AWS Local User account for provisioning EKS clusters later through the controller.
  • Security groups allowing all TCP and UDP ports for communication between the EC2 instances.
  • Open inbound node ports for EC2 instances to allow Load Balancer connections (as detailed in the installation instructions).
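The security-group prerequisites above can also be expressed as infrastructure code. The Terraform fragment below is a sketch under assumptions (the VPC variable and resource names are placeholders), not the documented installation method; the load-balancer node-port ranges are deliberately not guessed here and should be taken from the installation instructions.

```hcl
resource "aws_security_group" "controller_nodes" {
  name   = "controller-nodes"   # hypothetical name
  vpc_id = var.vpc_id           # placeholder

  # All TCP and UDP ports between the controller EC2 instances
  ingress {
    from_port = 0
    to_port   = 65535
    protocol  = "tcp"
    self      = true
  }
  ingress {
    from_port = 0
    to_port   = 65535
    protocol  = "udp"
    self      = true
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}
```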

HA Controller Deployment Network Architecture

This deployment requires a minimum of three EC2 instances across three availability zones for high availability. The Rafay installer deploys Kubernetes on these instances as converged master/worker nodes. The Rafay controller microservices and system services like monitoring and logging are deployed as standard Kubernetes artifacts. Load balancers redirect external traffic from users (TLS) or target clusters (mTLS) to the appropriate microservices within the controller. Persistent data for stateful services is stored in EBS storage, replicated across multiple AZs for high availability. Logging and monitoring services provide visibility, alerting, and debugging capabilities, while a backup and restore solution backs up controller data to an S3 bucket for disaster recovery.


Target EKS Clusters to Controller Communication

Each managed EKS Cluster has a Rafay Kubernetes Operator deployed in a dedicated namespace for ongoing lifecycle management. The operator communicates securely with the Controller via mTLS for all management operations.


Self-Hosted Bare Metal Architecture

Pre-requisites for Self-Hosted Bare Metal Deployment

Operating System:

  • CentOS 7.9
  • Ubuntu 22.04
  • RHEL 8/9

Hardware Requirements:

  • Single-Node Controller: 1 node
  • High Availability Controller: 3 master nodes and 1 worker node
  • System Size (Minimum): 16 CPU, 64 GB RAM
  • Root Disk (Minimum): 250 GB
  • Temp Directory (/tmp): Minimum 50 GB (if not part of root disk)
  • Data Disk (formatted): 500 GB (attached as /data volume, size may vary based on storage requirements)
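The formatted data disk must be mounted persistently at `/data`. A typical `/etc/fstab` entry might look like the following; the device path (`/dev/nvme1n1`) and filesystem type are assumptions that vary by machine.

```
# /etc/fstab -- mount the 500 GB data disk at /data (device path is illustrative)
/dev/nvme1n1  /data  ext4  defaults,nofail  0  2
```

The `nofail` option keeps the node bootable if the disk is briefly unavailable at startup.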

Network Requirements:

  • RHEL Installations: Connectivity to default repository servers.
  • Inbound Traffic: Allow inbound TCP traffic on port 443 to all instances and ensure all localhost ports are reachable.
  • Non-DNS Environments: Enable UDP traffic on port 30053.

Security Settings:

  • Disable SELinux and Firewall: Disable SELinux and firewall rules on all nodes for the initial deployment; they can be re-enabled after configuration is complete.
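On RHEL/CentOS, SELinux enforcement is typically turned off immediately with `setenforce 0` and the firewall stopped with `systemctl disable --now firewalld`; keeping SELinux off across reboots requires a change in `/etc/selinux/config` such as the following.

```
# /etc/selinux/config -- persist permissive/disabled mode across reboots
SELINUX=disabled
```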

Additional Considerations:

  • Ensure proper network connectivity between nodes.
  • Verify that DNS resolution is working correctly.
  • Have administrative privileges to install and configure required components.
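The sizing minimums listed above can be sanity-checked before installation. The script below is a hypothetical preflight sketch (not part of the installer) that compares the local Linux host against the 16 CPU / 64 GB RAM / 250 GB root-disk minimums from this guide.

```shell
#!/bin/sh
# Preflight sketch: compare this host against the minimum controller
# node size (16 CPU, 64 GB RAM, 250 GB root disk). Linux-only.

meets_min() {
  # $1 = actual value, $2 = required minimum; prints ok or low
  if [ "$1" -ge "$2" ]; then echo ok; else echo low; fi
}

cpus=$(nproc)
mem_gb=$(awk '/MemTotal/ {printf "%d", $2/1024/1024}' /proc/meminfo)
root_gb=$(df -BG --output=size / | tail -1 | tr -dc '0-9')

echo "cpu:  $cpus ($(meets_min "$cpus" 16))"
echo "ram:  ${mem_gb}G ($(meets_min "$mem_gb" 64))"
echo "disk: ${root_gb}G ($(meets_min "$root_gb" 250))"
```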
