Supported Environments

Please review the information below to understand the supported environments and operational requirements.


Operating Systems

  • Ubuntu 18.04 LTS (64-bit)
  • CentOS 7 (64-bit)
  • RHEL 7.9 (64-bit)
  • Ubuntu 16.04 LTS (64-bit)

Kubernetes Versions

The following Kubernetes versions are currently supported. By default, clusters are provisioned with Kubernetes v1.18.x; users can override the default and choose an older supported version if required.

Rafay will periodically update the supported Kubernetes versions to ensure customers have access to the latest patches and versions.

  • v1.19.x
  • v1.18.x (default)
  • v1.17.x
  • v1.16.x
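
For example, once a cluster is provisioned, you can confirm the Kubernetes version in use from any machine with kubectl access to the cluster:

$ kubectl version --short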

CPU, Memory and GPU

Resource           Minimum    Recommended
Memory per Node    8 GB       >64 GB
vCPUs per Node     4          >16
GPU (Optional)     NVIDIA     NVIDIA
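
To verify that provisioned nodes meet these minimums, you can inspect the capacity Kubernetes reports for each node. For example:

$ kubectl get nodes -o custom-columns=NAME:.metadata.name,CPU:.status.capacity.cpu,MEMORY:.status.capacity.memory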

Note

For a single-node, converged cluster, Kubernetes, the Rafay Operator, and the default blueprint components require a baseline of "2 CPUs" and "4 GB RAM" with local storage (see below). Provision a VM/system with more than this baseline to ensure you have room to deploy your workloads.


Inter-Node Networking

For multi-node clusters, ensure that the nodes are configured to communicate with each other over all TCP/UDP ports.
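
As a quick sanity check (not a substitute for opening all ports), netcat can verify reachability between nodes. The IP address and ports below are examples only; 6443 is the Kubernetes API server port and 8472 is a common VXLAN overlay port:

$ nc -zv 10.0.0.2 6443     # TCP reachability check
$ nc -zvu 10.0.0.2 8472    # UDP reachability check (UDP results can be ambiguous)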


Cluster Networking

Additional details on CNI integrations are available under Integrations -> Cluster Networking.


Storage

Rafay provides multiple turnkey storage integrations as part of the standard cluster infrastructure blueprint. These integrations dramatically reduce the operational burden of provisioning and managing Persistent Volumes (PVs), especially for bare metal and VM based environments.

Rafay has worked to eliminate the configuration and operational complexity associated with storage on Kubernetes. From a cluster administrator's perspective, there is nothing to do other than select the required option; Rafay handles everything else under the covers.

These turnkey storage integrations also help ensure that stateful workloads can immediately benefit from "dynamically" provisioned PVCs.


Local PV

This is a required (mandatory) storage class.

  • Based on OpenEBS for bare metal and VM based clusters.

  • Based on Amazon EBS for clusters provisioned in Amazon EC2 environments. Requires an appropriate AWS IAM Role so that Rafay can dynamically provision EBS backed PVCs for workloads.

This is particularly well suited for the following use cases:

  • Stateful workloads that are already capable of performing their own replication for HA and basic data protection. This eliminates the need for the underlying storage to copy or replicate the data for these purposes. Good examples are Mongo, Redis, Cassandra and Postgres.

  • Workloads that need maximum throughput from the underlying storage (e.g. SSDs) with the guarantee of data consistency on disk.

  • Single-node, converged clusters where networked, distributed storage is not available or possible (e.g. developer environments, edge deployments).
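
As a sketch, a PVC that explicitly requests the Local PV storage class might look like the following. The claim name and size are illustrative; local-storage matches the class name shown in the kubectl output later in this section:

$ kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: redis-data              # illustrative name
spec:
  storageClassName: local-storage
  accessModes:
    - ReadWriteOnce             # local volumes are node-local, single-writer
  resources:
    requests:
      storage: 10Gi             # illustrative size
EOF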


Distributed Storage

This storage class is optional and is based on GlusterFS and Heketi for bare metal and VM based clusters.

Gluster is a scalable network filesystem that allows the creation of a large, distributed storage solution based on commodity hardware. Gluster storage can then be connected to Kubernetes to abstract the volume from your services.

Heketi provides a RESTful volume management framework for GlusterFS. This is critical for supporting dynamically provisioned GlusterFS volumes, acting as the glue between GlusterFS and Kubernetes.

This is particularly well suited for environments that need a highly available, shared storage platform. It allows pods to be rescheduled onto any worker node in the cluster while continuing to use the underlying PVC transparently.
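
Because GlusterFS volumes can be mounted by multiple nodes, a PVC against this class can request the ReadWriteMany access mode. A minimal sketch, with an illustrative claim name and size, using the glusterfs-storage class name shown in the kubectl output below:

$ kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-data             # illustrative name
spec:
  storageClassName: glusterfs-storage
  accessModes:
    - ReadWriteMany             # shareable by pods on any worker node
  resources:
    requests:
      storage: 20Gi             # illustrative size
EOF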


Multiple Storage Classes

It is possible to have multiple storage classes active concurrently with one of them acting as the "default" storage class.

An example is shown below where GlusterFS is configured as the default storage class. Workloads that do not explicitly specify a storage class in their PVC YAML will end up using the default storage class.

$ kubectl get sc
NAME                          PROVISIONER               AGE
glusterfs-storage (default)   kubernetes.io/glusterfs   13m
local-storage                 openebs.io/local          17m
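
If you need to change which class is the default, the standard Kubernetes is-default-class annotation can be patched directly. For example, to move the default from glusterfs-storage to local-storage:

$ kubectl patch storageclass glusterfs-storage \
    -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "false"}}}'
$ kubectl patch storageclass local-storage \
    -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'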

Storage Requirements

Use the information below to ensure you have provisioned sufficient storage for workloads on your cluster.

Root Disk

The root disk for each node is used for the following:

  • Docker images (cached for performance)
  • Kubernetes data and binaries
  • etcd data
  • consul data
  • system packages
  • Logs for components listed above

Logs are automatically rotated using "logrotate". From a capacity planning perspective, ensure that the root disk is large enough to accommodate your specific requirements.

  • Raw, unformatted
  • Min: 50 GB, Recommended: >100 GB
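
To confirm the capacity of the root disk on each node before provisioning:

$ df -h /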

Note

On a single node cluster, Rafay requires a baseline of 30 GB of storage for logs, images, etc. Of the 50 GB minimum root disk, the remaining 20 GB is available for PVCs used by workloads. Allocate and plan for additional storage appropriately for your workloads.


Secondary Disk

OPTIONAL: required only if the GlusterFS storage class option is selected. This disk is dedicated to, and used only for, end user workload PVCs.

  • Raw, unformatted
  • Min: 100 GB, Recommended: >500 GB per node
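
To confirm that a candidate secondary disk is raw and unformatted, list the block devices and check that the disk shows no filesystem type or mountpoint:

$ lsblk -f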