Supported Environments

Please review the information listed below to understand the supported environments and operational requirements.


Operating Systems

  • Ubuntu 18.04 LTS (64-bit)
  • Ubuntu 20.04 LTS (64-bit)
  • CentOS 7 (64-bit)
  • RHEL 7, 8 (64-bit)
  • Ubuntu 16.04 LTS (64-bit) [Deprecated]

Kubernetes Versions

The following versions of Kubernetes are currently supported.

  • v1.21.x
  • v1.20.x
  • v1.19.x
  • v1.18.x
  • v1.17.x

Containerd

Starting with k8s v1.20.x, support for Dockershim has been removed. New clusters will be provisioned with the containerd CRI. When older versions of k8s are upgraded in place, they will also be upgraded to use the containerd CRI. Customers should therefore account for their k8s resources being restarted during the upgrade.

containerd is a high-level container runtime that implements the CRI spec. It pulls images from registries, manages them, and then hands them over to a lower-level runtime that actually creates and runs the container processes. containerd was separated out of the Docker project to make Docker more modular.
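You can confirm which runtime each node is using after provisioning or an upgrade by inspecting the CONTAINER-RUNTIME column reported by kubectl. The output below is illustrative (node names and versions will differ, and some columns are trimmed for brevity).

$ kubectl get nodes -o wide
NAME     STATUS   ROLES    AGE   VERSION   CONTAINER-RUNTIME
node-1   Ready    master   12d   v1.20.6   containerd://1.4.4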


CPU and Memory

Resource            Minimum    Recommended
Memory per Node     8 GB       >64 GB
vCPUs per Node      4          >16

Note

For a single-node, converged cluster, Kubernetes, the k8s management operator and the default blueprint components require a baseline of 2 CPUs and 4 GB RAM with local storage (see below). Provision a VM/system with more than this baseline to ensure you have room to deploy your workloads.
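A quick way to confirm that a node meets these minimums is to check the host directly and compare against what Kubernetes reports as allocatable. The node name below is a placeholder.

$ nproc                                                # number of vCPUs on the host
$ free -h                                              # total and available memory
$ kubectl describe node <node-name> | grep -A 6 Allocatable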


GPU

Follow these instructions if your workloads require GPUs.
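As a sketch, and assuming the NVIDIA device plugin is installed on the GPU nodes, a workload requests GPUs through the nvidia.com/gpu resource limit. The pod name and image tag below are illustrative.

$ cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: gpu-smoke-test            # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: cuda
    image: nvidia/cuda:11.0-base  # illustrative image tag
    command: ["nvidia-smi"]
    resources:
      limits:
        nvidia.com/gpu: 1         # request a single GPU
EOF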


Inter-Node Networking

For multi-node clusters, ensure that the nodes are configured to communicate with each other over all UDP and TCP ports.
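A simple way to spot-check connectivity between nodes is with netcat (nc). The IP address and ports below are placeholders: 6443 is the Kubernetes API server port and 8472 is the VXLAN port used by some CNIs.

$ nc -zv 10.0.0.12 6443      # TCP check from one node to another
$ nc -zvu 10.0.0.12 8472     # UDP check (UDP results can be less reliable)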


Cluster Networking

Details on the available CNI integrations can be found under Integrations -> Cluster Networking.


Forward Proxy

Enable and configure this setting if your instances are not allowed direct connectivity to the controller and all requests have to be forwarded through a non-transparent proxy server.
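The exact proxy settings are supplied when configuring the cluster, but at the OS level a non-transparent proxy is typically expressed through the standard environment variables shown below. The hostname, port and exclusion list are placeholders for your environment.

$ export HTTP_PROXY=http://proxy.example.com:3128
$ export HTTPS_PROXY=http://proxy.example.com:3128
$ export NO_PROXY=localhost,127.0.0.1,10.96.0.0/12,192.168.0.0/16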


Storage

Multiple turnkey storage integrations are available as part of the standard cluster infrastructure blueprint. These integrations dramatically reduce the operational burden associated with provisioning and managing Persistent Volumes (PVs), especially for bare metal and VM-based environments.

We have worked to eliminate the underlying configuration and operational complexity associated with storage on Kubernetes. From a cluster administrator's perspective, there is nothing to do other than "select" the required option. These turnkey storage integrations also help ensure that stateful workloads can immediately benefit from "dynamically" provisioned PVCs.


Local PV

This is a required (mandatory) storage class.

  • Based on OpenEBS for upstream Kubernetes clusters on bare metal and VM-based environments.

  • Based on Amazon EBS for upstream Kubernetes clusters provisioned on Amazon EC2 environments. Requires configuration with an appropriate AWS IAM Role for the controller to dynamically provision EBS-based PVCs for workloads.

A Local PV is particularly well suited for the following use cases (a sample PVC manifest follows the list):

  • Stateful workloads that are already capable of performing their own replication for HA and basic data protection. This eliminates the need for the underlying storage to copy or replicate the data for these purposes. Good examples are Mongo, Redis, Cassandra and Postgres.

  • Workloads that need very high throughput from the underlying storage (e.g. SSDs), with the guarantee of data consistency on disk.

  • Single-node, converged clusters where networked, distributed storage is not available or possible (e.g. developer environments, edge deployments).
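Below is a minimal sketch of a PVC that binds against the local-storage class shown in the storage class listing later in this document. The claim name and requested size are illustrative.

$ cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: local-data              # illustrative name
spec:
  storageClassName: local-storage
  accessModes:
    - ReadWriteOnce             # Local PVs are node-local, so single-node access
  resources:
    requests:
      storage: 10Gi             # illustrative size
EOF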


Distributed Storage

Optional and currently based on GlusterFS and Heketi for upstream Kubernetes clusters on bare metal and VM-based environments.

Gluster is a scalable network filesystem that allows the creation of a large, distributed storage solution based on commodity hardware. Gluster storage can then be connected to Kubernetes to abstract the volume from your services. Heketi provides a RESTful volume management framework for GlusterFS. It is critical for supporting dynamically provisioned GlusterFS volumes, acting as the glue between GlusterFS and Kubernetes.

This is well suited for environments that need a highly available, shared storage platform. It allows pods to be rescheduled onto any worker node in the cluster and still use the underlying PVC transparently.
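Because the volume is network-attached and shared, a claim against the GlusterFS storage class can use the ReadWriteMany access mode. The sketch below assumes the glusterfs-storage class shown in the next section; the claim name and requested size are illustrative.

$ cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-data             # illustrative name
spec:
  storageClassName: glusterfs-storage
  accessModes:
    - ReadWriteMany             # shared across pods on any worker node
  resources:
    requests:
      storage: 20Gi             # illustrative size
EOF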


Multiple Storage Classes

It is possible to have multiple storage classes active concurrently, with one of them acting as the "default" storage class. An example is shown below where GlusterFS is configured as the default storage class. Workloads whose PVC YAML does not explicitly specify a storage class will end up using the default storage class.

$ kubectl get sc
NAME                          PROVISIONER               AGE
glusterfs-storage (default)   kubernetes.io/glusterfs   13m
local-storage                 openebs.io/local          17m
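If you need to change which class is the default, the standard Kubernetes storageclass.kubernetes.io/is-default-class annotation can be toggled with kubectl patch. For example, to make glusterfs-storage the default as shown in the output above (class names match that example):

$ kubectl patch storageclass local-storage -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "false"}}}'
$ kubectl patch storageclass glusterfs-storage -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'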

Storage Requirements

Use the information below to ensure you have provisioned sufficient storage for workloads on your cluster.

Root Disk

The root disk for each node is used for the following:

  • Docker images (cached for performance)
  • Kubernetes data and binaries
  • etcd data
  • consul data
  • system packages
  • Logs for components listed above

Logs are automatically rotated using "logrotate". From a storage capacity planning perspective, ensure that you have provisioned sufficient storage in the root disk to accommodate your specific requirements.

  • Raw, unformatted
  • Min: 50 GB, Recommended: >100 GB
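A quick way to check available capacity on each node's root disk, and the directories that typically grow over time, is shown below. The paths are common Linux defaults and may differ in your environment.

$ df -h /
$ sudo du -sh /var/lib/containerd /var/lib/docker /var/lib/etcd 2>/dev/null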

Note

On a single-node cluster, a baseline of 30 GB of storage is required to store logs, images, etc. The remaining 20 GB of the 50 GB minimum will be used for PVCs consumed by workloads. Allocate and plan for additional storage appropriately for your workloads.


Secondary Disk

OPTIONAL and required only if the GlusterFS storage class option is selected. This disk is dedicated to, and used only for, end user workload PVCs.

  • Raw, unformatted
  • Min: 100 GB, Recommended: >500 GB per node
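You can confirm that the secondary disk is raw and unformatted with lsblk. The device name below is an example; an empty FSTYPE column indicates no filesystem is present on the device.

$ lsblk -f /dev/sdb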