Setup
This section describes the steps a platform team administrator must follow to deploy and operate "Ray as Service" on their infrastructure. Rafay Environment Manager is used to deliver the following:
- Automate and encapsulate complex workflows to deploy and configure each "Ray as Service" tenant.
- Provide end users with a self-service onboarding experience.
- Roll out the "Ray as Service" offering to users in a version-controlled manner.
- Operate multiple versions of the "Ray as Service" offering concurrently on the same host cluster.
- Deprecate and remove older versions of the "Ray as Service" offering when required.
Please review the steps below describing what the administrator must do to set up and configure the offering before onboarding the first end user.
Central Project
To ensure centralized control and org-wide standardization, it is strongly recommended that administrators manage the "Ray as Service" template in a central project under the sole control of the platform team. The template can then be shared hierarchically with downstream projects.
To ensure that users do not accidentally interfere with each other, we will create a separate project per user/team. We will then share the "Ray as Service" template from the central project with the downstream projects.
In the above example, notice that the "Ray as Service" template is shared ONLY with the following projects: "ml-team-alpha" and "ml-team-bravo". The other projects will not have access to "Ray as Service".
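For illustration, the sketch below shows what such a sharing configuration could look like in a declarative spec. The `kind`, `apiVersion`, and field names are assumptions for illustration only and may not match the exact Rafay Environment Manager schema; please consult the product documentation for the authoritative spec.

```bash
# Illustrative sketch only: the kind, apiVersion, and field names are
# assumptions, not the verified Rafay schema. The intent it captures: the
# "kuberay" template lives in the central project and is shared ONLY with
# the two downstream projects named in the example above.
cat <<'EOF' > kuberay-template-sharing.yaml
apiVersion: eaas.envmgmt.io/v1    # hypothetical apiVersion
kind: EnvironmentTemplate         # hypothetical kind
metadata:
  name: kuberay
  project: eaas                   # central project owned by the platform team
spec:
  sharing:
    enabled: true
    projects:                     # only these projects see the template
      - name: ml-team-alpha
      - name: ml-team-bravo
EOF
```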
Load Ray as Service Template
Please follow the steps below to load the "Ray as Service" template into the central project. In the example below, notice that our template "kuberay" is managed in the central project called "eaas" and is shared hierarchically with two downstream projects:
- ray-as-service
- datascientist
Important
As Rafay makes updates and enhancements to the "Ray as Service" offering, we will publish new versions of the template. Customers simply need to load the new version of the template into their central project and share it with the downstream projects.
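If the template spec is managed declaratively, loading a new version can be scripted with the Rafay CLI. A minimal sketch, assuming `rctl` is already configured with credentials for your org; the spec file name is hypothetical, and the `--project` flag selects the central project:

```bash
# Sketch: load (or update to) a new version of the template in the central
# project. The file name is hypothetical; rctl must already be configured.
rctl apply -f kuberay-template-v2.yaml --project eaas
```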
Host Cluster
The "Ray as Service" tenants are all deployed and operated on a single, host Kubernetes cluster. They get to share and use resources from this common host cluster. The image below describes an example where there are three separate "Ray as Service" tenants operating on a single host cluster.
Configure and provision a host Kubernetes cluster with sufficient resources. Administrators can use Rafay Kubernetes Manager to provision and manage the lifecycle of the host Kubernetes cluster. If you provision and manage the lifecycle of the host cluster outside Rafay, please make sure you "import" it into the central Rafay project.
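Before designating a cluster as the host, it is worth confirming that it is reachable and has headroom for the tenants you plan to onboard. A quick sketch using standard kubectl commands (`kubectl top` requires metrics-server on the cluster):

```bash
# Confirm the cluster is reachable and inspect node count and versions.
kubectl get nodes -o wide

# Show current CPU/memory usage per node so you can gauge headroom for
# additional "Ray as Service" tenants (requires metrics-server).
kubectl top nodes
```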
The host cluster should be preinstalled with the following components (an example install sketch follows the list):
- cert-manager
- Volcano
- vcluster-issuer
- Ingress controller
- Persistent storage
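As a reference, the sketch below shows one way to install these prerequisites from their upstream Helm charts and to create a self-signed cert-manager ClusterIssuer named `vcluster-issuer`. The namespaces, the choice of ingress-nginx as the ingress controller, and the self-signed issuer are assumptions; adapt them to your environment and the support matrix.

```bash
# Sketch: install host cluster prerequisites from upstream Helm charts.
# Namespaces and flags are common defaults, not Rafay-mandated values.

# cert-manager (certificate management for the tenants)
helm repo add jetstack https://charts.jetstack.io
helm install cert-manager jetstack/cert-manager \
  --namespace cert-manager --create-namespace \
  --set installCRDs=true

# Volcano (batch scheduler commonly used with Ray/KubeRay)
helm repo add volcano-sh https://volcano-sh.github.io/helm-charts
helm install volcano volcano-sh/volcano \
  --namespace volcano-system --create-namespace

# Ingress controller (ingress-nginx shown as one common choice)
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx --create-namespace

# Self-signed ClusterIssuer named "vcluster-issuer" (assumption: a
# self-signed issuer suffices; substitute your CA or ACME issuer if needed)
kubectl apply -f - <<'EOF'
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: vcluster-issuer
spec:
  selfSigned: {}
EOF

# Persistent storage: ensure a default StorageClass exists (provided by your
# cloud provider or CSI driver; environment-specific, so not shown here).
kubectl get storageclass
```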
Note
Please review the support matrix for a list of supported host Kubernetes clusters and versions.