
System Profiles

System profiles in the Rafay console provide administrators with predefined service and compute profiles, eliminating the need to create new profiles in many cases. Rafay develops and maintains a set of default profiles that are available out of the box, allowing organizations to become operational with GPU PaaS quickly without having to develop or curate their own. While these defaults cover most scenarios, administrators can also build on and extend them to meet specific organizational needs. The System Profiles page offers a consolidated view of the available profiles, categorized by service and compute type for easy access. This streamlines deployment, ensures consistency across deployments, and provides the flexibility to use both default and custom profiles.


Administrators can share system profiles with one or more projects. Once shared, a profile becomes available within those projects, where administrators can use it when creating instances. For example, the service profile "system-mks-kubeflow-test" is shared with the project "defaultproject". Click Save Changes to apply the sharing settings.


Once the service profile is shared, navigate to defaultproject -> Service Profiles. The shared service profile "system-mks-kubeflow-test" is now listed on the Service Profiles dashboard.


Users can now use this service profile when deploying a service instance.
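The sharing model described above can be summarized as: a profile is shared with a set of projects, and it appears on the Service Profiles dashboard of exactly those projects. The sketch below is purely illustrative; the class and function names are hypothetical and do not reflect Rafay's actual API or resource schema.

```python
# Conceptual model (illustrative only) of profile sharing across projects.
# All names here are hypothetical; they are not Rafay API identifiers.

class SystemProfile:
    def __init__(self, name):
        self.name = name
        self.shared_with = set()  # names of projects this profile is shared with

    def share(self, project):
        """Share this profile with a project (i.e. 'Save Changes' in the console)."""
        self.shared_with.add(project)

def profiles_visible_in(project, profiles):
    """Return profile names listed on a project's Service Profiles dashboard."""
    return [p.name for p in profiles if project in p.shared_with]

profile = SystemProfile("system-mks-kubeflow-test")
profile.share("defaultproject")

print(profiles_visible_in("defaultproject", [profile]))
# ['system-mks-kubeflow-test']
print(profiles_visible_in("otherproject", [profile]))
# []
```

The key point the sketch captures is that sharing is additive and per-project: a profile shared with "defaultproject" is visible there but not in other projects until explicitly shared with them.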


Important

  • Refer to this page to create a custom service profile
  • Refer to this page to create a custom compute profile

Default Service Profiles

Notebooks

Notebooks in GPU PaaS enable users to seamlessly launch Jupyter notebooks with GPU-accelerated compute for AI, ML, and data science workloads. These are optimized for high-performance tasks, leveraging predefined profiles for efficient resource allocation.

Inferences

Inference in GPU PaaS enables deploying trained AI/ML models for real-time or batch predictions with GPU acceleration, optimizing performance for tasks like image recognition and anomaly detection. The Rafay console simplifies this by centralizing inference endpoint configuration, ensuring scalability, consistency, and secure access.

AI/ML Jobs

The AI/ML Jobs feature in GPU PaaS allows users to define, configure, and execute resource-intensive AI/ML tasks, such as model training and data processing, with ease. It streamlines workflows by automating resource provisioning, ensuring consistency, and providing scalability for both experimentation and production deployments.

Custom Services

Custom services in GPU PaaS enable users to deploy and manage services optimized for GPU-accelerated workloads. These services offer flexibility for integrating specialized AI/ML workflows, ensuring high performance and scalability.

💡 Important:
Like default service profiles, default compute profiles can also be shared with the required projects and used when deploying a compute instance within those projects.