
Schedules

Schedules allow automated start, stop, and custom actions on compute and service instances, driven by cron expressions and a time zone.


Configuration Options

Schedules can be defined at three levels:

  1. Project (e.g. a team)
  2. Compute/Service Profile (e.g. a SKU)
  3. Instance (e.g. a deployed VM or notebook)

Project

The Org admin can add a schedule at the project level. A project typically maps to a team with similar operational requirements, so a project-level schedule automatically enforces consistent behavior across all profiles (and instances) in that project.

For example, a data science team based in San Francisco, CA may be associated with a project called "datascience-sf". The admin can then create a schedule for the project that automatically shuts down any running instances in the project at 6pm Pacific.
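As a sketch of what such a schedule evaluates to, the next-trigger computation for a daily 6pm Pacific stop can be expressed with Python's standard zoneinfo module. The function name and shape here are illustrative, not the platform's actual API:

```python
from datetime import datetime, timedelta
from zoneinfo import ZoneInfo

def next_stop(now: datetime, hour: int, tz: str) -> datetime:
    """Return the next occurrence of `hour`:00 local time in time zone `tz`."""
    local_now = now.astimezone(ZoneInfo(tz))
    candidate = local_now.replace(hour=hour, minute=0, second=0, microsecond=0)
    if candidate <= local_now:
        candidate += timedelta(days=1)  # already past today's stop time
    return candidate

# Illustrative: next 6pm Pacific shutdown for the "datascience-sf" project
stop_at = next_stop(datetime.now(ZoneInfo("America/Los_Angeles")),
                    18, "America/Los_Angeles")
```

Using a named time zone (rather than a fixed UTC offset) keeps the 6pm wall-clock time stable across daylight-saving transitions, which is why schedules are defined with a time zone and not an offset.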

Profile

The Org admin can also add schedules on specific profiles made available in their Org by cloud providers. For example, a university admin may attach a 9am to 5pm Pacific Time schedule to the XL compute SKUs, ensuring that no cost is wasted on instances idling when classes are not in session.

Instance

The end user can also add a schedule on a specific compute or service instance. They may wish to do this to save costs when the instances will not be used. For example, a data scientist may wish to automatically start an expensive GPU VM at 10am GMT and shut it down automatically at 5pm GMT.
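A minimal sketch of how a cron expression could be matched against the clock, assuming a plain 5-field dialect supporting only `*` and comma-separated value lists (the platform's actual cron dialect may support more, such as ranges and steps):

```python
from datetime import datetime

def cron_matches(expr: str, dt: datetime) -> bool:
    """Check a 5-field cron expression (minute hour day month weekday)
    against `dt`. Weekday uses the common cron convention Sunday=0."""
    fields = expr.split()
    values = [dt.minute, dt.hour, dt.day, dt.month, dt.isoweekday() % 7]
    for field, value in zip(fields, values):
        if field != "*" and value not in {int(v) for v in field.split(",")}:
            return False
    return True

# Illustrative start/stop pair for the GPU VM example (times in GMT/UTC):
START_EXPR = "0 10 * * *"  # start at 10:00
STOP_EXPR = "0 17 * * *"   # stop at 17:00
```

A scheduler would evaluate these expressions against the current time in the schedule's configured time zone (here GMT) and fire the corresponding start or stop action on a match.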


Roles for Schedules

The Org (Tenant) Admin can set schedules at the profile level to enforce them across all instances launched in the Org. End users can also use schedules to start or stop instances based on these configurations, as long as there are no overrides in place (see the Priority section below).

Important

The schedule configuration of an instance is established at the time of its creation. Any changes made to the schedule configuration will not take effect until the instance is redeployed. Therefore, modifying the overridable schedule configuration at the profile or project level after the deployment will not alter the schedules of existing instances.


Priority

The hierarchy for schedule configuration is as follows: Project → Profile → Instance.

The project-level schedule configuration has the highest priority, meaning it overrides profile-level configurations. Similarly, profile-level configurations take precedence over instance-level configurations.
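This precedence rule amounts to a first-non-empty lookup, sketched below in Python (the names and the schedule shape are illustrative, not the platform's data model):

```python
from typing import Optional

def effective_schedule(project: Optional[dict],
                       profile: Optional[dict],
                       instance: Optional[dict]) -> Optional[dict]:
    """Resolve the schedule that applies to an instance:
    project beats profile, which beats instance."""
    for schedule in (project, profile, instance):
        if schedule is not None:
            return schedule
    return None
```

For example, if both a profile-level and an instance-level schedule exist but no project-level one, the profile-level schedule wins.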


Enforcement

The "Enforce Time Window" feature is an additional configuration that can be enabled or disabled. When it is enabled, end users cannot manually deploy or start instances outside of the specified start-stop time window.

For example, if the schedule is set for Monday from 10:00 AM to 11:00 AM, users cannot create (deploy) instances outside of this timeframe. Any running instances will be stopped at 11:00 AM via the stop action. Previously deployed instances also cannot be started outside this window; instead, they will start automatically at 10:00 AM.
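The enforcement check for the Monday 10:00-11:00 AM example can be sketched as follows, assuming hour-granularity windows for simplicity (the real implementation would evaluate the full schedule definition in its configured time zone):

```python
from datetime import datetime

def within_enforced_window(now: datetime, weekday: int,
                           start_hour: int, stop_hour: int) -> bool:
    """True if `now` falls inside the enforced weekly window.
    Weekday follows Python's convention: Monday=0 ... Sunday=6."""
    return now.weekday() == weekday and start_hour <= now.hour < stop_hour

def can_deploy_or_start(now: datetime, enforce: bool, weekday: int,
                        start_hour: int, stop_hour: int) -> bool:
    """With enforcement off, the schedule is only an action trigger,
    so manual deploy/start is always allowed."""
    if not enforce:
        return True
    return within_enforced_window(now, weekday, start_hour, stop_hour)
```

Note the half-open interval: 11:00 AM itself is outside the window, matching the behavior of instances being stopped at 11 AM.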

Info

If the "Enforce Time Window" feature is disabled, the schedule serves as a simple action trigger to start and stop the instance at the specified start and stop times.


Examples

Example 1

A representative use case is "Time-Sharing Limited GPU Resources Across Teams". Consider an enterprise with a limited pool of high-end GPUs (e.g., A100s or H100s) that must be shared among multiple internal teams (e.g., research, inference, and training teams). To prevent resource contention and ensure fair access, teams are assigned non-overlapping usage windows.

How Schedules Help:

  • Research team: 8:00 AM – 12:00 PM IST
  • Training team: 12:30 PM – 4:30 PM IST
  • Inference team: 5:00 PM – 9:00 PM IST
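Since these windows must not overlap, an admin could sanity-check a proposed allocation with a small helper like the one below (the minutes-since-midnight encoding is an assumption of this sketch, not a platform format):

```python
def windows_overlap(windows):
    """Given (start_minute, stop_minute) pairs within one day,
    return True if any two windows overlap."""
    ordered = sorted(windows)
    return any(prev[1] > nxt[0] for prev, nxt in zip(ordered, ordered[1:]))

# The three team windows above, as minutes since midnight IST:
team_windows = [
    (8 * 60, 12 * 60),            # research:  8:00 - 12:00
    (12 * 60 + 30, 16 * 60 + 30),  # training:  12:30 - 16:30
    (17 * 60, 21 * 60),           # inference: 17:00 - 21:00
]
```

Here `windows_overlap(team_windows)` is False, confirming the half-hour gaps between consecutive windows leave time for one team's instances to stop before the next team's start.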

Benefits:

  1. Enables fair and scheduled GPU sharing
  2. Prevents one team from monopolizing resources
  3. Automates transitions and avoids manual intervention
  4. Scales easily across projects using profile-level or tag-based schedules

Example 2

Shift-Left Provisioning for Slow-Starting Workloads

Some environments (such as GPU-backed training clusters, large LLM inference stacks, or complex data pipelines) can take 10–30 minutes to provision and become operational. Users expect these environments to be ready when they log in, not to wait through deployment delays. This is particularly impactful in education, enterprise R&D, and MLOps platforms, where slow-starting resources can derail daily workflows or testing cycles.

Use a “start” schedule to pre-warm or provision environments ahead of expected usage windows. For example:

  • Environment scheduled to start at 7:30 AM IST
  • Users typically log in by 8:00 AM IST
  • Auto-stop schedules shut down idle environments after work hours
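The lead-time arithmetic behind such a pre-warm schedule is simple but worth making explicit; the helper below is purely illustrative:

```python
from datetime import datetime, timedelta
from zoneinfo import ZoneInfo

def prewarm_start(expected_login: datetime,
                  provisioning_time: timedelta) -> datetime:
    """Start time that gives the environment `provisioning_time`
    to become fully operational before users log in."""
    return expected_login - provisioning_time

# Illustrative: 8:00 AM IST login, 30-minute provisioning lead
login = datetime(2024, 6, 3, 8, 0, tzinfo=ZoneInfo("Asia/Kolkata"))
start = prewarm_start(login, timedelta(minutes=30))  # 7:30 AM IST
```

The provisioning lead should be padded for slow-starting workloads; a cluster that takes up to 30 minutes to come up needs a start schedule at least that far ahead of the expected login time.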

Benefits:

  1. Eliminates cold-start delays for time-sensitive teams
  2. Improves user experience for researchers, analysts, or students
  3. Ensures readiness without manual intervention
  4. Optimizes resource scheduling across dependent systems (e.g., inference + storage + UI)