
Capabilities

Serverless Pods provide developers and data scientists with instant access to powerful compute resources (CPU/GPU) without managing long-lived infrastructure. Each pod is an isolated Ubuntu-based container bundled with compute, networking, and optional storage.

Pods are optimized for ephemeral, event-driven workloads such as inference, API services, and batch jobs. Resources are consumed only while a pod is running, helping teams cut idle costs while retaining flexibility.


Key Capabilities

  • Flexible container environments supporting any container image, including GPU-accelerated ML images with CUDA and PyTorch pre-installed
  • Hardware resources: vCPU, memory, and GPUs (optional)
  • Container volume for OS and temporary storage
  • Network connectivity via SSH
  • Optional persistent disk for stateful workloads
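The resource bundle above can be sketched as a simple spec object. This is a minimal illustration, not a real API: the `PodSpec` class, its field names, and the example image tag are all hypothetical, chosen only to mirror the capability list (vCPU, memory, optional GPUs, ephemeral container volume, optional persistent disk).

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class PodSpec:
    """Hypothetical pod resource bundle (all names are illustrative)."""
    image: str                                  # any container image, e.g. a CUDA/PyTorch base
    vcpus: int                                  # vCPU count
    memory_gb: int                              # RAM in GiB
    gpus: int = 0                               # GPUs are optional
    container_disk_gb: int = 20                 # ephemeral volume for OS and temp storage
    persistent_disk_gb: Optional[int] = None    # optional disk for stateful workloads

    def is_stateful(self) -> bool:
        # A pod is stateful only if it attaches a persistent disk.
        return self.persistent_disk_gb is not None

# Example: one GPU inference pod with a 100 GiB persistent disk attached.
spec = PodSpec(
    image="pytorch/pytorch:2.3.0-cuda12.1-cudnn8-runtime",
    vcpus=8,
    memory_gb=32,
    gpus=1,
    persistent_disk_gb=100,
)
```

A stateless batch job would simply omit `persistent_disk_gb`, leaving only the ephemeral container volume.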

Why Use Serverless Pods

  • Instant & Flexible Environment
    Quickly spin up environments for compute-intensive tasks such as ML training, data processing, or rendering.

  • Transparent Control
    Full customization of OS, software stack, networking, and storage.

  • Cost-Effective & Scalable
    Scale pods down to zero when not in use and pay only for active resources.
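The scale-to-zero billing model amounts to charging only for active runtime. As a rough sketch (the function name and the $1.20/hr rate are assumptions for illustration, not published pricing):

```python
def pod_cost(hourly_rate: float, active_seconds: float) -> float:
    """Bill only for the time the pod is actually running.

    A pod scaled down to zero accrues no compute charges,
    so cost is simply rate * active hours.
    """
    return hourly_rate * (active_seconds / 3600.0)

# A pod billed at $1.20/hr that ran two 15-minute batch jobs today:
total = pod_cost(1.20, active_seconds=2 * 15 * 60)   # 0.5 hours of runtime
```

Here the day's bill covers only 30 minutes of compute, regardless of how long the pod definition itself exists.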
