Modal Integration
Use Modal serverless GPUs with Hubify Labs for on-demand compute.
Note: Modal integration is currently in development. This page describes the planned functionality. RunPod is the recommended compute provider today.
Modal provides serverless GPU functions that run on-demand and bill per second. When available, Modal will complement RunPod by handling short-lived, bursty workloads.
Planned Features
Serverless GPU Functions
Instead of managing pods, deploy Python functions that run on GPU infrastructure:
```python
# hubify_functions.py
import modal

app = modal.App("hubify-lab")

@app.function(gpu="H100", timeout=600)
def run_analysis(experiment_config: dict):
    """Runs on an H100 GPU, billed per second of execution."""
    import cobaya
    # Your analysis code here
    results = {}  # populate with your analysis output
    return results
```
When to Use Modal vs RunPod
| Use Case | Recommended | Why |
|---|---|---|
| Long MCMC chains (hours) | RunPod | Persistent pod is cheaper for long runs |
| Quick analysis (< 10 min) | Modal | Per-second billing, no pod overhead |
| Figure generation | Modal | Short task, fast cold start |
| Batch inference | Modal | Auto-scales across multiple GPUs |
| Interactive debugging | RunPod | SSH access, persistent environment |
| Training runs | RunPod | Stable long-running environment |
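The cost trade-off in the table can be made concrete with a small back-of-the-envelope calculation. The rates and the pod spin-up overhead below are illustrative placeholders, not actual Modal or RunPod pricing; check each provider's pricing page for real numbers.

```python
# Illustrative billing comparison -- all rates are hypothetical,
# NOT actual Modal or RunPod pricing.
import math

PER_SECOND_RATE = 0.001    # $/GPU-second (hypothetical serverless rate)
PER_MINUTE_RATE = 0.05     # $/GPU-minute (hypothetical pod rate)
POD_OVERHEAD_MIN = 5       # minutes of pod spin-up/idle time you still pay for

def serverless_cost(runtime_s: float) -> float:
    """Pay only for the seconds the function actually runs."""
    return runtime_s * PER_SECOND_RATE

def pod_cost(runtime_s: float) -> float:
    """Pay for whole minutes, rounded up, plus spin-up/idle overhead."""
    return (math.ceil(runtime_s / 60) + POD_OVERHEAD_MIN) * PER_MINUTE_RATE

short_task_s = 180        # 3-minute figure-generation task
long_run_s = 6 * 3600     # 6-hour MCMC chain

print(serverless_cost(short_task_s), pod_cost(short_task_s))  # 0.18 vs 0.40
print(serverless_cost(long_run_s), pod_cost(long_run_s))      # 21.6 vs 18.25
```

Under these assumed rates, the fixed pod overhead dominates for short tasks (serverless wins), while the lower per-minute rate dominates for long runs (the pod wins) — the same pattern the table recommends.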
Auto-Routing
When Modal is available, the orchestrator will automatically route experiments to the most cost-effective provider:
- Short task (< 10 min) → Modal (per-second billing)
- Long task (> 10 min) → RunPod (per-minute billing, cheaper at scale)
- Bursty parallel tasks → Modal (auto-scaling)
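The routing rules above can be sketched as a small decision function. This is a hypothetical illustration of the planned behavior, not the orchestrator's actual implementation; the threshold constant and provider names are assumptions.

```python
# Hypothetical sketch of the planned auto-routing logic -- not the
# actual Hubify orchestrator implementation.

THRESHOLD_S = 10 * 60  # routing threshold: 10 minutes, in seconds

def route(estimated_runtime_s: float, parallel_tasks: int = 1) -> str:
    """Pick a provider for an experiment based on the rules above."""
    # Bursty parallel workloads benefit from serverless auto-scaling.
    if parallel_tasks > 1:
        return "modal"
    # Short tasks avoid pod overhead with per-second billing.
    if estimated_runtime_s < THRESHOLD_S:
        return "modal"
    # Long-running chains are cheaper on a persistent pod.
    return "runpod"

print(route(120))                      # quick analysis -> modal
print(route(6 * 3600))                 # long MCMC chain -> runpod
print(route(300, parallel_tasks=32))   # batch inference -> modal
```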
Planned Setup
```bash
# Install Modal
pip install modal

# Authenticate
modal token new

# Connect to Hubify
hubify pod provider add modal --token $MODAL_TOKEN

# Deploy functions
hubify deploy functions --provider modal --file hubify_functions.py
```
Planned Configuration
```bash
# Set Modal as the default for short tasks
hubify pod config --short-task-provider modal

# Set the routing threshold
hubify pod config --modal-threshold 10m  # Tasks under 10 min → Modal

# View provider routing
hubify pod config --show-routing
```
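A threshold value like `10m` would need to be parsed into a duration before the router can use it. Here is a minimal sketch of such a parser; the accepted suffixes (`s`, `m`, `h`) are an assumption for illustration, not documented Hubify behavior.

```python
# Hypothetical duration-string parser for values like "10m" -- the
# accepted suffixes are an assumption, not documented Hubify behavior.
import re

def parse_threshold(value: str) -> int:
    """Convert a duration string ("90s", "10m", "2h") to seconds."""
    match = re.fullmatch(r"(\d+)([smh])", value.strip())
    if not match:
        raise ValueError(f"unrecognized duration: {value!r}")
    amount, unit = int(match.group(1)), match.group(2)
    return amount * {"s": 1, "m": 60, "h": 3600}[unit]

print(parse_threshold("10m"))  # 600
```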
Current Status
Modal integration is planned for Q3 2026. In the meantime:
- Use RunPod for all GPU compute
- Existing experiments will continue to run on RunPod unchanged
- When Modal support ships, the orchestrator will begin routing eligible tasks to Modal automatically; no migration is required
Sign Up for Updates
To be notified when Modal integration is available:
```bash
hubify notifications subscribe --feature modal-integration
```