hubify pod

Manage GPU pods — create, list, SSH into, and stop compute instances for running experiments.

Pods are the machines where experiments run. Currently powered by RunPod, with Modal serverless support coming soon.

GPU Types

| GPU | VRAM | Best For | Tier |
| --- | --- | --- | --- |
| h200 | 141 GB | Large-scale MCMC, foundation model inference, multi-survey sweeps | Premium |
| h100 | 80 GB | Training runs, medium MCMC chains, anomaly detection | Standard |
| a100 | 80 GB | General GPU compute, smaller training jobs | Standard |
| a100-40 | 40 GB | Light inference, prototyping | Economy |
| cpu | -- | Data preprocessing, LaTeX compilation, light analysis | Economy |
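The tier table can be turned into a quick lookup. A minimal sketch (the VRAM thresholds are copied from the table above; the helper itself is not part of hubify) that picks a GPU type for a given memory requirement:

```shell
# Rough helper: pick the cheapest tier that fits a VRAM requirement (GB).
# Thresholds come from the VRAM column above; this is a local convenience,
# not a hubify feature.
need=60
choice=$(awk -v need="$need" 'BEGIN {
  if (need <= 40)       print "a100-40 (Economy)"
  else if (need <= 80)  print "h100 or a100 (Standard)"
  else if (need <= 141) print "h200 (Premium)"
  else                  print "no single GPU fits"
}')
echo "$choice"
```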

hubify pod list

List all pods in the active lab.

hubify pod list
hubify pod list --status running
hubify pod list --costs

POD ID          GPU     STATUS    UPTIME     EXPERIMENTS   COST/HR
pod-o76k3jf     H200    running   14d 2h     12            $3.89
pod-abc1234     H100    running   2h 15m     1             $2.49
pod-def5678     H100    stopped   --         --            --

Options:

| Flag | Description | Default |
| --- | --- | --- |
| --status <status> | Filter: running, stopped, creating, error | All |
| --costs | Show cost breakdown | false |
| --json | Output as JSON | false |
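The plain-text output is easy to post-process directly. A sketch that sums COST/HR over running pods with awk; the sample rows are hard-coded from the output above so it runs offline, but in practice you would pipe in a live `hubify pod list --costs`:

```shell
# Sample rows copied from the `hubify pod list --costs` output above;
# replace the variable with the live command in real use.
pods='pod-o76k3jf     H200    running   14d 2h     12            $3.89
pod-abc1234     H100    running   2h 15m     1             $2.49
pod-def5678     H100    stopped   --         --            --'

# Sum COST/HR across running pods (strip the leading $ before adding).
total=$(echo "$pods" | awk '$3 == "running" { gsub(/\$/, "", $NF); t += $NF }
                            END { printf "%.2f", t }')
echo "running pods cost: \$${total}/hr"
```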

hubify pod create

Create and start a new GPU pod.

hubify pod create --gpu h100
hubify pod create \
  --gpu h200 \
  --name "mcmc-production" \
  --disk 200 \
  --image runpod/pytorch:2.1.0-py3.10-cuda12.1.0
hubify pod create --template mcmc-h100
hubify pod create --gpu h100 --idle-timeout 30m

Options:

| Flag | Description | Default |
| --- | --- | --- |
| --gpu <type> | GPU type (see table above) | Required |
| --name <name> | Pod display name | Auto-generated |
| --disk <gb> | Persistent disk size in GB | 50 |
| --image <image> | Docker image | runpod/pytorch:2.1.0-py3.10-cuda12.1.0 |
| --template <name> | Use a saved pod template | None |
| --idle-timeout <duration> | Auto-stop after an idle period (e.g., 30m, 2h) | No auto-stop |
| --spot | Use spot/interruptible pricing | false |

Warning: GPU pods bill by the hour while running. Stop pods with hubify pod stop when not in use, or set --idle-timeout to stop idle pods automatically. An idle H200 at $3.89/hr costs about $93/day.
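The $93/day figure is just the hourly rate times 24; the same arithmetic extrapolates a forgotten pod to a month:

```shell
# Idle-pod cost: hourly rate x 24, then x 30 for a rough monthly figure.
daily=$(awk 'BEGIN { printf "%.2f", 3.89 * 24 }')
monthly=$(awk 'BEGIN { printf "%.0f", 3.89 * 24 * 30 }')
echo "idle H200: \$${daily}/day, about \$${monthly}/month"
```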

hubify pod ssh

Open an SSH session into a running pod.

hubify pod ssh pod-o76k3jf
hubify pod ssh pod-o76k3jf --forward 8888:8888
hubify pod ssh pod-o76k3jf --command "nvidia-smi"

Options:

| Flag | Description | Default |
| --- | --- | --- |
| --forward <local:remote> | Port forwarding (repeatable) | None |
| --command <cmd> | Run a single command and exit | None |
| --key <path> | SSH key path | ~/.ssh/id_ed25519 |

The SSH connection uses the pod's assigned IP and port. These are stored in your lab config so you don't need to remember them.

Connecting to pod-o76k3jf (root@205.196.19.52:11452)...
root@pod-o76k3jf:~#
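The --command flag composes well with the list output: you can fan a one-shot check out across every running pod. In this sketch the pod IDs are hard-coded from the sample output above, and the commands are printed rather than executed so it is a safe dry run (pipe the result to sh to actually connect):

```shell
# Pod IDs and statuses copied from the sample `hubify pod list` output;
# in practice, feed in the live command instead.
pods='pod-o76k3jf  running
pod-abc1234  running
pod-def5678  stopped'

# Build a one-shot GPU check for each running pod. Printed, not executed,
# so this sketch is a dry run.
cmds=$(echo "$pods" | awk '$2 == "running" {
  print "hubify pod ssh " $1 " --command \"nvidia-smi\"" }')
echo "$cmds"
```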

hubify pod stop

Stop a running pod. Persistent disk is preserved.

hubify pod stop pod-abc1234
hubify pod stop --all
hubify pod stop pod-def5678 --terminate

Options:

| Flag | Description | Default |
| --- | --- | --- |
| --all | Stop all running pods | false |
| --confirm | Skip confirmation prompt | false |
| --terminate | Permanently destroy the pod and disk | false |

Note: stop preserves the pod's persistent disk so you can restart later. Use --terminate to permanently destroy the pod and its storage. Terminated pods cannot be recovered.
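If pods are regularly left running overnight, a scheduled stop can act as a backstop. A possible crontab entry using the documented --all and --confirm flags (this assumes hubify is on cron's PATH; use an absolute path otherwise):

```
# Crontab fragment (crontab -e): stop every running pod at 2:00 AM daily.
# --confirm skips the interactive prompt, which cron cannot answer.
0 2 * * * hubify pod stop --all --confirm
```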

hubify pod restart

Restart a stopped pod.

hubify pod restart pod-abc1234

hubify pod templates

Manage reusable pod configurations.

hubify pod templates
hubify pod templates save pod-o76k3jf --name "mcmc-h200"
hubify pod create --template mcmc-h200

Typical Workflow

# 1. Spin up a pod for an experiment
hubify pod create --gpu h100 --name "fisher-forecast" --idle-timeout 1h

# 2. Check it's running
hubify pod list --status running

# 3. SSH in for manual inspection
hubify pod ssh pod-abc1234

# 4. When done, stop to save costs
hubify pod stop pod-abc1234

# 5. For long-running work, save a template
hubify pod templates save pod-abc1234 --name "standard-h100"