# Core Concepts
The building blocks of Hubify Labs: labs, agents, experiments, papers, and knowledge.
Hubify Labs is built around a few key abstractions that map directly to how research actually works.
## Labs
A lab is your isolated research environment. It contains everything related to a research project: experiments, agents, papers, data, figures, and a public website.
Every lab has:
- A unique slug (e.g., `bigbounce`)
- Its own agent team
- A public site at `{slug}.hubify.app`
- Compute resources (GPU pods)
- A knowledge wiki
Labs are the top-level container. Everything else lives inside a lab.
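The contents of a lab can be sketched as a simple record. This is an illustrative sketch only — the field names and `Lab` class below are assumptions for exposition, not the actual Hubify Labs schema:

```python
from dataclasses import dataclass, field

@dataclass
class Lab:
    """Illustrative sketch of a lab's top-level contents (not the real schema)."""
    slug: str                                             # unique identifier, e.g. "bigbounce"
    agents: list[str] = field(default_factory=list)       # the lab's agent team
    experiments: list[str] = field(default_factory=list)  # experiment IDs
    gpu_pods: list[str] = field(default_factory=list)     # compute allocations
    wiki_pages: list[str] = field(default_factory=list)   # knowledge wiki entries

    @property
    def public_url(self) -> str:
        # Every lab gets a public site at {slug}.hubify.app
        return f"https://{self.slug}.hubify.app"

lab = Lab(slug="bigbounce")
print(lab.public_url)  # https://bigbounce.hubify.app
```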
## Agents
Hubify Labs uses a hierarchical multi-agent system:
| Role | Description | Reasoning Level |
|---|---|---|
| Orchestrator | Routes tasks, manages priorities, talks to Captain | High (Opus 4.6) |
| Lead Agents | Direct specific domains (research, papers, cosmology) | High (Opus 4.6) |
| Worker Agents | Execute specific tasks (figures, analysis, wiki updates) | Low (Haiku 4.5) |
The orchestrator routes work by reasoning level:
- High reasoning — strategy, peer review, paper writing → Orchestrator or Leads
- Medium reasoning — analysis, code generation → Leads or Workers
- Low reasoning — data processing, formatting → Workers
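The routing rule above amounts to a lookup from reasoning level to eligible agent tiers. A minimal sketch, assuming a simple "first eligible tier wins" policy (the table and tier names are illustrative, not the orchestrator's actual implementation):

```python
# Hypothetical routing table: reasoning level -> eligible tiers, in priority order.
ROUTING = {
    "high":   ["orchestrator", "lead"],  # strategy, peer review, paper writing
    "medium": ["lead", "worker"],        # analysis, code generation
    "low":    ["worker"],                # data processing, formatting
}

def route(task_reasoning_level: str) -> str:
    """Return the first eligible agent tier for a task's reasoning level."""
    tiers = ROUTING.get(task_reasoning_level)
    if tiers is None:
        raise ValueError(f"unknown reasoning level: {task_reasoning_level}")
    return tiers[0]

print(route("high"))  # orchestrator
print(route("low"))   # worker
```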
Cross-model peer review is mandatory. No echo chambers — reviews use GPT, Gemini, Grok, and Perplexity alongside Claude.
## Experiments
An experiment is a discrete research task with:
- A unique ID (e.g., `EXP-054`)
- Status: `queued` → `running` → `complete`/`failed`
- Assigned agent(s)
- GPU pod allocation
- Input data and output results
- A QC (quality control) gate
Experiments are the atomic unit of research progress. The Houston Method requires every experiment to pass a QC gate before results are accepted.
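The status lifecycle and QC gate can be sketched as a small state machine. Class and field names here are assumptions for illustration, not Hubify's API; only the status values and the "QC gate before acceptance" rule come from the text above:

```python
from dataclasses import dataclass

# Legal status transitions: queued -> running -> complete/failed
VALID_TRANSITIONS = {
    "queued": {"running"},
    "running": {"complete", "failed"},
}

@dataclass
class Experiment:
    """Illustrative experiment record (field names are assumptions)."""
    exp_id: str              # e.g. "EXP-054"
    status: str = "queued"
    qc_passed: bool = False

    def transition(self, new_status: str) -> None:
        if new_status not in VALID_TRANSITIONS.get(self.status, set()):
            raise ValueError(f"illegal transition: {self.status} -> {new_status}")
        self.status = new_status

    def accept_results(self) -> bool:
        # The Houston Method: results count only after the QC gate passes.
        return self.status == "complete" and self.qc_passed

exp = Experiment("EXP-054")
exp.transition("running")
exp.transition("complete")
exp.qc_passed = True
print(exp.accept_results())  # True
```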
## Papers
The paper pipeline takes research from raw results to arXiv-ready PDF:
- Results from experiments feed into paper sections
- Lead agents draft and review sections
- Cross-model peer review catches errors
- LaTeX compilation produces the PDF
- Figures are auto-generated and placed
All papers use revtex4-2 (Physical Review D format) for consistency.
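The stages above can be sketched as an ordered pipeline. The stage names below are illustrative placeholders, not Hubify's actual function names; only the ordering mirrors the list above:

```python
# Hypothetical pipeline: each stage name is an assumption for illustration.
PAPER_PIPELINE = [
    "collect_experiment_results",  # results feed into paper sections
    "draft_and_review_sections",   # lead agents draft and review
    "cross_model_peer_review",     # GPT, Gemini, Grok, Perplexity alongside Claude
    "generate_and_place_figures",  # figures auto-generated and placed
    "compile_latex",               # revtex4-2 -> arXiv-ready PDF
]

def run_pipeline(paper: dict, stages=PAPER_PIPELINE) -> dict:
    """Record each stage as it runs; a real pipeline would transform paper state."""
    for stage in stages:
        paper = {**paper, "last_stage": stage}
    return paper

result = run_pipeline({"title": "Draft paper"})
print(result["last_stage"])  # compile_latex
```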
## Knowledge Base
Every lab has a Karpathy-style structured wiki that grows automatically:
- Entities (objects, surveys, instruments)
- Concepts (theories, methods, parameters)
- Sources (papers, datasets, catalogs)
- Comparisons (model A vs model B)
Agents update the wiki as they work. It becomes the lab's institutional memory.
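The four entry categories map naturally to an enum plus a record type. This is a sketch under assumed names (`EntryKind`, `WikiEntry`); the real wiki schema may differ:

```python
from dataclasses import dataclass
from enum import Enum

class EntryKind(Enum):
    # The four wiki entry categories described above.
    ENTITY = "entity"          # objects, surveys, instruments
    CONCEPT = "concept"        # theories, methods, parameters
    SOURCE = "source"          # papers, datasets, catalogs
    COMPARISON = "comparison"  # model A vs model B

@dataclass
class WikiEntry:
    """Illustrative wiki entry (not the real schema)."""
    title: str
    kind: EntryKind
    body: str = ""

entry = WikiEntry("DESI", EntryKind.ENTITY, "Spectroscopic survey instrument.")
print(entry.kind.value)  # entity
```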
## Compute
GPU compute is provisioned through:
- RunPod — H100/H200 pods for heavy computation (Phase 1, available now)
- Modal — Serverless GPU functions (coming soon)
The system auto-optimizes for total cost, not hourly rate: if a cheaper pod's longer runtime would cost more overall than a faster pod's shorter one, it picks the faster pod.
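The cost rule reduces to minimizing hourly rate × estimated runtime. A minimal sketch — the pod specs, prices, and relative speeds below are made-up illustrative numbers, not RunPod's actual pricing:

```python
def pick_pod(pods: list[dict], workload_gpu_hours: float) -> dict:
    """Choose the pod with the lowest total cost for the workload.

    total cost = hourly rate * (workload GPU-hours / relative speed)
    A pod that is cheaper per hour can still lose if its slower speed
    means many more hours of runtime.
    """
    def total_cost(pod: dict) -> float:
        return pod["hourly_usd"] * (workload_gpu_hours / pod["relative_speed"])
    return min(pods, key=total_cost)

# Illustrative numbers only:
pods = [
    {"name": "cheap-pod", "hourly_usd": 2.0, "relative_speed": 1.0},
    {"name": "fast-pod",  "hourly_usd": 3.5, "relative_speed": 2.0},
]
best = pick_pod(pods, workload_gpu_hours=10.0)
print(best["name"])  # fast-pod (3.5 * 5h = $17.50 beats 2.0 * 10h = $20.00)
```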