SCIENTIFIC DISCOVERY PLATFORM

The platform for
scientific discovery.

Thousands of datasets. Multi-agent multi-model peer review. GPU compute on demand. One coherent IDE — web, desktop, and CLI — all in sync.

Request early access
Private beta · independent researchers only
ORIGIN
I built this to run my own research — not as a side project, but as full infrastructure for an independent scientific program. After months of running it, I realized I'd built something other researchers needed.

— Houston Golden, Hubify Labs

THE PLATFORM

Everything in one place. Always in sync.

The full Discovery IDE — experiments, papers, agents, datasets, and GPU compute — in a single interface running in your browser, desktop, and terminal simultaneously.

app.hubify.com / labs / your-lab / experiments

hubify labs · your-lab · Captain: you
Sidebar: Projects 3 · Experiments 14 · Surveys 6 · Papers 2 · Figures 18 · Data 9 · Knowledge 24 · Agents 11

Experiments · 14 total · 1 running · 2 queued · + New experiment

ID        Title                                        Model               Duration
EXP-014   Cross-survey anomaly cross-match · 4.1σ     Claude Sonnet 4.6   2h 14m
EXP-013   Literature synthesis · arXiv + PubMed       Claude Haiku 4.5    38m
EXP-012   Statistical pipeline · spectral residuals   Claude Sonnet 4.6   running
EXP-011   Null hypothesis · randomised catalog        Claude Haiku 4.5    queued
EXP-010   Data normalisation · DESI DR1 + SDSS DR18   Claude Haiku 4.5    55m
EXP-009   Model comparison · χ² residual analysis     Claude Sonnet 4.6   1h 6m

● 1 running · 2 queued · 11 agents active · peer review: on

Orchestrator
You: Run anomaly detection on the full dataset. Flag anything above 3σ.
Orchestrator: Running EXP-012. Dispatched to Experiment Lead. ETA ~18 min.
Orchestrator: Found 4,291 candidates. Initiating cross-provider peer review.
Peer review · GPT-5.4: Methodology check passed. Statistical approach sound. Clarify σ threshold selection rationale.
You: Which surveys contributed the most signal?
Orchestrator: DESI DR1 62% · SDSS DR18 31% · LAMOST 7%.
Ask your lab anything…
3 SURFACES · 1 LAB

Your lab, wherever you work.

All three surfaces share the same state — same experiments, same agents, same data. Switch between them mid-session without losing anything.

app.hubify.com
Web IDE
Full research environment in the browser. Experiments, papers, agents, datasets, and GPU compute — accessible from any machine.
macOS · native
Desktop App
Native macOS app with faster performance, offline capability, and deep OS integration — keyboard shortcuts, notifications, file system access.
terminal
CLI · TUI
The full IDE in your terminal. Script automation, headless pipelines, SSH-friendly. Complete feature parity — not a reduced interface.
Full parity across all three. The same experiment can be launched from the web, inspected in the desktop app, and monitored from the CLI — simultaneously.
WHAT YOU CONNECT TO

Thousands of datasets.
Every domain in science.

First-class connectors to HuggingFace, NASA, arXiv, Wolfram, the K-Dense 250-database catalog, and the long tail of domain-specific archives. Pull data. Train models. Publish enhanced catalogs back.
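Outside the platform, that pull, train, publish loop looks roughly like this. A minimal sketch using the open-source HuggingFace datasets library; the dataset IDs and column names below are hypothetical placeholders, not real catalogs:

```python
# Minimal pull -> enrich -> publish sketch using the open-source
# HuggingFace `datasets` library. The dataset IDs and column names
# here are hypothetical placeholders, not real catalogs.
from datasets import load_dataset

# Pull: fetch a public catalog split.
catalog = load_dataset("your-org/spectra-catalog", split="train")

# Enrich: attach a toy anomaly score (stand-in for a trained model).
def add_score(row):
    row["anomaly_score"] = abs(row["flux_mean"] - row["flux_median"])
    return row

enhanced = catalog.map(add_score)

# Publish: push the enhanced catalog back to the Hub as a new dataset.
enhanced.push_to_hub("your-org/spectra-catalog-enhanced")
```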

10,000+ datasets · HuggingFace · NASA · arXiv · Wolfram · ESO · NOAA · NIH · the K-Dense catalog · and the long tail.
250+ live database connectors · PubMed · ChEMBL · UniProt · SDSS · Gaia · DESI · SEC EDGAR · FRED · BioPython · BioServices · and more.
200+ scientific data formats · FITS · HDF5 · BAM · VCF · mzML · DICOM · CIF · GeoTIFF · NWB · Zarr · Parquet · and the rest of the zoo.
14 scientific domains · Genomics · astronomy · chemistry · materials · neuroscience · imaging · geospatial · and beyond.
HuggingFace · 1.5M+ models
NASA · MAST · ADS · IPAC
arXiv · 2.4M+ papers
Wolfram · Data Repository
PubMed · 36M+ citations
ChEMBL · 2.4M molecules
UniProt · 250M proteins
Gaia DR3 · 1.8B stars
SDSS · 2.3M spectra
DESI DR1 · 22.5M spectra
LAMOST · 11.4M spectra
eROSITA · 930K sources
Materials Project · 150K crystals
NOAA · climate · ocean
SEC EDGAR · filings · finance
+ 235 more long-tail archives
Genomics & Sequencing
Sequence & Phylogenetics
Chemistry & Molecular
Materials Science
Imaging & Pathology
Mass Spectrometry
Astronomy
Neuroscience
Single-Cell & Arrays
Geospatial & Climate
Cosmology
Particle Physics
Finance & Macro
Documents & Outputs
Built on the K-Dense scientific skills baseline · extended with HuggingFace · arXiv · Wolfram · NASA · and 235 more long-tail connectors.
WHO RUNS IT FOR YOU

11 agents. 4 model providers.
Zero echo chamber.

Every lab ships with a pre-wired team running 24/7 — 1 orchestrator, 4 domain leads, and 6 workers, plus 4 cross-provider reviewers. You stay in the Captain's seat. They handle the rest — and argue with each other before showing you anything.

Captain · 1 · the human
You · sets direction · approves work · owns all judgement calls · always in the driver's seat

Orchestrator · 1 · lab brain
Lab Orchestrator · routes work · synthesises results · runs 24/7 · Claude Opus 4.6

Leads · 4 · domain owners
Paper Lead · manuscripts · figures · Claude Sonnet 4.6
Experiment Lead · dispatch · pipelines · Claude Sonnet 4.6
Anomaly Lead · discoveries · novelty · Claude Sonnet 4.6
GPU Manager · routing · costs · pods · Claude Sonnet 4.6

Workers · 6 · executors
DataLoader · ingest · cleanup · Claude Haiku 4.5
Coder · scripts · pipelines · Claude Haiku 4.5
Analyst · stats · viz · Claude Haiku 4.5
Wiki-Worker · knowledge graph · Claude Haiku 4.5
+ 2 more specialised executors · Haiku · Sonnet
Fully configurable. Swap any model, add domain-specific agents, adjust routing, wire in external tools. Build the team your research needs.
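A sketch of what that configuration could look like as data. Hubify's actual schema isn't public, so every key, agent name, and model ID below is an illustrative assumption:

```python
# Hypothetical lab-team config. Hubify's real schema isn't public;
# all keys, agent names, and model IDs here are illustrative only.
lab_team = {
    "orchestrator": {"model": "claude-opus-4-6"},
    "leads": {
        "paper":      {"model": "claude-sonnet-4-6"},
        "experiment": {"model": "claude-sonnet-4-6"},
        # Swap a lead onto a different provider:
        "anomaly":    {"model": "gemini-3.1"},
        "gpu":        {"model": "claude-sonnet-4-6"},
    },
    "workers": {
        "dataloader": {"model": "claude-haiku-4-5"},
        "coder":      {"model": "claude-haiku-4-5"},
        # Add a domain-specific agent with its own skills:
        "spectro-qc": {"model": "claude-haiku-4-5",
                       "skills": ["fits-io", "spectral-stats"]},
    },
    # Adjust routing: which agent owns which class of work.
    "routing": {"gpu_jobs": "leads.gpu", "manuscripts": "leads.paper"},
}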
ADVERSARIAL PEER REVIEW

AI assistants, by design,
tell you what you want to hear.

The sycophancy problem in AI research is real. Single-model setups validate weak hypotheses, overlook methodological flaws, and amplify confidence when it should be questioned — because they are trained to be helpful, not adversarial. When your orchestrator and your reviewer are the same model, you are in an echo chamber.

The only antidote is adversarial review from models trained by different teams, with different architectures, different inherent biases, and explicit instructions to find problems — not confirm them. If all four independent reviewers agree on something, it's worth publishing. If any one flags it, it goes back.
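The gate this paragraph describes reduces to a unanimity rule. A minimal sketch in Python, with stubbed verdicts standing in for the four live providers:

```python
# Sketch of the unanimous gate described above: publish only if every
# cross-provider reviewer passes; any single flag sends the work back.
# Verdicts are stubbed; in the platform they come from four providers.
from dataclasses import dataclass

@dataclass
class Review:
    reviewer: str
    passed: bool
    notes: str

def gate(reviews):
    flags = [r for r in reviews if not r.passed]
    if not flags:
        return "publish"  # all reviewers agree: worth publishing
    # Any one flag returns the work with the objections attached.
    return "revise · " + " · ".join(f"{r.reviewer}: {r.notes}" for r in flags)

verdicts = [
    Review("GPT-5.4", True, "methodology sound"),
    Review("Gemini 3.1", True, "no internal contradictions"),
    Review("Claude Skeptic", False, "alternative explanation not ruled out"),
    Review("Sonar Pro", True, "citations verified"),
]
print(gate(verdicts))
# -> revise · Claude Skeptic: alternative explanation not ruled out
```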

OpenAI · GPT-5.4 · Methodology & statistical review
  • Ill-posed experimental designs
  • Statistical sins (p-hacking, underpowered tests)
  • Unstated assumptions in the methodology
Google · Gemini 3.1 · Long-context cross-check
  • Internal contradictions across the full paper
  • Claims that conflict with cited sources
  • Inconsistencies between methods and results
Anthropic · Claude Skeptic · Devil's advocate
  • The most obvious alternative explanation you didn't consider
  • Overconfident conclusions from limited data
  • Theory assumptions that haven't been earned
Perplexity · Sonar Pro · Live fact verification
  • Hallucinated citations and URLs
  • Numbers that don't match source material
  • Claims about prior work that are subtly wrong
Total: 11 lab agents + 4 cross-provider reviewers (Anthropic · OpenAI · Google · Perplexity). All auditable in the activity feed.
Read the agent architecture
WHAT YOU GET

Everything your research needs.
Nothing it doesn't.

Each lab is a self-contained discovery environment — agents, compute, a public site, and the publishing pipeline already wired up. Start with one lab. Add more as your research grows.

11 agents, pre-wired
1 orchestrator (Opus 4.6) + 4 leads + 6 workers + 4 cross-provider reviewers (GPT-5.4, Gemini 3.1, Claude Skeptic, Sonar Pro). All running, all auditable.
Always-on compute
GPU pods + serverless on demand. The orchestrator picks the cheapest credible target per job. Live credit monitoring. You never touch the compute provider directly.
Your own lab site
Your lab gets its own subdomain with a public site that auto-syncs from your research. Papers, figures, and experiments flow there automatically.
Paper generation pipeline
5-round autonomous publishing: mechanical QC, cross-model peer review, completeness audit, final visual pass, and arXiv packaging. Either passes all five rounds at 100% or rejects with a full report.
4-layer memory
User, agent, lab, and global memory. Agents read the right scope automatically. Preferences in user. Lab knowledge in lab. Cross-lab insights in global. See the sketch below this grid.
24/7 orchestrator
A small always-on machine runs your lab while you sleep. Cron jobs every 5 min. Standups 3× a day. Idle-GPU watchdog. The publish-ready loop runs overnight.
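Of the features above, the 4-layer memory is concrete enough to sketch: resolution walks from the narrowest scope outward. The store, keys, and values below are hypothetical, since Hubify's internals aren't public:

```python
# Illustrative sketch of 4-layer memory lookup (user -> agent -> lab
# -> global). The store, keys, and values below are hypothetical.
SCOPES = ("user", "agent", "lab", "global")

memory = {
    "user":   {"plot_style": "dark"},             # personal preferences
    "agent":  {"last_pipeline": "EXP-012"},       # per-agent working state
    "lab":    {"primary_survey": "DESI DR1"},     # shared lab knowledge
    "global": {"sigma_convention": "one-sided"},  # cross-lab insights
}

def recall(key):
    """Return the first hit, walking from the narrowest scope outward."""
    for scope in SCOPES:
        if key in memory[scope]:
            return scope, memory[scope][key]
    return None, None

print(recall("primary_survey"))  # -> ('lab', 'DESI DR1')
```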
Pricing — currently in private beta
Starter
Free
1 lab · community compute pool · all surfaces
Lab
TBD
Unlimited labs · reserved GPU hours · white-glove onboarding
Founding-member pricing is locked in at the rate you sign up at and will not increase. Pricing for new sign-ups may change during the beta.
WHY NOT JUST USE X?

Built for science.
Not retrofitted.

Jupyter is for notebooks. k-dense has incredible dataset coverage but no agents. feynman.is is CLI-first but has no GPU, no paper pipeline, and no memory. Hubify Labs was built from scratch for independent researchers who need real discoveries.

Capability · Hubify Labs · k-dense.ai · feynman.is · Jupyter / Colab
Multi-agent orchestration
Cross-model review (GPT-5.4 · Gemini 3.1 · Sonnet · Sonar)
GPU compute integration (H200, credits) · ~
Publish-ready loop (paper → arXiv → HuggingFace) · ~
Novelty scoring
4-layer agent memory
250+ scientific dataset connectors · ~
Scientific skills catalog · ~
Captain-configurable public lab site
Web + Desktop + CLI TUI · ~
Always-on orchestrator (24/7, no babysitting)

✓ full support  ·  ~ partial  ·  ✗ not supported  ·  Based on public information as of early 2026.

THE WINDOW

For the first time in history, one person
can do the work of a department.

I ran an autoencoder on 17.65 million spectra from DESI DR1 — every publicly available spectrum from their first data release. It found 195,829 objects that don't match any known pattern. 99.8% aren't in SIMBAD. The total compute cost was about $200.

The question that gnaws at me isn't whether the results are real. It's why nobody had done this before. Five things had to be true simultaneously — public data, cheap GPUs, capable AI agents, a cultural gap between ML and astronomy, and academic incentive structures that discourage this kind of work. All five are true right now.

They won't all be true forever. The window is roughly 2025–2027. Every week that passes without publishing is a week closer to someone else publishing first.

— Houston Golden, March 2026