Houston Method v2

The 9-step post-experiment protocol. Every experiment that finishes, pass or fail, triggers this mandatory completion loop; running the script is only Step 1 of 9. This page describes what the full protocol requires and why each step exists.

Why This Exists

"There must be more experiments. Do not accept 'complete' easily."

A script finishing is not a result. A result without cross-matching is not science. Science without site/paper updates is wasted work. The Houston Method enforces a loop that pushes every research path as far as it can go before marking anything complete.

The 9-Step Loop

RUN → QC → ANALYZE → INTERPRET → CONNECT → SYNC → EXPAND → BACKUP → COMPLETE
  ↑                                                    |
  └────────────────────────────────────────────────────┘

Step 7 (EXPAND) feeds new tasks back into Step 1. The queue never empties because every result generates new work.


Step 1: RUN

Execute the computation. Save raw outputs (parquet, JSON, FITS, CSV, figures).

Nothing else counts until the raw output exists.


Step 2: QC Gate

Automated checks run immediately after the experiment finishes.

Check                   Failure condition
Null coordinates        >5% of top anomalies at RA=0, Dec=0
Training quality        val_loss > 1,000 or no convergence
Cluster degeneracy      >80% of objects in a single cluster
Score explosion         max(anomaly_score) > 10^6
Spatial concentration   All top 20 anomalies within a 5° radius
Empty output            0 anomalies found
NaN/Inf values          Any NaN or Inf in scores or coordinates

If ANY check fails: mark needs-rerun, add a re-run task with the specific fix, move to the next experiment. Do not proceed to Step 3.
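The QC gate above can be sketched as a single pass/fail function. This is a minimal illustration, not the Hubify implementation: the function name and argument layout are assumptions, and the spatial-concentration check is omitted because it needs proper angular distances.

```python
import math

def qc_gate(scores, ra, dec, cluster_ids, val_loss, top_n=20):
    """Return the list of failed QC checks (empty list = pass).

    Thresholds follow the QC table above. The spatial-concentration
    check is omitted here because it needs angular-distance math.
    """
    failures = []
    if not scores:
        return ["empty output"]
    if any(not math.isfinite(v) for v in scores + ra + dec):
        return ["NaN/Inf values"]
    # Indices of the top-N anomalies by score
    top = sorted(range(len(scores)), key=lambda i: -scores[i])[:top_n]
    if sum(1 for i in top if ra[i] == 0 and dec[i] == 0) / len(top) > 0.05:
        failures.append("null coordinates")
    if val_loss > 1_000:
        failures.append("training quality")
    if cluster_ids:
        counts = {}
        for c in cluster_ids:
            counts[c] = counts.get(c, 0) + 1
        if max(counts.values()) / len(cluster_ids) > 0.80:
            failures.append("cluster degeneracy")
    if max(scores) > 1e6:
        failures.append("score explosion")
    return failures
```

Any non-empty return value maps to needs-rerun; the failure strings become the description of the re-run task.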


Step 3: ANALYZE

Scientific analysis of the results.

  • Cross-match top 100 anomalies against SIMBAD (2"), NED (5"), and survey-specific VizieR catalogs
  • Compute novelty fraction (% not in any catalog)
  • Spatial distribution — clustered (astrophysical) or uniform (instrumental)?
  • Score distribution — log-normal (physical) or power-law with cutoff (systematic)?
  • Classify anomaly types: QSO, star, galaxy, artifact, unknown

For CMB experiments: cross-match anomalous patches with known CMB features, check galactic foreground correlation, run multipole-by-multipole analysis, run null tests.
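The novelty fraction reduces to a set operation once the cone searches have run. A minimal sketch, assuming each catalog's matches have already been resolved back to anomaly IDs (the function name and data layout are illustrative):

```python
def novelty_fraction(anomaly_ids, catalogs):
    """Fraction of anomalies matched in no catalog (the Step 3 metric).

    `catalogs` maps catalog name (SIMBAD, NED, VizieR, ...) to the set
    of anomaly IDs its cone search matched; IDs here are illustrative.
    """
    matched = set().union(*catalogs.values()) if catalogs else set()
    return sum(1 for a in anomaly_ids if a not in matched) / len(anomaly_ids)
```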


Step 4: INTERPRET

Connect the result to the science.

  1. Does this improve the f_NL constraint?
  2. Does this test the birefringence prediction (β = 0.27°)?
  3. Does this constrain the NANOGrav GW spectrum (γ = 3.0)?
  4. Does this support or challenge the quintom w-crossing (w₀ + wₐ < −1)?
  5. Does this open a new observational channel for bounce cosmology?

Never stop at "null result." A negative result still teaches something. What constraints does it place on models? What does it open?


Step 5: CONNECT

How does this result relate to the rest of the program?

  • Cross-reference with every other completed survey
  • Update the bounce cosmology portfolio table (which channels strengthened/weakened?)
  • Update the f_NL sensitivity forecast (did σ improve?)
  • Check if any paper drafts need updates based on this finding

Step 6: SYNC

Update all affected website pages and paper drafts within 24 hours.

Pages that always need updating after any experiment: activity.html, status.html. Then check data-explorer.html, figures.html, paper.html, index.html, explained.html, and the relevant articles.

The rule: if a number, claim, or figure appears on any page and the underlying data changed, that page must be updated. Use grep to find all occurrences of changed values.

In Hubify, this means updating the relevant figures, papers, and activity feed in your lab.

# View which papers reference this experiment
hubify paper list --experiment EXP-001

# Add result to a paper section
hubify paper claim add paper-1 --evidence EXP-001
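The "grep for changed values" rule can be sketched as a small scanner. A hypothetical helper, with pages held as strings for illustration; in practice you would read the HTML files from disk, which is what `grep -l` across the site tree does:

```python
def pages_needing_update(pages, changed_values):
    """Return the pages whose text mentions any changed value.

    `pages` maps page name -> page text. Every page in the result must
    be re-synced before the experiment can be marked complete.
    """
    return sorted(
        name for name, text in pages.items()
        if any(str(v) in text for v in changed_values)
    )
```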

Step 7: EXPAND

Generate new tasks from this result. Every experiment should produce 5–15 new tasks.

Task types to generate:

  1. Cross-match tasks — one per other survey (N×N matrix)
  2. Deeper analysis — download spectra, detailed classification for top anomalies
  3. Runs on the runs — UMAP clustering, emission line extraction, photo-z estimation
  4. Architecture variants — re-run with transformer/VAE/contrastive if results are promising
  5. New dataset search — search for newly released public datasets (arxiv, survey calendars)
  6. Advanced simulations — MCMC or numerical simulation to refine theoretical predictions
  7. Paper integration — draft/update task if publishable
  8. Follow-up observation — prepare target list if discovery candidate

If you generated fewer than 5 tasks, you have not thought hard enough.

# Add follow-up experiments to the queue
hubify experiment queue add --from EXP-001 --type cross-match
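The expansion rule lends itself to a generator that refuses to return fewer than 5 tasks. A sketch under assumed names; the task wording and the `promising` flag are illustrative, but the cross-match rule (one task per other completed survey) follows the list above:

```python
def expand_tasks(experiment_id, other_surveys, promising=False):
    """Generate Step 7 follow-up tasks for a finished experiment."""
    # One cross-match task per other completed survey (the N x N matrix)
    tasks = [f"cross-match {experiment_id} x {s}" for s in other_surveys]
    tasks += [
        f"deeper analysis of top anomalies in {experiment_id}",
        f"UMAP clustering of {experiment_id} embeddings",
        "search for newly released public datasets",
    ]
    if promising:
        tasks += [
            f"re-run {experiment_id} with transformer variant",
            f"draft paper update citing {experiment_id}",
        ]
    assert len(tasks) >= 5, "fewer than 5 tasks: think harder (Step 7)"
    return tasks
```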

Step 8: BACKUP

Results must exist in 3+ locations before marking complete.

  1. scp results to local disk
  2. git add + commit + push (auto-deploys the lab site)
  3. Every 5 experiments: backup to Backblaze B2
  4. Every 10 experiments: upload model artifacts to HuggingFace
  5. Write checkpoint.json with full experiment metadata

# Backup outputs
hubify experiment backup EXP-001

# Verify backup locations
hubify experiment status EXP-001 --show-backups
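The 3+-locations rule is easy to verify mechanically: fetch each copy and compare digests. A minimal sketch; the function name and calling convention are assumptions:

```python
import hashlib

def backups_verified(copies):
    """True when 3+ copies of an artifact exist and agree byte-for-byte.

    `copies` holds the raw bytes of the same artifact fetched from each
    backup location (local disk, git remote, B2 mirror, ...).
    """
    if len(copies) < 3:
        return False
    # All copies must hash identically
    return len({hashlib.sha256(c).hexdigest() for c in copies}) == 1
```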

Step 9: COMPLETE

Only after steps 1–8 are done. Mark the experiment complete everywhere:

hubify experiment complete EXP-001

This updates the experiment status to complete, logs the completion to the activity feed, and triggers the site update hook.


Anti-Patterns

What happened                          Why it is not complete
Script finished running                That is Step 1 of 9
Results saved to disk                  That is Step 8 of 9 (backup only, not analysis)
"COMPLETE" badge added                 A badge without QC is a lie
Anomaly count reported                 A count without classification is meaningless
Cross-match done against one catalog   SIMBAD + NED + VizieR is the minimum
"Null result" reported                 What does the null result open? (Step 4)
No new tasks generated                 Think harder (Step 7)

Using the Houston Method in Hubify

The orchestrator applies the remaining steps automatically once an experiment passes the QC gate:

  1. Research Lead runs Steps 3–4 (analysis + interpretation)
  2. Paper Lead checks Steps 5–6 (connect + sync papers)
  3. Orchestrator runs Step 7 (generates queue expansion tasks)
  4. Backup agent runs Step 8
  5. Captain reviews Step 9 in the Discoveries view

You can also trigger a manual completion loop:

hubify experiment complete-loop EXP-001 --steps analyze,interpret,expand