Run the Publish-Ready Loop

This guide covers how to trigger the autonomous 5-round publish loop on a paper, what each round checks, how to read the preflight scorecard, and how to export the arXiv package.

The publish-ready loop is a 5-round autonomous review cycle that drives a paper from draft to submission-ready. Each round runs checks across multiple AI models, generates revision tasks, and updates the readiness scorecard. You review after all 5 rounds and decide when to export.

When to Run This

  • Paper readiness is ≥ 60% (Claims, Figures, and Bibliography columns all non-empty)
  • You have at least one completed experiment linked to the paper
  • All major results are in the claims table — do not start the loop with unverified claims
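
The 60% threshold can be checked with a small scripted pre-gate. This is a sketch: the `Readiness: NN%` line format (and a status command that would print it, e.g. `hubify paper status paper-1`) are assumptions for illustration, not documented output.

```bash
# Pre-check sketch: gate the loop on the 60% readiness threshold.
# The "Readiness: NN%" format is an assumption; substitute the real
# output of your status command.
status_line="Readiness: 72%"      # stand-in for the real status output
pct="${status_line//[!0-9]/}"     # keep only the digits -> "72"
if [ "$pct" -ge 60 ]; then
  echo "readiness ${pct}% >= 60%: safe to start the loop"
else
  echo "readiness ${pct}% < 60%: fill in Claims/Figures/Bibliography first"
fi
```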

Start the Loop

1. Go to **Papers** in the sidebar
2. Select your paper
3. Click **Publish-Ready Loop** in the paper toolbar
4. Set rounds to `5` (default)
5. Click **Start**

The loop runs in the background. You will see round-by-round status in the paper activity feed.

Or start the loop from the CLI:

```bash
hubify paper publish-loop paper-1 --rounds 5
```

Follow progress:
```bash
hubify paper loop-status paper-1 --follow
```

What Each Round Does

Round 1: Accuracy Pass

Checks every claim against the evidence experiments.

  • Are stated numbers consistent with the actual data outputs?
  • Are error bars computed correctly?
  • Are units consistent throughout?
  • Any overclaiming relative to the data?

Output: list of accuracy concerns, each tagged to the specific claim and section.

Round 2: Completeness Pass

Looks for gaps.

  • Missing citations for major claims
  • Analyses mentioned but not shown (figures or tables)
  • Standard methodology checks expected by the target journal (PRD, MNRAS, etc.)
  • Comparison with prior work — is the prior art properly characterized?

Output: list of missing elements with suggested additions.

Round 3: Clarity Pass

Reviews readability and structure.

  • Abstract covers all major results?
  • Introduction motivates the work clearly?
  • Results section flows logically from setup to findings?
  • Figures are self-contained (captions tell the full story)?
  • Jargon defined on first use?

Output: section-by-section clarity score + specific rewrites flagged for your review.

Round 4: Adversarial Pass

Tries to find ways to reject the paper.

This round acts as a skeptical referee: What is the weakest claim? What alternative explanation has not been ruled out? What systematic error could invalidate the main result? What is missing from the null tests section?

Output: list of referee-style concerns ordered by severity. High-severity items block submission.

Round 5: Final Preflight

Runs technical compilation checks and submission-readiness verification.

  • LaTeX compiles with 0 errors
  • All figures embedded (PDFLaTeX resolves every \includegraphics)
  • No undefined references (\ref{}, \cite{})
  • Bibliography complete (no "?" entries)
  • Page count within journal limit
  • Author list and affiliations present
  • arXiv metadata (title, abstract, keywords) complete

Output: the preflight scorecard.
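
Some of these checks can be reproduced locally before the loop runs. For example, undefined references surface as warnings in the LaTeX log; the sketch below writes a stand-in `ms.log` line so the grep is runnable as-is — point it at your real log in practice.

```bash
# Reproduce the undefined-reference check against a pdflatex log.
# The log line below is a stand-in; grep your real ms.log instead.
printf "LaTeX Warning: Reference \`fig:bounce' on page 3 undefined.\n" > ms.log
if grep -q "Warning: Reference .* undefined" ms.log; then
  echo "undefined references found -- fix before export"
else
  echo "no undefined references"
fi
```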


Reading the Preflight Scorecard

```
Paper: Observational Constraints on Bounce Cosmology
Loop: 5/5 rounds complete

Content:      ██████████  100%  (7/7 sections)
Figures:      ██████████  100%  (11/11 placed)
Bibliography: █████████░   90%  (57/63 resolved)
Claims:       ██████████  100%  (12/12 verified)
Compilation:  ██████████  100%  (0 errors)
Review:       ██████████  100%  (5/5 rounds)

Accuracy:     PASS (0 high-severity)
Completeness: PASS (2 low-severity suggestions)
Clarity:      PASS
Adversarial:  PASS (1 medium — recommend adding null test)
Preflight:    PASS

Overall Readiness: 97%  ✓ READY FOR EXPORT
```

READY FOR EXPORT means all high-severity issues are resolved and technical preflight passed. Medium-severity items are listed as suggestions, not blockers.

NOT READY means at least one of: Accuracy FAIL, Adversarial high-severity open, or Preflight FAIL. Resolve those items before exporting.
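
In a scripted pipeline it is convenient to branch on the export outcome. The sketch below assumes `hubify paper export` exits non-zero when the scorecard is NOT READY (an assumption — check your version's exit-code behavior); a stub stands in for the real CLI so the snippet runs as written.

```bash
# Gate a pipeline on the scorecard outcome. Assumption: the export
# command exits non-zero when the paper is NOT READY. The stub below
# stands in for the real CLI so this sketch is runnable.
hubify() { return 0; }   # stub: pretend the export succeeded

if hubify paper export paper-1 --format arxiv; then
  result="exported: upload the tarball to arXiv"
else
  result="blocked: resolve Accuracy/Adversarial/Preflight failures first"
fi
echo "$result"
```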


Resolving Loop Issues

After the loop, the paper issue list shows every item from all 5 rounds. Resolve them in order: accuracy first, then completeness, then adversarial.

```bash
# View all open issues
hubify paper issues paper-1

# Apply a specific fix
hubify paper revise paper-1 --apply issue-7

# Dismiss a low-severity item (with reason)
hubify paper issues paper-1 dismiss issue-12 --reason "addressed in supplementary"

# Re-run a specific round after fixes
hubify paper publish-loop paper-1 --round 4
```
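
A typical fix-and-verify cycle chains these commands: apply the fixes, then re-run the affected round plus the final preflight. The issue IDs below are placeholders, and the stub function echoes each call in place of the real CLI so the sketch runs as written.

```bash
# Fix-and-verify cycle: apply accuracy fixes, then re-run Round 1 and
# the final preflight (Round 5). Issue IDs are placeholders; the stub
# echoes each call in place of the real CLI.
hubify() { echo "ran: hubify $*"; }

for issue in issue-3 issue-7; do   # placeholder issue IDs
  hubify paper revise paper-1 --apply "$issue"
done
hubify paper publish-loop paper-1 --round 1
hubify paper publish-loop paper-1 --round 5
```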

Export the arXiv Package

Once the scorecard shows READY FOR EXPORT:

```bash
hubify paper export paper-1 --format arxiv
```

This creates `paper-1-submission.tar.gz` containing:

| File | Description |
| --- | --- |
| `ms.tex` | Main LaTeX source |
| `ms.pdf` | Compiled PDF |
| `figures/` | All figure files (PNG/PDF) |
| `references.bib` | Complete bibliography |
| `supplementary.tex` | Supplementary material (if any) |
| `arxiv-metadata.json` | Title, abstract, categories, authors |

Upload the .tar.gz directly to arxiv.org/submit.
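
Before uploading, it is worth sanity-checking the package contents against the table above. The sketch below builds a stand-in archive so the commands run as written; point `tar -tzf` at the real export in practice.

```bash
# Sanity-check the export against the expected file list. The stand-in
# archive makes this runnable; use the real export in practice.
mkdir -p pkg/figures
touch pkg/ms.tex pkg/ms.pdf pkg/references.bib pkg/arxiv-metadata.json
tar -czf paper-1-submission.tar.gz -C pkg .

listing=$(tar -tzf paper-1-submission.tar.gz)
for f in ms.tex ms.pdf references.bib arxiv-metadata.json; do
  echo "$listing" | grep -q "$f" && echo "found: $f" || echo "MISSING: $f"
done
```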


Best Practices

  • Lock claims before starting. The loop cannot verify unlinked claims. If a claim has no evidence experiment, it will fail Round 1 every time.
  • Run the loop end-to-end before making edits. Reading all 5 round outputs together gives you a prioritized edit list. Fixing between rounds means re-running earlier rounds.
  • Do not dismiss high-severity adversarial findings. They represent real referee risk. Fix them.
  • Re-run Round 5 after any bibliography changes. Citation resolution errors are the most common cause of a blocked export.