# Novelty Scoring
AI-powered assessment of how novel your findings are — from incremental to field-changing, calibrated against existing literature.
Novelty Scoring evaluates how novel a research finding is, calibrated against existing literature and known results. It helps you decide which results to pursue, which to publish, and which to set aside as incremental.
## How It Works
When an experiment produces a result, the novelty scorer:
- Extracts the key finding — What is the scientific claim?
- Searches existing literature — Has this been reported before? How does it compare?
- Evaluates significance — Statistical strength, theoretical implications, testability
- Cross-references the knowledge base — Does this connect to other findings in the lab?
- Assigns a score — 1 to 10 scale with a written justification
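The steps above can be sketched as a plain pipeline function. Everything here is a stub for illustration (the real system calls language models and a literature index); the names `NoveltyReport`, `extract`, `search_literature`, and `evaluate` are assumptions, not the actual API:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class NoveltyReport:
    finding: str            # the extracted scientific claim
    literature_matches: int # how many prior reports were found
    score: int              # 1-10
    rationale: str          # written justification

def score_result(result_text: str,
                 extract: Callable[[str], str],
                 search_literature: Callable[[str], int],
                 evaluate: Callable[[str, int], tuple[int, str]]) -> NoveltyReport:
    """Sketch of the scoring pipeline: extract the claim, search the
    literature, then evaluate significance and assign a 1-10 score."""
    finding = extract(result_text)                 # step 1: key finding
    matches = search_literature(finding)           # step 2: prior work
    score, rationale = evaluate(finding, matches)  # steps 3-5: score it
    return NoveltyReport(finding, matches, score, rationale)
```

Each stage is pluggable, which mirrors how the scorer swaps in different models for evaluation.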
## Scoring Scale
| Score | Label | Meaning |
|---|---|---|
| 1-2 | Incremental | Confirms known results. Marginal improvement over prior work. |
| 3-4 | Useful | New data point in a known area. Strengthens existing evidence. |
| 5-6 | Notable | Meaningfully extends the field. Worth a short paper or letter. |
| 7-8 | Significant | New constraint, prediction, or method. Worth a full paper. |
| 9-10 | Field-Changing | Challenges established paradigms. Requires immediate follow-up. |
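The score bands in the table map directly to labels; a minimal helper (the function name is illustrative, not part of any published API):

```python
def novelty_label(score: int) -> str:
    """Map a 1-10 novelty score to its label band from the scoring scale."""
    if not 1 <= score <= 10:
        raise ValueError("score must be between 1 and 10")
    # Upper bound of each band, in ascending order
    bands = [(2, "Incremental"), (4, "Useful"), (6, "Notable"),
             (8, "Significant"), (10, "Field-Changing")]
    for upper, label in bands:
        if score <= upper:
            return label
```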
### Example Scores
| Finding | Score | Rationale |
|---|---|---|
| "MCMC confirms H0 = 67.68 in standard LCDM" | 2 | Matches known value. No new physics. |
| "Matter bounce predicts f_NL = -4.375, testable by SPHEREx" | 8 | Parameter-free prediction. Falsifiable by 2027. Novel across all bounce models. |
| "Quintom-B favored at 2.3 sigma over LCDM" | 7 | Strong evidence for new physics, but not yet at discovery threshold. |
| "ALP birefringence prediction matches 3.6 sigma observation" | 9 | Predicted value (0.27 deg) matches independent observation (0.342 +/- 0.094 deg). |
## Using Novelty Scores
Novelty scores feed into several workflows:
- Experiment prioritization — Higher-novelty follow-ups get queued first
- Paper readiness — A paper's overall novelty influences publication priority
- Lab site highlights — High-novelty results are featured prominently on the public site
- Resource allocation — GPU time is prioritized toward high-novelty research directions
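Experiment prioritization, for instance, can be as simple as ordering the queue by score. The data and the `novelty` field name below are illustrative, not the actual record schema:

```python
experiments = [
    {"id": "EXP-031", "novelty": 2},
    {"id": "EXP-054", "novelty": 8},
    {"id": "EXP-047", "novelty": 7},
]

# Higher-novelty follow-ups get queued first
queue = sorted(experiments, key=lambda e: e["novelty"], reverse=True)
print([e["id"] for e in queue])  # → ['EXP-054', 'EXP-047', 'EXP-031']
```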
## Cross-Model Calibration
Novelty scoring uses cross-model evaluation to avoid inflated scores:
- The primary model scores the finding
- A second model from a different provider reviews the score
- If scores diverge by more than 2 points, a third model breaks the tie
- The final score is the median of all evaluations
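The tie-break logic above can be sketched in a few lines. Each argument stands in for a model call returning a 1-10 score (hypothetical interface; the real system invokes hosted models):

```python
from statistics import median

def calibrated_score(primary, reviewer, tiebreak) -> int:
    """Cross-model calibration: the primary model scores, a second model
    reviews, and a third breaks the tie if the first two diverge by more
    than 2 points. The final score is the median of all evaluations."""
    scores = [primary(), reviewer()]
    if abs(scores[0] - scores[1]) > 2:
        scores.append(tiebreak())
    # With two scores, median() averages them, so round back to an integer
    return round(median(scores))
```

Using the median rather than the mean keeps a single inflated evaluation from dragging the final score up.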
## CLI

```shell
# Score a specific experiment's results
hubify experiment score EXP-054

# View novelty scores for all experiments
hubify experiment list --sort novelty

# Get a detailed novelty report
hubify experiment score EXP-054 --verbose
```
## API

```shell
curl "https://api.hubify.com/v1/labs/bigbounce/experiments/EXP-054/novelty" \
  -H "Authorization: Bearer $HUBIFY_API_KEY"
```

Example response:

```json
{
  "experiment_id": "EXP-054",
  "score": 8,
  "label": "Significant",
  "finding": "Matter bounce predicts f_NL = -4.375, parameter-free and testable by SPHEREx",
  "rationale": "Parameter-free prediction distinguishes bounce from inflation. SPHEREx forecast shows 4.7-12 sigma detection by 2027. No prior work has derived this specific value.",
  "literature_matches": 3,
  "reviewed_by": ["claude-opus-4-6", "gpt-5.4", "gemini-2.5-pro"]
}
```