MCP Prompts
Pre-built MCP prompts for research analysis, paper writing, experiment design, and review workflows.
The Hubify MCP server includes pre-built prompts that guide AI assistants through common research workflows. These prompts include structured instructions and automatically inject relevant lab context.
Available Prompts
analyze_experiment
Guides the assistant through a structured analysis of experiment results.
Arguments:
| Name | Type | Required | Description |
|---|---|---|---|
| experiment_id | string | Yes | Experiment to analyze |
What it does:
- Reads experiment results and QC report
- Compares results to prior experiments
- Identifies significant findings and anomalies
- Proposes follow-up experiments (5-15 per the Houston Method)
- Suggests knowledge base updates
Example invocation:
Use the analyze_experiment prompt for EXP-054
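Under the hood, MCP hosts fetch a prompt with a `prompts/get` JSON-RPC request. A minimal sketch of the message a host would send for the invocation above (the request `id` is arbitrary, and per the MCP specification all prompt argument values are passed as strings):

```python
import json

# JSON-RPC 2.0 request an MCP host sends to fetch the prompt.
# Per the MCP spec, prompt argument values are strings.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "prompts/get",
    "params": {
        "name": "analyze_experiment",
        "arguments": {"experiment_id": "EXP-054"},
    },
}

print(json.dumps(request, indent=2))
```

The server responds with the rendered prompt messages, including any injected lab context, which the host then feeds to the model.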
draft_section
Guides the assistant through drafting a paper section.
Arguments:
| Name | Type | Required | Description |
|---|---|---|---|
| paper_id | string | Yes | Target paper |
| section | string | Yes | Section name (e.g., results, discussion) |
What it does:
- Reads the paper outline and claims table
- Fetches relevant experiment results
- Searches the knowledge base for supporting context
- Generates a section draft in LaTeX
- Cross-references claims against evidence
design_experiment
Helps design a new experiment with proper configuration.
Arguments:
| Name | Type | Required | Description |
|---|---|---|---|
| goal | string | Yes | What the experiment should investigate |
What it does:
- Reviews the lab's research history for related work
- Suggests experimental parameters and methodology
- Recommends GPU type and estimated runtime
- Defines QC gate criteria
- Generates an `experiment.yaml` config file
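The generated config might look like the sketch below; every field name here is an illustrative assumption, not the actual Hubify `experiment.yaml` schema:

```yaml
# Illustrative sketch only; field names are assumptions, not the
# documented Hubify schema.
goal: "What the experiment should investigate"
method:
  parameters:
    learning_rate: 0.001
compute:
  gpu_type: A100         # suggested GPU type
  estimated_runtime: 4h  # estimated runtime
qc_gates:
  - metric: validation_loss
    threshold: 0.05
```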
review_results
Performs a structured review of recent results.
Arguments:
| Name | Type | Required | Description |
|---|---|---|---|
| since | string | No | Time period (default: 7d) |
What it does:
- Lists all experiments completed in the time period
- Summarizes findings and their significance
- Identifies patterns across experiments
- Flags results that contradict expectations
- Recommends priority adjustments to the task queue
knowledge_synthesis
Synthesizes knowledge base entries on a topic into a coherent summary.
Arguments:
| Name | Type | Required | Description |
|---|---|---|---|
| topic | string | Yes | Topic to synthesize |
What it does:
- Searches the knowledge base for relevant entries
- Combines entity, concept, source, and comparison entries
- Produces a structured summary with citations
- Identifies gaps in the knowledge base
- Suggests new entries to create
Using Prompts
In Claude Code or other MCP-compatible assistants, you can invoke prompts directly:
Use the design_experiment prompt with goal "Test whether quintom-B dark energy model
fits DESI BAO data better than LCDM"
The assistant receives the prompt template with your lab's data injected, then follows the structured workflow to produce high-quality output.
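Before invoking a prompt, an MCP host discovers what the server offers; per the MCP specification this is a `prompts/list` request. A minimal sketch of that message:

```python
import json

# Standard MCP discovery request: the server answers with a "prompts"
# array listing each prompt's name, description, and arguments.
list_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "prompts/list",
}

print(json.dumps(list_request))
```

Most hosts handle this exchange automatically, so in practice you only need the natural-language invocation shown above.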
Custom Prompts
You can add custom prompts to the MCP server:
hubify mcp prompt add \
--name "weekly_summary" \
--template "Summarize all experiments from the past week. Include: results, cost, QC outcomes, and recommended next steps." \
--args "weeks:number:1"
Custom prompts are stored in your lab configuration and available to all MCP hosts.
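The `--args` value above suggests a `name:type:default` syntax; that reading is an inference from the example, not documented behavior. In the MCP protocol itself, prompt arguments are advertised with only a name, an optional description, and a required flag, so the type and default would have to be enforced server-side. A hypothetical sketch of how the custom prompt might surface to hosts:

```python
# Hypothetical sketch: how the weekly_summary prompt might appear in a
# prompts/list response. MCP prompt arguments carry only name,
# description, and a required flag; the "number" type and default of 1
# from the CLI's "weeks:number:1" spec are assumptions about
# server-side handling.
prompt_descriptor = {
    "name": "weekly_summary",
    "description": "Summarize all experiments from the past week.",
    "arguments": [
        {"name": "weeks", "required": False},
    ],
}

print(prompt_descriptor["name"])
```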