Agent Configuration

Configure your multi-agent team — choose models, define specialties, set up cross-model review, and tune routing.

Every lab has an agent team that you can customize to match your research domain. This guide covers how to add, remove, and configure agents.

Default Agent Team

When you create a lab from a template, you get a pre-configured team:

| Agent | Role | Default Model | Specialty |
| --- | --- | --- | --- |
| Orchestrator | Router + manager | Claude Opus | Task routing, priority management |
| Research Lead | Research strategy | Claude Opus | Domain-specific research |
| Paper Lead | Manuscript management | Claude Opus | Writing, citations, claims |
| Analysis Worker | Data analysis | Claude Sonnet | Statistics, data processing |
| Figure Worker | Visualization | Claude Haiku | Plot generation, formatting |
| Data Worker | Data management | Claude Haiku | Ingestion, transformation, wiki |

Adding Agents

1. Go to your lab's **Agents** view
2. Click **Add Agent**
3. Configure:
   - **Name** — Descriptive name (e.g., "Cosmology Lead")
   - **Role** — Lead or Worker
   - **Model** — The AI model this agent uses
   - **Specialty** — Free-text description of the agent's domain expertise
4. Click **Create**


```bash
# Add a specialized lead
hubify agent add \
  --role lead \
  --name "Cosmology Lead" \
  --model claude-opus \
  --specialty "MCMC analysis, CMB power spectra, dark energy, bounce cosmology"

# Add a worker
hubify agent add \
  --role worker \
  --name "LaTeX Worker" \
  --model claude-haiku \
  --specialty "LaTeX compilation, bibliography management, figure placement"
```

Choosing Models

Select models based on the reasoning level required:

| Reasoning Level | Recommended Models | Cost |
| --- | --- | --- |
| High | Claude Opus, GPT-4o | $$$ |
| Medium | Claude Sonnet, GPT-4o-mini | $$ |
| Low | Claude Haiku, GPT-3.5-turbo | $ |

Note: Leads can do work themselves (not just delegate). A Claude Opus lead can handle complex analysis directly, while a Haiku worker handles formatting tasks. Match the model to the agent's responsibilities.
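To make the tier guidance above concrete, here is a hypothetical sketch (illustrative names only, not hubify internals) of mapping an agent's responsibility to a reasoning tier and picking a recommended model for that tier:

```python
# Illustrative sketch: choose a model by matching an agent's
# responsibility to the reasoning tier it requires. The tier and
# responsibility names are assumptions for this example.
MODEL_TIERS = {
    "high": ["claude-opus", "gpt-4o"],
    "medium": ["claude-sonnet", "gpt-4o-mini"],
    "low": ["claude-haiku", "gpt-3.5-turbo"],
}

RESPONSIBILITY_TIER = {
    "research strategy": "high",
    "task routing": "high",
    "statistics": "medium",
    "plot formatting": "low",
}

def pick_model(responsibility: str) -> str:
    """Return the first recommended model for the responsibility's tier.

    Unknown responsibilities default to the medium tier.
    """
    tier = RESPONSIBILITY_TIER.get(responsibility, "medium")
    return MODEL_TIERS[tier][0]

print(pick_model("research strategy"))  # claude-opus
print(pick_model("plot formatting"))    # claude-haiku
```

In practice you would pass the chosen model to `hubify agent add --model …` as shown earlier.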

Cross-Model Review Setup

Cross-model review is mandatory. Configure which external models participate:

```bash
# Add GPT-5.4 as a reviewer
hubify agent review-config --add-reviewer gpt-5.4 --api-key $OPENAI_API_KEY

# Add Gemini 2.5 Pro as a reviewer
hubify agent review-config --add-reviewer gemini-2.5-pro --api-key $GOOGLE_API_KEY

# View review configuration
hubify agent review-config --show
```

The system automatically assigns reviewers from a different model family than the author:

  • Claude output is reviewed by a GPT or Gemini model
  • GPT output is reviewed by a Claude or Grok model
  • Gemini output is reviewed by a Claude or GPT model
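The pairing rule above can be sketched as a small filter. This is an illustrative Python sketch (the family table and function names are assumptions, not hubify's implementation):

```python
# Illustrative sketch: a reviewer is eligible only if its model family
# differs from the author's. Family assignments are assumptions.
MODEL_FAMILY = {
    "claude-opus": "anthropic",
    "claude-sonnet": "anthropic",
    "gpt-4o": "openai",
    "gemini-2.5-pro": "google",
    "grok-3": "xai",
}

def eligible_reviewers(author_model: str, reviewers: list[str]) -> list[str]:
    """Keep only reviewers from a different family than the author."""
    family = MODEL_FAMILY[author_model]
    return [r for r in reviewers if MODEL_FAMILY[r] != family]

print(eligible_reviewers(
    "claude-opus",
    ["claude-sonnet", "gpt-4o", "gemini-2.5-pro"],
))
# ['gpt-4o', 'gemini-2.5-pro']
```

Note that a same-family sibling (here `claude-sonnet`) is excluded even though it is a different model, which is the point of cross-model review.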

Updating Agents

```bash
# Change an agent's model
hubify agent update "Analysis Worker" --model claude-sonnet

# Update specialty
hubify agent update "Research Lead" --specialty "Galaxy surveys, anomaly detection, statistical methods"

# Disable an agent temporarily
hubify agent update "Figure Worker" --status inactive

# Remove an agent
hubify agent remove "Data Worker"
```

Viewing the Team

```bash
# List all agents
hubify agent list

# Tree view showing hierarchy
hubify agent list --tree

# View agent metrics
hubify agent metrics "Research Lead"
```

Example `hubify agent list` output:

```
AGENT             ROLE      MODEL         STATUS   TASKS   QC RATE
Orchestrator      orch      claude-opus   active   342     N/A
Research Lead     lead      claude-opus   active   128     94%
Paper Lead        lead      claude-opus   active   87      97%
Analysis Worker   worker    claude-sonnet active   203     91%
Figure Worker     worker    claude-haiku  active   156     88%
Data Worker       worker    claude-haiku  active   189     95%
```

Standup Configuration

Configure the orchestrator's standup schedule:

```bash
# Set standup times (3x daily recommended)
hubify agent update orchestrator --standup-schedule "8:00,12:00,18:00"

# Set timezone
hubify agent update orchestrator --timezone "America/Los_Angeles"
```

Standups summarize what happened since the last check-in, flag blockers, and recommend next actions.
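For intuition, the schedule string set above can be interpreted as follows. This is a hypothetical Python sketch of parsing the schedule and finding the next standup time; the function names are assumptions, not hubify internals:

```python
# Illustrative sketch: parse a comma-separated "H:MM" schedule and
# find the next standup at or after the current time.
from datetime import time

def parse_schedule(spec: str) -> list[time]:
    """Parse e.g. "8:00,12:00,18:00" into a sorted list of times."""
    times = []
    for part in spec.split(","):
        hour, minute = part.strip().split(":")
        times.append(time(int(hour), int(minute)))
    return sorted(times)

def next_standup(now: time, schedule: list[time]) -> time:
    """Return the next scheduled time, wrapping to tomorrow's first slot."""
    for t in schedule:
        if t >= now:
            return t
    return schedule[0]  # past the last slot: wrap to tomorrow

sched = parse_schedule("8:00,12:00,18:00")
print(next_standup(time(13, 30), sched))  # 18:00:00
```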

Auto-Scheduling

Enable auto-scheduling so the orchestrator picks the next experiment when pods are idle:

```bash
hubify agent update orchestrator --auto-schedule true
```

When enabled, the orchestrator monitors pod utilization and automatically deploys queued experiments to idle pods. This ensures GPUs are never sitting idle.
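The scheduling decision described above amounts to matching idle pods with queued experiments. A minimal Python sketch, assuming a simple FIFO queue (illustrative only, not the hubify implementation):

```python
# Illustrative sketch: assign queued experiments to idle pods in FIFO
# order. Pod and experiment names are made up for the example.
from collections import deque

def schedule(idle_pods: list[str], queue: deque) -> dict[str, str]:
    """Map each idle pod to the next queued experiment, FIFO order."""
    assignments = {}
    for pod in idle_pods:
        if not queue:
            break  # nothing left to deploy
        assignments[pod] = queue.popleft()
    return assignments

queue = deque(["exp-041", "exp-042", "exp-043"])
print(schedule(["pod-a", "pod-b"], queue))
# {'pod-a': 'exp-041', 'pod-b': 'exp-042'}
```

The real orchestrator would also weigh experiment priority and pod capabilities; this sketch only shows the idle-pod matching step.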

Best Practices

  • Use Opus for orchestrator and leads that make strategic decisions
  • Use Sonnet for workers that do analysis and code generation
  • Use Haiku for workers that do formatting, data transformation, and wiki updates
  • Always have at least two external model providers for cross-model review
  • Start with the default team and add specialists as your research demands grow