Documentation AI Agent Guide¶
Version: 2.1 | Last Updated: 2026-01-06 | Status: Stable
Overview¶
This guide explains how AI agents can use the MBPanel DocumentationAgent to create, query, and manage documentation programmatically. It covers prompt engineering, context building, and practical examples for AI-assisted documentation workflows.
Architecture¶
graph TB
AI[AI Agent] --> DA[DocumentationAgent]
DA --> DI[Dependency Injection]
DI --> GEN[Document Generators]
GEN --> ADR[ADR Generator]
GEN --> PRD[PRD Generator]
GEN --> TASK[Task Generator]
GEN --> ADD[ADD Generator]
GEN --> IMPL[Implementation Generator]
GEN --> ERR[Error Generator]
GEN --> TEST[Test Report Generator]
DA --> IDX[Cross-Reference Indexer]
IDX --> CTX[Context Retrieval]
CTX --> ADRDocs[ADR Documents]
CTX --> APIDocs[API Documents]
CTX --> TestDocs[Test Results]
CTX --> ErrDocs[Error Docs]
CTX --> ImplDocs[Implementation Notes]
DA --> CACHE[Build Cache]
CACHE --> VAL[Validation]
style AI fill:#e1f5ff
style DA fill:#fff4e1
style IDX fill:#e8f5e9
Quick Start for AI Agents¶
Initialization Pattern¶
from pathlib import Path
from tools.docs.agents import DocumentationAgent
from tools.docs.generators import (
    ADRGenerator,
    PRDGenerator,
    TaskGenerator,
    ADDGenerator,
    ImplementationGenerator,
    ErrorGenerator,
    TestReportGenerator,
)

# Initialize agent with docs root
docs_root = Path("/path/to/mbpanelapi/docs")
agent = DocumentationAgent(docs_root)

# Register all generators
agent.register_generator("adr", ADRGenerator(docs_root))
agent.register_generator("prd", PRDGenerator(docs_root))
agent.register_generator("task", TaskGenerator(docs_root))
agent.register_generator("add", ADDGenerator(docs_root))
agent.register_generator("implementation", ImplementationGenerator(docs_root))
agent.register_generator("error", ErrorGenerator(docs_root))
agent.register_generator("test_report", TestReportGenerator(docs_root))
Basic Usage Pattern¶
# Get context for a component
context = agent.get_context("sites")

# Create documentation based on context
if context["adrs"]:
    # Related ADRs exist, reference them
    related_adrs = [adr["title"] for adr in context["adrs"]]

# Create task
task_path = agent.create_task(
    title="Add site cloning feature",
    description="Implement site cloning with template selection",
    status="backlog",
    assignee="team-backend",
    priority="P1",
)
Prompt Engineering for Documentation¶
Principle: Structured Context¶
AI agents must provide structured, complete context when creating documentation. Avoid vague descriptions.
BAD PROMPT:

Create an ADR about session storage.

GOOD PROMPT:

Create an ADR with:
- Title: Use PostgreSQL for session storage
- Context: Current Redis-based sessions lack durability and ACID compliance.
  Sessions are lost on restart. Need persistent storage.
- Decision: Migrate to PostgreSQL with JSONB column for session data.
- Consequences: Migration required, improved durability, transactional integrity.
- Alternatives: 1) Memcached (faster but no persistence), 2) MongoDB (schema-less but less mature)
Principle: Evidence-Based Decisions¶
ADRs must include technical justification with evidence.
TEMPLATE:
Create an ADR for {DECISION} with:
Context:
- Current state: {DESCRIPTION}
- Problem: {SPECIFIC PROBLEM}
- Impact: {METRICS/ERRORS}
Decision:
- Choice: {TECHNOLOGY/APPROACH}
- Technical justification:
  - Performance: {BENCHMARKS}
  - Scalability: {LIMITS AND GROWTH}
  - Maintainability: {CODE EXAMPLE}
Consequences:
- Benefits: {SPECIFIC IMPROVEMENTS}
- Drawbacks: {KNOWN LIMITATIONS}
- Migration: {STEPS REQUIRED}
Alternatives:
1. {OPTION 1}
   - Pros: {LIST}
   - Cons: {LIST}
2. {OPTION 2}
   - Pros: {LIST}
   - Cons: {LIST}
Related Documents:
- {LINK TO RELATED ADR/PRD/TASK}
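For reference, here is how a filled-in version of this template might map onto agent.create_adr(). This is a minimal sketch reusing the session-storage example above; the alternatives dict shape follows the format shown in Workflow 3 later in this guide.

# Minimal sketch: the session-storage example above, passed to create_adr()
adr_path = agent.create_adr(
    title="Use PostgreSQL for session storage",
    context="Redis-based sessions lack durability; sessions are lost on restart.",
    decision="Migrate to PostgreSQL with a JSONB column for session data.",
    consequences="Migration required; improved durability and transactional integrity.",
    status="Proposed",
    alternatives=[
        {"name": "Memcached", "description": "In-memory cache",
         "pros": ["faster"], "cons": ["no persistence"]},
        {"name": "MongoDB", "description": "Document store",
         "pros": ["schema-less"], "cons": ["less mature for this use case"]},
    ],
)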
Principle: Measurable Success Criteria¶
PRDs must include quantifiable success metrics.
TEMPLATE:
Create a PRD for {FEATURE} with:
Problem Statement:
- User pain point: {SPECIFIC FRUSTRATION}
- Business impact: {REVENUE/RETENTION IMPACT}
- Current limitation: {WHAT'S MISSING}
Proposed Solution:
- Feature description: {DETAILED DESCRIPTION}
- User workflow: {STEP-BY-STEP USER EXPERIENCE}
- Technical approach: {HIGH-LEVEL IMPLEMENTATION}
Success Metrics (MUST BE QUANTIFIABLE):
- Performance: {SPECIFIC LATENCY/THROUGHPUT}
- Reliability: {UPTIME PERCENTAGE}
- Adoption: {USER ADOPTION RATE}
- Quality: {ERROR RATE/BUG COUNT}
Acceptance Criteria:
- [ ] {SPECIFIC, TESTABLE CRITERION}
- [ ] {SPECIFIC, TESTABLE CRITERION}
- [ ] {SPECIFIC, TESTABLE CRITERION}
Stakeholders:
- Product: {NAME}
- Engineering: {TEAM}
- Support: {TEAM}
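A filled-in PRD template maps onto agent.create_prd() the same way. This is a minimal sketch with illustrative values; the field names match the PRD example shown later in this guide.

# Minimal sketch: PRD template fields passed to create_prd()
prd_path = agent.create_prd(
    title="Durable session storage",
    problem="Sessions are lost on restart, forcing users to re-authenticate",
    solution="Persist sessions in PostgreSQL with a JSONB column",
    success_metrics=[
        "Zero session loss across restarts",
        "Session read latency < 10ms at P95",
        "99.9% session write success rate",
    ],
    status="Draft",
    priority="P1",
    stakeholders=["Product", "Engineering", "Support"],
)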
Principle: Actionable Tasks¶
Tasks must have clear acceptance criteria and ownership.
TEMPLATE:
Create a task for {WORK} with:
Title: {VERB NOUN DESCRIPTION}
Description:
- Objective: {GOAL STATEMENT}
- Approach: {IMPLEMENTATION STRATEGY}
- Dependencies: {WHAT MUST BE DONE FIRST}
Acceptance Criteria:
- [ ] {TESTABLE OUTCOME 1}
- [ ] {TESTABLE OUTCOME 2}
- [ ] {TESTABLE OUTCOME 3}
Related Documents:
- Parent PRD: {LINK}
- Related ADR: {LINK}
- Blocking tasks: {LINKS}
Assignee: {SPECIFIC PERSON/TEAM}
Priority: {P0/P1/P2/P3}
Status: {backlog/active/blocked/completed}
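And the task template maps onto agent.create_task(); a minimal sketch with illustrative values:

# Minimal sketch: task template fields passed to create_task()
task_path = agent.create_task(
    title="Migrate session storage to PostgreSQL",
    description=(
        "Objective: durable sessions. "
        "Approach: JSONB session table with transactional writes. "
        "Dependencies: schema migration must land first."
    ),
    status="backlog",
    assignee="team-backend",
    priority="P1",
)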
Context Building Patterns¶
Pattern 1: Component Analysis¶
When analyzing a component, gather complete context first.
def analyze_component(agent: DocumentationAgent, component: str) -> dict:
    """Gather complete context for a component."""
    # Get all documentation
    context = agent.get_context(component)

    # Analyze architecture decisions
    adrs = context.get("adrs", [])
    print(f"Found {len(adrs)} ADRs for {component}")

    # Check API documentation
    api_docs = context.get("api_docs", [])
    print(f"Found {len(api_docs)} API docs")

    # Review test coverage
    test_results = context.get("test_results", {})
    if test_results:
        coverage = test_results.get("coverage", 0)
        print(f"Test coverage: {coverage}%")

    # Check for errors
    errors = context.get("errors", [])
    active_errors = [e for e in errors if e.get("status") == "active"]
    print(f"Active errors: {len(active_errors)}")

    # Review implementation notes
    impl_notes = context.get("implementation_notes", [])
    print(f"Implementation notes: {len(impl_notes)}")

    # calculate_health_score, identify_gaps, and generate_recommendations
    # are helpers you supply; they are not part of the DocumentationAgent API
    return {
        "component": component,
        "health_score": calculate_health_score(context),
        "documentation_gaps": identify_gaps(context),
        "recommendations": generate_recommendations(context),
    }
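A short usage sketch in the style of the examples later in this guide (the three helper functions referenced above are yours to define):

# Usage (assumes calculate_health_score, identify_gaps, and
# generate_recommendations are defined in your own code)
agent = DocumentationAgent(Path("docs"))
report = analyze_component(agent, "sites")
print(f"Health score: {report['health_score']}")
for gap in report["documentation_gaps"]:
    print(f"Gap: {gap}")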
Pattern 2: Gap Analysis¶
Identify missing documentation by comparing code to docs.
def identify_documentation_gaps(agent: DocumentationAgent, component: str) -> list:
    """Find missing documentation for a component."""
    context = agent.get_context(component)
    gaps = []

    # Check for ADRs for architectural changes
    if not context.get("adrs"):
        gaps.append({
            "type": "adr",
            "reason": "No architecture decisions documented",
            # is_core_component is a helper you supply
            "priority": "high" if is_core_component(component) else "medium",
        })

    # Check for API documentation
    if not context.get("api_docs"):
        gaps.append({
            "type": "api_docs",
            "reason": "No API reference documentation",
            "priority": "high",
        })

    # Check for test documentation
    if not context.get("test_results"):
        gaps.append({
            "type": "test_report",
            "reason": "No test execution reports",
            "priority": "medium",
        })

    # Check for implementation notes
    if not context.get("implementation_notes"):
        gaps.append({
            "type": "implementation",
            "reason": "No implementation notes",
            "priority": "low",
        })

    return gaps
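A natural follow-up is to file a backlog task for each gap. This is a hedged sketch using create_task; the gap-to-priority mapping and the assignee are assumptions for illustration, not conventions from this guide.

# Sketch: turn documentation gaps into backlog tasks
agent = DocumentationAgent(Path("docs"))
priority_map = {"high": "P1", "medium": "P2", "low": "P3"}  # assumed mapping

for gap in identify_documentation_gaps(agent, "sites"):
    agent.create_task(
        title=f"Document sites: missing {gap['type']}",
        description=gap["reason"],
        status="backlog",
        assignee="team-docs",  # hypothetical assignee
        priority=priority_map[gap["priority"]],
    )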
Pattern 3: Dependency Mapping¶
Map relationships between documents and components.
def map_document_dependencies(agent: DocumentationAgent, component: str) -> dict:
    """Map how documents relate to each other."""
    context = agent.get_context(component)
    dependencies = {
        "upstream": [],    # Documents this depends on
        "downstream": [],  # Documents that depend on this
        "related": [],     # Related documents
    }

    # ADRs often reference PRDs and other ADRs
    for adr in context.get("adrs", []):
        related = adr.get("related_documents", [])
        dependencies["upstream"].extend(related)

    # Tasks reference PRDs and ADRs
    for task in context.get("tasks", []):
        # Tasks would have parent PRDs/ADRs
        pass

    return dependencies
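A short usage sketch, mirroring the usage examples in the semantic-search section below:

# Usage
agent = DocumentationAgent(Path("docs"))
deps = map_document_dependencies(agent, "sites")
print(f"Upstream documents: {len(deps['upstream'])}")
for doc in deps["upstream"]:
    print(f"  - {doc}")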
Semantic Search for AI Agents¶
Overview¶
The DocumentationAgent.semantic_search() method enables AI agents to find relevant documentation by meaning rather than exact keyword matching. It uses vector embeddings to measure semantic similarity.
When to Use Semantic Search¶
Use semantic_search() when:
- Discovering cross-domain relationships
- Finding ADRs by topic (e.g., "database architecture")
- Troubleshooting queries (e.g., "why do jobs fail?")
- Exploring unknown codebases
- Answering natural language questions

Use get_context() when:
- Retrieving all docs for a specific component
- Building comprehensive context
- The exact component name is known
- Structured retrieval is needed
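The two retrieval paths can sit behind a single entry point. This is a minimal sketch; the routing heuristic (single-token queries are treated as exact component names) is an assumption you may want to replace.

def retrieve_docs(agent: DocumentationAgent, query: str) -> dict:
    """Route a query to literal or semantic retrieval (heuristic assumed)."""
    if " " not in query:
        # Looks like an exact component name: structured retrieval
        return {"mode": "literal", "results": agent.get_context(query)}
    # Natural-language question: meaning-based retrieval
    return {"mode": "semantic", "results": agent.semantic_search(query, limit=5)}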
Basic Semantic Search¶
from pathlib import Path
from tools.docs.agents import DocumentationAgent

agent = DocumentationAgent(docs_root=Path("docs"))

# Semantic search by topic
results = agent.semantic_search("How do we handle multi-tenancy?", limit=5)

for result in results:
    print(f"{result['doc_id']}: {result['score']:.2f}")
    print(f"  {result['content'][:150]}...")
    print(f"  Type: {result['metadata'].get('type', 'unknown')}")
Pattern: Finding Related Architecture Decisions¶
def find_related_adrs(agent: DocumentationAgent, topic: str) -> list[dict]:
    """Find ADRs related to a topic using semantic search."""
    results = agent.semantic_search(topic, limit=10)

    # Filter to ADR documents only
    adr_results = [
        r for r in results
        if r['metadata'].get('type') == 'adr'
    ]
    return adr_results

# Usage
agent = DocumentationAgent(Path("docs"))
related_adrs = find_related_adrs(agent, "database connection pooling")

for adr in related_adrs:
    print(f"ADR: {adr['metadata']['title']}")
    print(f"  Relevance: {adr['score']:.2f}")
    print(f"  Path: {adr['doc_id']}")
Pattern: Cross-Domain Discovery¶
def discover_cross_domain_relationships(agent: DocumentationAgent, feature: str) -> dict:
    """Find how a feature relates across different domains."""
    results = agent.semantic_search(f"{feature} implementation", limit=20)

    domains = {}
    for result in results:
        domain = result['metadata'].get('domain', 'unknown')
        if domain not in domains:
            domains[domain] = []
        domains[domain].append(result)

    return domains

# Usage
agent = DocumentationAgent(Path("docs"))
relationships = discover_cross_domain_relationships(agent, "site creation")

for domain, docs in relationships.items():
    print(f"{domain}: {len(docs)} documents")
Pattern: Troubleshooting Assistant¶
def troubleshooting_search(agent: DocumentationAgent, error_description: str) -> dict:
    """Find relevant documentation for troubleshooting."""
    # Convert error description to semantic search query
    query = f"How to fix: {error_description}"
    results = agent.semantic_search(query, limit=10)

    # Prioritize by relevance score
    high_relevance = [r for r in results if r['score'] > 0.7]
    medium_relevance = [r for r in results if 0.5 < r['score'] <= 0.7]

    return {
        "high_confidence": high_relevance,
        "possible_solutions": medium_relevance,
    }

# Usage
agent = DocumentationAgent(Path("docs"))
solutions = troubleshooting_search(agent, "job timeout error")

print("High confidence solutions:")
for sol in solutions["high_confidence"]:
    print(f"  - {sol['doc_id']}: {sol['score']:.2f}")
Pattern: Context Building with Semantic Search¶
def build_semantic_context(agent: DocumentationAgent, query: str) -> dict:
    """Build comprehensive context using semantic search."""
    # Get semantically similar docs
    results = agent.semantic_search(query, limit=15)

    context = {
        "query": query,
        "total_results": len(results),
        "high_relevance": [],
        "medium_relevance": [],
        "by_type": {},
    }

    for result in results:
        # Categorize by relevance
        if result['score'] > 0.7:
            context['high_relevance'].append(result)
        elif result['score'] > 0.5:
            context['medium_relevance'].append(result)

        # Group by document type
        doc_type = result['metadata'].get('type', 'unknown')
        if doc_type not in context['by_type']:
            context['by_type'][doc_type] = []
        context['by_type'][doc_type].append(result)

    return context

# Usage
agent = DocumentationAgent(Path("docs"))
context = build_semantic_context(agent, "authentication and authorization")

print(f"Found {context['total_results']} results")
print(f"High relevance: {len(context['high_relevance'])}")
print(f"By type: {[(k, len(v)) for k, v in context['by_type'].items()]}")
Integration with Literal Search¶
def combined_search(agent: DocumentationAgent, query: str) -> dict:
    """Combine semantic and literal search for comprehensive results."""
    # Semantic search for meaning-based results
    semantic_results = agent.semantic_search(query, limit=10)

    # Literal search for exact component matches
    literal_context = agent.get_context(query.lower().replace(" ", "_"))

    # Merge results
    combined = {
        "semantic": semantic_results,
        "literal": literal_context,
        "recommendations": [],
    }

    # Generate recommendations based on both
    if not semantic_results and not literal_context.get('adrs'):
        combined['recommendations'].append(
            "No documentation found. Consider creating an ADR."
        )

    return combined
Semantic Search Use Cases for AI Agents¶
| Use Case | Query Example | Expected Results |
|---|---|---|
| Find ADRs by topic | "database architecture" | ADRs about PostgreSQL, caching |
| Troubleshooting | "job timeout errors" | Error docs, Celery config |
| Cross-domain | "site creation workflow" | Sites, backups, domains docs |
| Implementation | "how to add RBAC" | RBAC module, auth docs |
| Security | "XSS prevention" | Security ADRs, validation code |
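To sanity-check these expectations against your own index, the example queries can be run directly; a small sketch:

# Sketch: run the example queries from the table above
agent = DocumentationAgent(Path("docs"))
queries = [
    "database architecture",
    "job timeout errors",
    "site creation workflow",
    "how to add RBAC",
    "XSS prevention",
]
for query in queries:
    results = agent.semantic_search(query, limit=3)
    print(f"{query}: {len(results)} results")
    for r in results:
        print(f"  {r['doc_id']} ({r['score']:.2f})")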
Error Handling¶
def safe_semantic_search(agent: DocumentationAgent, query: str) -> list[dict]:
    """Semantic search with fallback for common errors."""
    try:
        results = agent.semantic_search(query, limit=5)
        if not results:
            print(f"No semantic results for: {query}")
            print("Try rephrasing or use agent.get_context() instead")
            return []
        return results
    except RuntimeError as e:
        if "not available" in str(e):
            print("Semantic search not available. Install dependencies:")
            print("  pip install sentence-transformers qdrant-client")
        elif "initialization failed" in str(e):
            print("Search engine initialization failed. Check Qdrant:")
            print("  docker compose ps qdrant")
        else:
            print(f"Error: {e}")
        return []
    except Exception as e:
        print(f"Unexpected error: {e}")
        return []
AI Agent Workflows¶
Workflow 1: Document After Implementation¶
Create documentation after code changes are complete.
def document_implementation(
    agent: DocumentationAgent,
    component: str,
    changes_description: str,
    technical_details: dict,
) -> dict:
    """Create documentation after implementing a feature."""
    # 1. Create implementation note
    impl_path = agent.create_implementation_note(
        component=component,
        title=f"Implementation: {changes_description}",
        description=technical_details["approach"],
        challenges=technical_details.get("challenges", []),
        lessons_learned=technical_details.get("lessons", []),
        author=technical_details.get("author", "AI Agent"),
    )

    # 2. Update or create task as completed
    task_title = f"Implement {changes_description}"
    # Find existing task or create new
    task_path = agent.create_task(
        title=task_title,
        description=changes_description,
        status="completed",
        assignee=technical_details.get("assignee", "team-backend"),
    )

    # 3. Document test results (report_path stays None if there are none)
    report_path = None
    if "test_results" in technical_details:
        report_path = agent.document_test_run(
            test_results_path=technical_details["test_results"],
            coverage_path=technical_details.get("coverage"),
        )

    return {
        "implementation_note": impl_path,
        "task": task_path,
        "test_report": report_path,
    }
Workflow 2: Error Discovery and Documentation¶
Document errors when they're discovered in logs or monitoring.
def document_error(
    agent: DocumentationAgent,
    error_message: str,
    stack_trace: str,
    context: dict,
) -> Path:
    """Create error documentation when a new error is discovered."""
    # extract_error_type, find_existing_error_doc, and
    # update_error_documentation are helpers you supply
    error_type = extract_error_type(error_message)

    # Check if already documented
    existing = find_existing_error_doc(agent, error_type)

    if existing:
        # Update existing error doc
        return update_error_documentation(existing, context)
    else:
        # Create new error doc
        return agent.create_error_doc(
            error_type=error_type,
            error_message=error_message,
            stack_trace=stack_trace,
            symptoms=context.get("symptoms", []),
            reproduction_steps=context.get("reproduction", []),
            initial_diagnosis=context.get("diagnosis", "Unknown"),
            status="active",
        )
Workflow 3: Architecture Decision Documentation¶
Create ADRs when making architectural changes.
def document_architecture_decision(
    agent: DocumentationAgent,
    decision: str,
    analysis: dict,
) -> Path:
    """Create ADR after architectural analysis."""
    # Gather alternatives considered
    alternatives = []
    for option in analysis.get("alternatives", []):
        alternatives.append({
            "name": option["name"],
            "description": option["description"],
            "pros": option.get("pros", []),
            "cons": option.get("cons", []),
        })

    # Find related documents
    related_docs = []
    if analysis.get("related_adrs"):
        for adr_id in analysis["related_adrs"]:
            related_docs.append({
                "title": f"ADR {adr_id}",
                "link": f"../../architecture/adr/{adr_id}",
                "type": "adr",
            })

    # Create ADR
    return agent.create_adr(
        title=decision,
        context=analysis["context"],
        decision=analysis["decision"],
        consequences=analysis["consequences"],
        status="Proposed",
        alternatives=alternatives,
        related_documents=related_docs,
    )
Workflow 4: Test Report Generation¶
Generate test reports after test runs.
def generate_test_report(
    agent: DocumentationAgent,
    test_results_path: str,
    coverage_path: str | None = None,
) -> dict:
    """Generate and analyze test report."""
    # Generate report
    report_path = agent.document_test_run(
        test_results_path=test_results_path,
        coverage_path=coverage_path,
    )

    # Analyze failures (load_test_results is a helper you supply)
    test_data = load_test_results(test_results_path)
    failures = test_data.get("tests", {}).get("failed", [])

    # Create tasks for failures
    failure_tasks = []
    for failure in failures:
        task_path = agent.create_task(
            title=f"Fix failing test: {failure['name']}",
            description=f"Test failed: {failure['error']}",
            status="backlog",
            assignee="test-engineer",
            # is_critical_test is a helper you supply
            priority="P1" if is_critical_test(failure) else "P2",
        )
        failure_tasks.append(task_path)

    return {
        "report_path": report_path,
        "failure_tasks": failure_tasks,
        "total_failures": len(failures),
    }
Context Building Examples¶
Example 1: Getting Component Context¶
from tools.docs.agents import DocumentationAgent
from pathlib import Path

agent = DocumentationAgent(Path("docs"))

# Register generators
from tools.docs.generators import ADRGenerator
agent.register_generator("adr", ADRGenerator(Path("docs")))

# Get context for sites component
context = agent.get_context("sites")

# Use context to inform decisions
if context["adrs"]:
    latest_adr = context["adrs"][0]
    print(f"Latest ADR: {latest_adr['title']}")
    print(f"Status: {latest_adr['status']}")

if context["errors"]:
    active_errors = [e for e in context["errors"] if e["status"] == "active"]
    if active_errors:
        print(f"WARNING: {len(active_errors)} active errors")
Example 2: Creating Linked Documentation¶
# Create PRD
prd_path = agent.create_prd(
    title="Site Cloning Feature",
    problem="Users need to duplicate sites with same configuration",
    solution="Add site cloning with template selection",
    success_metrics=[
        "Clone time < 30 seconds",
        "Zero data loss",
        "99.9% success rate",
    ],
    status="Draft",
    priority="P1",
    stakeholders=["Product", "Engineering"],
)

# Create ADR for technical approach
adr_path = agent.create_adr(
    title="Use database transactions for site cloning",
    context="Must ensure atomicity of clone operation",
    decision="Use database transactions with rollback on failure",
    consequences="Improved reliability, complex error handling",
    status="Proposed",
    related_documents=[
        {
            "title": "Site Cloning Feature",
            "link": f"../../{prd_path.parent.name}/{prd_path.name}",
            "type": "prd",
        }
    ],
)

# Create task for implementation
task_path = agent.create_task(
    title="Implement site cloning API endpoint",
    description="POST /api/v1/sites/{id}/clone",
    status="backlog",
    assignee="team-backend",
    priority="P1",
)
Example 3: Error Documentation Workflow¶
# After discovering error in logs
error_doc = agent.create_error_doc(
    error_type="DatabaseConnectionPoolExhausted",
    error_message="Pool exhausted: timeout waiting for connection",
    stack_trace="...",
    symptoms=[
        "API timeouts during peak load",
        "Increased response times",
        "Pool exhaustion errors in logs",
    ],
    reproduction_steps=[
        "Send 1000 concurrent requests",
        "Observe connection pool errors",
        "Check pool metrics in dashboard",
    ],
    initial_diagnosis="Connection pool size too small for traffic spike",
    status="active",
)

# After fixing the error, update the error document with the resolution.
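This guide does not show a dedicated update API; one hedged option, mirroring the Claude Code integration example later in this guide, is to re-document the error with status="resolved":

# Sketch: record the resolution by re-documenting with status="resolved"
# (assumption: re-creating the doc is acceptable; a dedicated update
# method, if your version exposes one, would be preferable)
agent.create_error_doc(
    error_type="DatabaseConnectionPoolExhausted",
    error_message="Pool exhausted: timeout waiting for connection",
    symptoms=["API timeouts during peak load"],
    initial_diagnosis="Pool size raised from 10 to 50; exhaustion no longer reproduces",
    status="resolved",
)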
Best Practices for AI Agents¶
1. Always Validate Before Creating¶
# Check if documentation already exists
def safe_create_adr(agent: DocumentationAgent, title: str, **kwargs):
    """Create ADR only if it doesn't already exist."""
    # Check for existing ADR with similar title
    context = agent.get_context("all")
    existing = [
        adr for adr in context.get("adrs", [])
        if title.lower() in adr["title"].lower()
    ]

    if existing:
        print(f"ADR already exists: {existing[0]['title']}")
        return None

    # Create new ADR
    return agent.create_adr(title=title, **kwargs)
2. Use Structured Metadata¶
# Always provide complete metadata
agent.create_task(
    title="Implement feature X",
    description="Detailed description with objective, approach, and dependencies",
    status="backlog",
    assignee="specific-team",               # Not "team" or "TBD"
    priority="P1",                          # Specific priority
    estimated_hours=8,                      # Custom field
    complexity="medium",                    # Custom field
    dependencies=["task-001", "task-002"],  # Linked tasks
)
3. Link Related Documents¶
# Always link to related documents
agent.create_adr(
    title="Decision X",
    context="...",
    decision="...",
    consequences="...",
    related_documents=[
        {
            "title": "PRD 001",
            "link": "../../prd/001-feature.md",
            "type": "prd",
        },
        {
            "title": "Task 012",
            "link": "../../task/active/012-implementation.md",
            "type": "task",
        },
    ],
)
4. Include Quantifiable Metrics¶
# Use specific, measurable metrics
agent.create_prd(
    title="Performance Improvement",
    problem="API slow",
    solution="Add caching",
    success_metrics=[
        "95th percentile latency < 100ms",  # Specific
        "Throughput > 1000 req/sec",        # Measurable
        "Cache hit rate > 80%",             # Quantifiable
        "P99 latency < 200ms",              # Time-bound
    ],
)
5. Handle Errors Gracefully¶
from tools.docs.agents import (
    GeneratorNotFoundError,
    InvalidMetadataError,
    FileLoadError,
)

try:
    adr_path = agent.create_adr(
        title="Test ADR",
        context="...",
        decision="...",
        consequences="...",
    )
except InvalidMetadataError as e:
    print(f"Invalid metadata: {e}")
    print(f"Missing fields: {e.missing_fields}")
    # Fix and retry
except GeneratorNotFoundError as e:
    print(f"Generator not registered: {e.generator_name}")
    print(f"Available: {e.available_generators}")
    # Register generator and retry
except Exception as e:
    print(f"Unexpected error: {e}")
    # Log and investigate
Prompt Templates¶
Template: Create ADR from Analysis¶
Based on the following technical analysis, create an Architecture Decision Record:
ANALYSIS:
{ANALYSIS_TEXT}
Create ADR with:
- Title: {EXTRACTED_TITLE}
- Context: Summarize the problem and current state
- Decision: State the technical decision clearly
- Consequences: List positive and negative impacts
- Alternatives: List 2-3 alternatives with pros/cons
- Status: "Proposed"
- Related Documents: Link to related PRDs, ADRs, tasks
Format according to ADR template in tools/docs/templates/adr.md.j2
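Since the placeholders are valid Python identifiers, one simple way to fill these templates programmatically is str.format. This is a minimal sketch; the template string here is abbreviated and the filled-in values are illustrative.

# Sketch: fill the ADR prompt template with str.format
ADR_PROMPT = (
    "Based on the following technical analysis, create an "
    "Architecture Decision Record:\n\n"
    "ANALYSIS:\n{ANALYSIS_TEXT}\n\n"
    "Create ADR with:\n"
    "- Title: {EXTRACTED_TITLE}\n"
)
prompt = ADR_PROMPT.format(
    ANALYSIS_TEXT="Redis-based sessions are lost on restart...",
    EXTRACTED_TITLE="Use PostgreSQL for session storage",
)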
Template: Create PRD from User Request¶
Based on the following user feature request, create a Product Requirements Document:
USER REQUEST:
{USER_REQUEST_TEXT}
Create PRD with:
- Title: Clear feature name
- Problem Statement: User pain point and business impact
- Proposed Solution: High-level technical approach
- User Workflow: Step-by-step user experience
- Success Metrics: 3-5 quantifiable metrics
- Acceptance Criteria: 5-10 testable criteria
- Status: "Draft"
- Priority: P0/P1/P2 based on user urgency
- Stakeholders: List relevant teams
Focus on WHAT and WHY, not HOW.
Template: Create Task from PRD¶
Based on the following PRD, create implementation tasks:
PRD:
{PRD_CONTENT}
Break down into tasks with:
- Title: Verb-noun description
- Description: Specific implementation detail
- Acceptance Criteria: 3-5 testable outcomes
- Related Documents: Link to parent PRD
- Assignee: Specific team
- Priority: Inherit from PRD
- Status: "backlog"
Create one task per major implementation component.
Template: Document Error from Logs¶
Based on the following error logs, create error documentation:
ERROR LOGS:
{LOG_CONTENT}
Extract:
- Error Type: {CLASSIFY_ERROR}
- Error Message: {EXTRACT_MESSAGE}
- Symptoms: What user/observer sees
- Reproduction Steps: How to trigger error
- Initial Diagnosis: Root cause hypothesis
- Status: "active"
- Component: {AFFECTED_COMPONENT}
Include full stack trace in document.
Integration Examples¶
Example: Claude Code Integration¶
# In Claude Code workflow
def create_documentation_for_changes(agent: DocumentationAgent, changes: list):
"""Create documentation for code changes."""
for change in changes:
if change["type"] == "architectural":
# Create ADR
agent.create_adr(
title=change["title"],
context=change["rationale"],
decision=change["decision"],
consequences=change["impact"]
)
elif change["type"] == "feature":
# Create PRD
agent.create_prd(
title=change["title"],
problem=change["problem"],
solution=change["solution"],
success_metrics=change["metrics"]
)
elif change["type"] == "bug_fix":
# Create or update error doc
agent.create_error_doc(
error_type=change["error_type"],
error_message=change["error_message"],
symptoms=change["symptoms"],
status="resolved"
)
Example: CI/CD Integration¶
# In CI/CD pipeline
def document_test_results(agent: DocumentationAgent, test_results: dict):
    """Generate test documentation in CI."""
    # Generate test report
    report_path = agent.document_test_run(
        test_results_path="test-results.json",
        coverage_path="coverage.json",
    )

    # If tests failed, create tasks
    if test_results["failed"] > 0:
        for failure in test_results["failures"]:
            agent.create_task(
                title=f"Fix test: {failure['name']}",
                description=failure["error"],
                status="backlog",
                assignee="test-engineer",
                priority="P1",
            )
Troubleshooting AI Agent Issues¶
Problem: Generator Not Registered¶
Error: GeneratorNotFoundError: adr
Solution:
# Register generator before use
from tools.docs.generators import ADRGenerator
agent.register_generator("adr", ADRGenerator(docs_root))
Problem: Invalid Metadata¶
Error: InvalidMetadataError: Missing required fields: ['title', 'context']
Solution:
# Provide all required fields
agent.create_adr(
    title="Complete Title",                # Required
    context="Full context",                # Required
    decision="Clear decision",             # Required
    consequences="Specific consequences",  # Required
)
Problem: Context Returns Empty¶
Issue: agent.get_context("component") returns empty lists
Solution:
# The cross-reference index must be built before get_context() returns results.
# Build it via the docs CLI (the exact command name depends on your CLI setup):
#   python -m tools.docs.cli build-index

# Then get context
context = agent.get_context("component")