# Agent Team Upgrade Implementation Plan
For agentic workers: REQUIRED SUB-SKILL: Use superpowers:subagent-driven-development (recommended) or superpowers:executing-plans to implement this plan task-by-task. Steps use checkbox (`- [ ]`) syntax for tracking.
**Goal:** Upgrade all 16 agents with domain-specific tools (browser, MCP, CLI), Context7 refs, new rules, hooks, and permissions.
**Architecture:** No application code changes. All work is config files (.mcp.json, settings.local.json, pyproject.toml), rule files (.claude/rules/*.md), and agent prompt files (.claude/agents/*.md). Phases are parallelizable — infrastructure first, then agents.
**Tech Stack:** Claude Code config (YAML frontmatter, JSON settings), uv (Python deps), brew (binaries), MCP servers (uvx/bunx)
**Spec:** docs/superpowers/specs/2026-03-21-agent-team-upgrade-design.md
## Phase 1: Infrastructure (Tasks 1-5)
These tasks have no dependencies on each other and can run in parallel.
### Task 1: Create .mcp.json with 4 MCP servers
Files:
- Create: `.mcp.json`
- Step 1: Create `.mcp.json` in project root

```json
{
  "mcpServers": {
    "postgres": {
      "command": "uvx",
      "args": ["postgres-mcp", "--access-mode=unrestricted"],
      "env": {
        "DATABASE_URI": "postgresql://postgres:postgres@localhost:5332/cofee"
      }
    },
    "redis": {
      "command": "uvx",
      "args": ["--from", "redis-mcp-server@latest", "redis-mcp-server", "--url", "redis://localhost:6379/0"]
    },
    "lighthouse": {
      "command": "bunx",
      "args": ["@danielsogl/lighthouse-mcp@latest"]
    },
    "docker": {
      "command": "uvx",
      "args": ["mcp-server-docker"]
    }
  }
}
```
- Step 2: Verify Postgres MCP connects
Run: DATABASE_URI="postgresql://postgres:postgres@localhost:5332/cofee" uvx postgres-mcp --access-mode=unrestricted
Expected: Server starts, connects to PostgreSQL. Ctrl+C to stop.
- Step 3: Verify Redis MCP connects
Run: uvx --from redis-mcp-server@latest redis-mcp-server --url redis://localhost:6379/0
Expected: Server starts, connects to Redis. Ctrl+C to stop.
- Step 4: Verify Lighthouse MCP starts
Run: bunx @danielsogl/lighthouse-mcp@latest
Expected: Server starts. Ctrl+C to stop.
- Step 5: Verify Docker MCP starts
Run: uvx mcp-server-docker
Expected: Server starts, connects to Docker socket. Ctrl+C to stop.
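Before launching each server, a quick preflight can confirm the launcher binaries are on PATH. This is a minimal sketch (not part of the plan's required steps); the `missing_launchers` helper is hypothetical, and the config dict mirrors the `.mcp.json` above:

```python
import shutil
from typing import Callable, Optional


def missing_launchers(
    config: dict, which: Callable[[str], Optional[str]] = shutil.which
) -> list[str]:
    """Return launcher binaries referenced by mcpServers that are not on PATH."""
    commands = {server["command"] for server in config.get("mcpServers", {}).values()}
    return sorted(cmd for cmd in commands if which(cmd) is None)


# Mirrors the four servers defined in .mcp.json (only "command" matters here)
mcp_config = {
    "mcpServers": {
        "postgres": {"command": "uvx"},
        "redis": {"command": "uvx"},
        "lighthouse": {"command": "bunx"},
        "docker": {"command": "uvx"},
    }
}

print("missing launchers:", missing_launchers(mcp_config) or "none")
```

An empty list means `uvx` and `bunx` both resolve; otherwise install the missing launcher before running the per-server verification steps.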
### Task 2: Add Python tools dependency group
Files:
- Modify: `cofee_backend/pyproject.toml:30-37`
- Step 1: Add tools group to pyproject.toml
After the existing dev group (line 37), add:
```toml
tools = [
    "semgrep",
    "bandit",
    "pip-audit",
    "schemathesis",
    "radon",
]
```
The `[dependency-groups]` section should now look like:

```toml
[dependency-groups]
dev = [
    "mypy>=1.19.1",
    "ruff>=0.6.0",
    "pytest>=8.0.0",
    "pytest-asyncio>=0.23.0",
    "aiosqlite>=0.20.0",
]
tools = [
    "semgrep",
    "bandit",
    "pip-audit",
    "schemathesis",
    "radon",
]
```
- Step 2: Install the tools group
Run: cd cofee_backend && uv sync --group tools
Expected: All 5 packages install successfully.
- Step 3: Verify tools run
Run: cd cofee_backend && uv run --group tools bandit --version
Expected: Prints bandit version.
Run: cd cofee_backend && uv run --group tools radon --version
Expected: Prints radon version.
### Task 3: Install brew binaries
- Step 1: Install gitleaks, k6, hyperfine
Run: brew install gitleaks k6 hyperfine
Expected: All three install successfully.
- Step 2: Verify installations
Run: gitleaks version && k6 version && hyperfine --version
Expected: All three print version numbers.
### Task 4: Create 3 new rules files
Files:
- Create: `.claude/rules/testing.md`
- Create: `.claude/rules/security.md`
- Create: `.claude/rules/remotion-service.md`
- Step 1: Create `.claude/rules/testing.md`
```markdown
# Testing Conventions

## Backend Tests
- Real DB + real Redis. No mocks. conftest.py has shared fixtures.
- Location: cofee_backend/tests/integration/<module>.py
- Naming: test_<action>_<scenario> (e.g., test_create_project_without_name)
- Run: cd cofee_backend && uv run pytest
- Single test: uv run pytest -k "test_name"
- API fuzzing: cd cofee_backend && uv run --group tools schemathesis run http://localhost:8000/api/schema/ --checks all

## Frontend E2E Tests
- Playwright with data-testid selectors on every interactive element
- Location: cofee_frontend/tests/
- Run: cd cofee_frontend && bun run test:e2e
- Every component root element must have data-testid

## General
- Never mock the database — use real test DB
- Tests must be deterministic — no Date.now(), no Math.random()
- Test error paths, not just happy paths
```
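To make the determinism rule concrete on the backend side, one common pattern is to inject the clock instead of reading it inside the function under test. This is an illustrative sketch only — `build_render_label` is a hypothetical function, not code from the repo:

```python
from datetime import datetime, timezone
from typing import Callable


# Hypothetical service helper: the clock is a parameter, not an internal
# datetime.now() call, so tests can pin it to a fixed instant.
def build_render_label(project: str, now: Callable[[], datetime]) -> str:
    return f"{project}-{now().strftime('%Y%m%d%H%M%S')}"


def test_build_render_label_is_deterministic() -> None:
    # The injected clock always returns the same instant, so the test
    # produces the same label on every run.
    fixed = lambda: datetime(2026, 3, 21, 12, 0, 0, tzinfo=timezone.utc)
    assert build_render_label("cofee", fixed) == "cofee-20260321120000"


test_build_render_label_is_deterministic()
```

The same idea applies on the frontend: pass timestamps and random seeds in as props or arguments rather than calling `Date.now()` or `Math.random()` inside the component.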
- Step 2: Create `.claude/rules/security.md`
```markdown
# Security Conventions

## Authentication
- JWT tokens via get_current_user dependency injection
- Passwords: bcrypt hash, never plain text
- Token refresh: handled by users module

## File Uploads
- Validated by extension + MIME type in files module
- Upload via uploadFile() from @shared/api/uploadFile — never raw FormData
- Endpoint: /api/files/upload/

## Secrets Management
- All config via get_settings() (cached @lru_cache) — never hardcode
- S3/MinIO credentials: env vars only, never in code or commits
- JWT secret: env var, never in code

## Data Protection
- Soft deletes: is_deleted flag — ensure deleted records never leak through API responses
- CORS: configured in main.py — restrict to frontend origin in production
- SQL injection: prevented by SQLAlchemy parameterized queries — never use raw SQL strings
- XSS: React auto-escapes — never use dangerouslySetInnerHTML

## Scanning Tools (for Security Auditor agent)
- Python SAST: semgrep + bandit (via uv run --group tools)
- Dependency CVEs: pip-audit (via uv run --group tools)
- Secret detection: gitleaks (via brew)
```
- Step 3: Create `.claude/rules/remotion-service.md`
```markdown
---
paths:
  - "remotion_service/**"
---

# Remotion Service Rules

## Animations
- ONLY use Remotion interpolate()/spring() for all animations
- NEVER use CSS transitions, CSS animations, or Framer Motion
- All timing must be frame-based, not time-based

## Compositions
- Deterministic frame rendering: no Date.now(), no Math.random(), no network calls during render
- All data must be passed via inputProps from the server
- useCurrentFrame() and useVideoConfig() for all timing calculations

## Server
- ElysiaJS, single POST /api/render endpoint
- Flow: receive S3 path + transcription → Remotion CLI render → upload to S3 → return path
- Health check: GET /health

## Captions
- All caption presets live in src/components/captions/
- Caption data format: Word[] with start/end timestamps from transcription module

## Video Inspection
- Use ffprobe (installed) to validate input video codec/resolution/fps before render
- Use ffprobe to verify output after render
- Use ffmpeg to extract single frames for visual caption verification
- Use mediainfo for detailed container metadata
```
### Task 5: Update hooks and permissions in settings.local.json
Files:
- Modify: `.claude/settings.local.json`
- Step 1: Add new Bash permissions
Add these entries to the permissions.allow array:
```json
"Bash(uv run --group tools:*)",
"Bash(gitleaks:*)",
"Bash(k6:*)",
"Bash(hyperfine:*)",
"Bash(ffprobe:*)",
"Bash(ffmpeg:*)",
"Bash(mediainfo:*)",
"Bash(aws s3:*)",
"Bash(bunx pa11y:*)",
"Bash(bunx knip:*)",
"Bash(bunx squawk:*)",
"Bash(curl:*)",
"Bash(uv run ruff format:*)",
"Bash(uv run alembic:*)"
```
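Rather than hand-editing the array, the entries can be merged programmatically with a JSON round-trip. A sketch, assuming `settings.local.json` already contains a `permissions.allow` array — the `merge_allow` helper is illustrative, not a Claude Code utility, and the list below is truncated to three of the entries above:

```python
import json

NEW_PERMISSIONS = [
    "Bash(uv run --group tools:*)",
    "Bash(gitleaks:*)",
    "Bash(k6:*)",
]  # extend with the remaining entries from the list above


def merge_allow(settings: dict, entries: list[str]) -> dict:
    """Append entries to permissions.allow, preserving order, skipping duplicates."""
    allow = settings.setdefault("permissions", {}).setdefault("allow", [])
    for entry in entries:
        if entry not in allow:
            allow.append(entry)
    return settings


# In-memory example; in practice load/dump .claude/settings.local.json instead
settings = {"permissions": {"allow": ["Bash(k6:*)"]}}
merged = merge_allow(settings, NEW_PERMISSIONS)
print(json.dumps(merged["permissions"]["allow"], indent=2))
```

Deduplication matters here: the file may already contain some of these entries, and a duplicate permission line is harmless but noisy.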
- Step 2: Upgrade backend ruff hook
Replace the third PostToolUse hook (the ruff check one) — change:
"command": "filepath=$(cat | jq -r '.tool_input.file_path // empty') && case \"$filepath\" in */cofee_backend/cpv3/*.py) cd cofee_backend && uv run ruff check \"$filepath\" 2>&1 | head -20 ;; esac; exit 0"
To:
"command": "filepath=$(cat | jq -r '.tool_input.file_path // empty') && case \"$filepath\" in */cofee_backend/cpv3/*.py) cd cofee_backend && uv run ruff check --fix \"$filepath\" 2>&1 | head -20 && uv run ruff format \"$filepath\" 2>&1 | head -5 ;; esac; exit 0"
- Step 3: Add PreCompact hook
Add a new top-level key "PreCompact" to the hooks object:
```json
"PreCompact": [
  {
    "matcher": "",
    "hooks": [
      {
        "type": "command",
        "command": "echo 'PRESERVE ACROSS COMPACTION: 1) All modified files and their purposes 2) Test results (pass/fail with commands) 3) Architecture decisions made this session 4) Error messages and resolutions 5) Current subproject (frontend/backend/remotion) 6) Pending agent handoff requests 7) Current task/phase in any active plan'"
      }
    ]
  }
]
```
- Step 4: Add Notification hooks (macOS + Telegram)
Add a new top-level key "Notification" to the hooks object:
```json
"Notification": [
  {
    "matcher": "",
    "hooks": [
      {
        "type": "command",
        "command": "osascript -e 'display notification \"Claude Code needs your attention\" with title \"Cofee Project\"' 2>/dev/null; exit 0"
      },
      {
        "type": "command",
        "command": "CHAT_ID=$(cat ~/.claude/channels/telegram/access.json 2>/dev/null | python3 -c \"import sys,json; a=json.load(sys.stdin); print(a['allowFrom'][0] if a.get('allowFrom') else '')\" 2>/dev/null) && TOKEN=$(grep TELEGRAM_BOT_TOKEN ~/.claude/channels/telegram/.env 2>/dev/null | cut -d= -f2-) && [ -n \"$CHAT_ID\" ] && [ -n \"$TOKEN\" ] && curl -s -X POST \"https://api.telegram.org/bot$TOKEN/sendMessage\" -d \"chat_id=$CHAT_ID\" -d \"text=Claude Code needs your attention (Cofee Project)\" > /dev/null 2>&1; exit 0"
      }
    ]
  }
]
```
- Step 5: Verify the final settings.local.json is valid JSON
Run: python3 -c "import json; json.load(open('.claude/settings.local.json')); print('Valid JSON')"
Expected: Valid JSON
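The syntax check above can be extended into a structural check for the keys added in Steps 1-4. A sketch — `check_settings` is a hypothetical helper, and it assumes the hooks object lives under a top-level `"hooks"` key as in the existing file:

```python
import json


def check_settings(settings: dict) -> list[str]:
    """Return a list of problems with the upgraded settings structure."""
    problems = []
    hooks = settings.get("hooks", {})
    # The two hook keys added in Steps 3 and 4
    for key in ("PreCompact", "Notification"):
        if key not in hooks:
            problems.append(f"missing hooks.{key}")
    # Spot-check one of the permissions added in Step 1
    allow = settings.get("permissions", {}).get("allow", [])
    if "Bash(gitleaks:*)" not in allow:
        problems.append("missing gitleaks permission")
    return problems


# In practice: check_settings(json.load(open(".claude/settings.local.json")))
raw = '{"hooks": {"PreCompact": []}, "permissions": {"allow": []}}'
print(check_settings(json.loads(raw)))
```

An empty list means the structure looks complete; any entries name the step to revisit.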
## Phase 2: Agent Updates (Tasks 6-13)
These tasks can run in parallel. Each updates a group of related agents. Read the spec Section 7 for full details — the spec contains the exact instruction blocks to add.
Important: For every agent, the existing tools: line in frontmatter must be extended (not replaced). The base tools (Read, Grep, Glob, Bash, WebSearch, WebFetch, mcp__context7__resolve-library-id, mcp__context7__query-docs) stay. New tools are appended.
### Task 6: Update Chrome agents — UI/UX Designer + Product Strategist
Files:
- Modify: `.claude/agents/ui-ux-designer.md`
- Modify: `.claude/agents/product-strategist.md`
The Chrome tools string to append to `tools:` for both agents:

```
, mcp__claude-in-chrome__tabs_context_mcp, mcp__claude-in-chrome__tabs_create_mcp, mcp__claude-in-chrome__navigate, mcp__claude-in-chrome__computer, mcp__claude-in-chrome__read_page, mcp__claude-in-chrome__find, mcp__claude-in-chrome__form_input, mcp__claude-in-chrome__get_page_text, mcp__claude-in-chrome__javascript_tool, mcp__claude-in-chrome__read_console_messages, mcp__claude-in-chrome__read_network_requests, mcp__claude-in-chrome__resize_window, mcp__claude-in-chrome__gif_creator, mcp__claude-in-chrome__upload_image, mcp__claude-in-chrome__shortcuts_execute, mcp__claude-in-chrome__shortcuts_list, mcp__claude-in-chrome__switch_browser, mcp__claude-in-chrome__update_plan
```
- Step 1: Update ui-ux-designer.md frontmatter
Append the Chrome tools string above to the existing tools: line.
- Step 2: Add Chrome Session Protocol to ui-ux-designer.md
After the Identity section, add the Chrome Session Protocol block from spec Section 1. Then add:
## Browser Focus
Your primary Chrome tools:
- `gif_creator` — record interaction demos when proposing animations or multi-step flows
- `resize_window` — verify designs at mobile (375x812), tablet (768x1024), desktop (1440x900)
- `computer` with `screenshot` — capture visual state for comparison
When proposing a design, if the dev server is running, navigate to localhost:3000 to see the current UI state before recommending changes.
- Step 3: Add Context7 block to ui-ux-designer.md
## Context7 Documentation Lookup
When you need current API docs, use these pre-resolved library IDs — call query-docs directly (no resolve-library-id needed):
| Library | ID | When to query |
|---------|----|---------------|
| Radix Primitives | `/websites/radix-ui_primitives` | Available components, API constraints, slot structure |
If query-docs returns no results, fall back to resolve-library-id to get the current ID.
- Step 4: Update product-strategist.md frontmatter
Append the same Chrome tools string to the existing tools: line.
- Step 5: Add Chrome Session Protocol to product-strategist.md
After the Identity section, add the Chrome Session Protocol block, then:
## Browser Focus
Your primary Chrome tools:
- `read_page` + `find` — understand page structure and discover interactive elements
- `computer` with `screenshot` — capture conversion-critical pages
- `form_input` — fill sign-up/onboarding forms to test conversion funnel end-to-end
When evaluating the product, navigate localhost:3000 as a first-time user would. Document: what do they see first? What's the path to value? Where is friction?
When comparing competitors, navigate to competitor sites and screenshot relevant flows.
- Step 6: Add Context7 instruction to product-strategist.md
## Context7 Documentation Lookup
Use context7 generically — query any library relevant to what you're researching.
Example: mcp__context7__query-docs with libraryId="/vercel/next.js" and topic="pricing page patterns"
### Task 7: Update Chrome agents — Design Auditor + Frontend Architect
Files:
- Modify: `.claude/agents/design-auditor.md`
- Modify: `.claude/agents/frontend-architect.md`
- Step 1: Update design-auditor.md frontmatter
Append Chrome tools string (same as Task 6) to tools: line. This agent also gets Lighthouse MCP tools — but since exact Lighthouse tool names are discovered at runtime, for now just add Chrome tools. Add a comment after the frontmatter closing ---:
<!-- TODO: Add Lighthouse MCP tool names after server discovery -->
- Step 2: Add Chrome + Lighthouse + CLI blocks to design-auditor.md
After the Identity section, add the Chrome Session Protocol block, then:
## Browser Focus
Your primary Chrome tools:
- `javascript_tool` — extract computed styles: `getComputedStyle(document.querySelector('[data-testid="..."]'))` and cross-reference against `_variables.scss` tokens
- `get_page_text` + `read_page` — read content and a11y tree for semantic structure
- `resize_window` — screenshot components at mobile/tablet/desktop breakpoints
Cross-reference Lighthouse accessibility issues with visual Chrome inspection — Lighthouse catches ARIA violations, Chrome shows visual presentation.
## CLI Tools
### Accessibility audit
bunx pa11y http://localhost:3000 --standard WCAG2AA --reporter json
### Dead FSD export detection
cd cofee_frontend && bunx knip --include files,exports,dependencies
## Context7 Documentation Lookup
When you need current API docs, use these pre-resolved library IDs — call query-docs directly:
| Library | ID | When to query |
|---------|----|---------------|
| Radix Primitives | `/websites/radix-ui_primitives` | Correct props, slot structure, accessibility patterns |
If query-docs returns no results, fall back to resolve-library-id.
- Step 3: Update frontend-architect.md frontmatter
Append Chrome tools string to tools: line.
- Step 4: Add Chrome + CLI + Context7 blocks to frontend-architect.md
After the Identity section, add the Chrome Session Protocol block, then:
## Browser Focus
Your primary Chrome tools:
- `read_page` — inspect a11y tree to verify component structure
- `computer` with `screenshot` — spot-check rendering after architectural changes
- `resize_window` — verify layout at different viewports
After recommending architectural changes, spot-check the result in Chrome to verify components render correctly and hydration succeeds.
## CLI Tools
### Dead export detection
cd cofee_frontend && bunx knip --include files,exports,dependencies
## Context7 Documentation Lookup
When you need current API docs, use these pre-resolved library IDs — call query-docs directly:
| Library | ID | When to query |
|---------|----|---------------|
| Next.js | `/vercel/next.js` | App Router, Server Components, caching, ISR |
| TanStack Query | `/tanstack/query` | v5 hooks, queries, mutations, testing |
| Radix Primitives | `/websites/radix-ui_primitives` | Component APIs, slot structure |
If query-docs returns no results, fall back to resolve-library-id.
### Task 8: Update Chrome agents — Debug Specialist + Performance Engineer
Files:
- Modify: `.claude/agents/debug-specialist.md`
- Modify: `.claude/agents/performance-engineer.md`
- Step 1: Update debug-specialist.md frontmatter
Append Chrome tools string to tools: line. This agent also gets Redis MCP — add comment:
<!-- TODO: Add Redis MCP tool names after server discovery -->
- Step 2: Add Chrome + Redis blocks to debug-specialist.md
After the Identity section, add the Chrome Session Protocol block, then:
## Browser Focus
Your primary Chrome tools:
- `read_console_messages` — filter by pattern "error|warn|Error" to find JS errors
- `read_network_requests` — filter by urlPattern "/api/" to find failed API calls (4xx/5xx)
- `javascript_tool` — execute diagnostic JS in page context
For UI bugs, reproduce in Chrome before investigating code. Navigate to the affected page, interact with it, check console and network.
## Redis MCP (Dramatiq / WebSocket debugging)
When Redis MCP tools are available:
- For notification delivery bugs, inspect Redis pub/sub channels directly to determine if the backend published the event
- For stuck Dramatiq jobs, inspect Redis keys to see queue depth and job state
- Step 3: Update performance-engineer.md frontmatter
Append Chrome tools string to tools: line. Also gets Lighthouse and Postgres MCP — add comment:
<!-- TODO: Add Lighthouse MCP + Postgres MCP tool names after server discovery -->
- Step 4: Add Chrome + Lighthouse + CLI + Context7 blocks to performance-engineer.md
After the Identity section, add the Chrome Session Protocol block, then:
## Browser Focus
Your primary Chrome tools:
- `javascript_tool` — execute `performance.getEntries()` to extract LCP/FID/CLS, measure TTFB
- `read_network_requests` — monitor network waterfall for slow `/api/` calls
- `resize_window` — test performance at different viewports
For frontend performance, run Lighthouse audit first (pass `url: 'http://localhost:3000'` as tool parameter), then use Chrome JS execution for targeted measurements.
## Postgres MCP (query performance)
When Postgres MCP tools are available:
- Query pg_stat_statements for the slowest queries across the 11 modules
- Check index health: unused indexes, missing indexes on foreign keys
## CLI Tools
### Load testing
k6 run --vus 50 --duration 30s <script>.js
### Benchmarking
hyperfine 'cd cofee_frontend && bun run build' --warmup 1
hyperfine 'cd cofee_backend && uv run pytest tests/' --min-runs 3
## Context7 Documentation Lookup
When you need current API docs, use these pre-resolved library IDs — call query-docs directly:
| Library | ID | When to query |
|---------|----|---------------|
| Next.js | `/vercel/next.js` | Caching, ISR, static generation |
| FastAPI | `/websites/fastapi_tiangolo` | Middleware, async patterns |
| Redis | `/redis/redis-py` | Connection pooling, pipelines |
If query-docs returns no results, fall back to resolve-library-id.
### Task 9: Update Playwright agents — Frontend QA + Backend QA
Files:
- Modify: `.claude/agents/frontend-qa.md`
- Modify: `.claude/agents/backend-qa.md`
The Playwright tools string to append to `tools:`:

```
, mcp__playwright__browser_click, mcp__playwright__browser_close, mcp__playwright__browser_console_messages, mcp__playwright__browser_drag, mcp__playwright__browser_evaluate, mcp__playwright__browser_file_upload, mcp__playwright__browser_fill_form, mcp__playwright__browser_handle_dialog, mcp__playwright__browser_hover, mcp__playwright__browser_install, mcp__playwright__browser_navigate, mcp__playwright__browser_navigate_back, mcp__playwright__browser_network_requests, mcp__playwright__browser_press_key, mcp__playwright__browser_resize, mcp__playwright__browser_run_code, mcp__playwright__browser_select_option, mcp__playwright__browser_snapshot, mcp__playwright__browser_tabs, mcp__playwright__browser_take_screenshot, mcp__playwright__browser_type, mcp__playwright__browser_wait_for
```
- Step 1: Update frontend-qa.md frontmatter
Append the Playwright tools string above to the existing tools: line.
- Step 2: Add Playwright + Context7 blocks to frontend-qa.md
After the Identity section, add the Playwright Protocol block from spec Section 1, then:
## Browser Focus
Use `browser_snapshot` to inspect the accessibility tree of components under test. Verify every interactive element has `data-testid`. Use the snapshot refs to design reliable test selectors.
Reproduce edge cases before recommending tests: navigate to the page, trigger empty states, error states, and loading states via Playwright to confirm the behavior you're testing for.
Use `browser_file_upload` to test file upload flows, `browser_drag` for drag-and-drop, `browser_handle_dialog` for confirmation dialogs.
## Context7 Documentation Lookup
When you need current API docs, use these pre-resolved library IDs — call query-docs directly:
| Library | ID | When to query |
|---------|----|---------------|
| Playwright | `/websites/playwright_dev` | Locators, expect, fixtures |
| Playwright (repo) | `/microsoft/playwright` | Test config, reporters |
| TanStack Query | `/tanstack/query` | Testing patterns for data fetching |
If query-docs returns no results, fall back to resolve-library-id.
- Step 3: Update backend-qa.md frontmatter
Append the same Playwright tools string to the existing tools: line.
- Step 4: Add Playwright + CLI + Context7 blocks to backend-qa.md
After the Identity section, add the Playwright Protocol block from spec Section 1, then:
## Browser Focus
For integration testing, use Playwright to verify that API responses render correctly in the frontend — navigate to the page, trigger the action, check network requests match expected contracts.
Use `browser_run_code` for complex multi-step verification sequences.
## CLI Tools
### API Fuzzing (schemathesis)
cd cofee_backend && uv run --group tools schemathesis run http://localhost:8000/api/schema/ --checks all --workers 4
This auto-generates edge-case payloads for all 11 module endpoints.
Requires the backend to be running (docker-compose up or uv run uvicorn).
### API Testing with curl
Authenticated request (replace <token> with a valid JWT):
curl -s -H "Authorization: Bearer <token>" -H "Content-Type: application/json" http://localhost:8000/api/projects/ | python3 -m json.tool
POST with JSON body:
curl -s -X POST -H "Authorization: Bearer <token>" -H "Content-Type: application/json" -d '{"name": "test"}' http://localhost:8000/api/projects/ | python3 -m json.tool
Measure response time:
curl -o /dev/null -s -w "HTTP %{http_code} in %{time_total}s\n" -H "Authorization: Bearer <token>" http://localhost:8000/api/projects/
Health check:
curl -s http://localhost:8000/api/system/health | python3 -m json.tool
Always include Authorization header for protected endpoints. Use -s (silent) and pipe through python3 -m json.tool for readable output.
## Context7 Documentation Lookup
When you need current API docs, use these pre-resolved library IDs — call query-docs directly:
| Library | ID | When to query |
|---------|----|---------------|
| FastAPI | `/websites/fastapi_tiangolo` | TestClient, dependency overrides |
| Pydantic | `/pydantic/pydantic` | Schema edge cases, validation |
| Dramatiq | `/bogdanp/dramatiq` | Test broker, StubBroker |
For curl patterns, use resolve-library-id with query "curl" if needed.
If query-docs returns no results, fall back to resolve-library-id.
### Task 10: Update MCP agents — DB Architect + Backend Architect
Files:
- Modify: `.claude/agents/db-architect.md`
- Modify: `.claude/agents/backend-architect.md`
- Step 1: Update db-architect.md frontmatter
Add comment after closing ---:
<!-- TODO: Add Postgres MCP tool names after server discovery -->
- Step 2: Add Postgres MCP + CLI + Context7 blocks to db-architect.md
After the Identity section, add:
## Postgres MCP (live database inspection)
When Postgres MCP tools are available:
- Use Postgres MCP to inspect the live schema rather than reading models.py — the live database is the source of truth, models.py may be out of sync during migration development
- Use pg_stat_statements to identify the slowest queries and recommend index improvements
- Check index health: unused indexes, missing indexes on foreign keys across 11 modules
- Run EXPLAIN ANALYZE to validate query plans
## CLI Tools
### Migration linting
Before approving any Alembic migration, lint the generated SQL:
cd cofee_backend && uv run alembic upgrade <prev>:head --sql 2>/dev/null | bunx squawk
Replace `<prev>` with the revision ID before the new migration (find it with `uv run alembic history`).
Do NOT lint all migrations from base — only lint the new one.
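Finding `<prev>` can be scripted instead of eyeballed. A sketch that assumes `alembic history` prints lines in the `down_revision -> revision (head), message` format — verify against your Alembic version's actual output before relying on it:

```python
import re


def head_and_parent(history_output: str) -> tuple[str, str]:
    """Parse `alembic history` output; return (parent_revision, head_revision)."""
    for line in history_output.splitlines():
        # Assumed line shape: "a1b2c3 -> d4e5f6 (head), add captions table"
        match = re.match(r"^(\w+) -> (\w+) \(head\)", line.strip())
        if match:
            return match.group(1), match.group(2)
    raise ValueError("no head revision found")


sample = "a1b2c3 -> d4e5f6 (head), add captions table\n9f8e7d -> a1b2c3, add users"
print(head_and_parent(sample))  # parent goes into the upgrade range as <prev>
```

The parent revision is what goes into `alembic upgrade <prev>:head --sql`, so only the new migration's SQL reaches squawk.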
## Context7 Documentation Lookup
When you need current API docs, use these pre-resolved library IDs — call query-docs directly:
| Library | ID | When to query |
|---------|----|---------------|
| SQLAlchemy 2.1 | `/websites/sqlalchemy_en_21` | Alembic, DDL, type system |
| SQLAlchemy ORM | `/websites/sqlalchemy_en_20_orm` | Relationship loading, hybrid properties |
If query-docs returns no results, fall back to resolve-library-id.
- Step 3: Update backend-architect.md frontmatter
Add comment after closing ---:
<!-- TODO: Add Redis MCP + Postgres MCP tool names after server discovery -->
- Step 4: Add Redis MCP + CLI + Context7 blocks to backend-architect.md
After the Identity section, add:
## Redis MCP (Dramatiq queue inspection)
When Redis MCP tools are available:
- Inspect Dramatiq queue state when designing or reviewing task processing patterns
- Check pending/failed jobs, queue depths
- Monitor pub/sub channels for WebSocket notification debugging
## CLI Tools
### Code complexity analysis
cd cofee_backend && uv run --group tools radon cc cpv3/modules/*/service.py -a -nc
Grade C or worse = too complex, recommend extraction.
### API testing with curl
Verify endpoints you've designed or modified:
curl -s -H "Authorization: Bearer <token>" -H "Content-Type: application/json" http://localhost:8000/api/<endpoint>/ | python3 -m json.tool
curl -s -X POST -H "Authorization: Bearer <token>" -H "Content-Type: application/json" -d '{"key": "value"}' http://localhost:8000/api/<endpoint>/ | python3 -m json.tool
curl -o /dev/null -s -w "HTTP %{http_code} in %{time_total}s\n" -H "Authorization: Bearer <token>" http://localhost:8000/api/<endpoint>/
Always test your endpoint changes before finalizing recommendations.
### MinIO / S3 browsing
aws s3 ls --endpoint-url http://localhost:9000 s3://cofee-media/ --recursive
aws s3 ls --endpoint-url http://localhost:9000 s3://cofee-renders/
Requires AWS CLI configured with MinIO credentials (see .env).
## Context7 Documentation Lookup
When you need current API docs, use these pre-resolved library IDs — call query-docs directly:
| Library | ID | When to query |
|---------|----|---------------|
| FastAPI | `/websites/fastapi_tiangolo` | Dependency injection, middleware |
| SQLAlchemy 2.1 | `/websites/sqlalchemy_en_21` | Async sessions, relationships |
| Pydantic | `/pydantic/pydantic` | v2 validators, model_config |
| Dramatiq | `/bogdanp/dramatiq` | Actors, middleware, retry |
If query-docs returns no results, fall back to resolve-library-id.
### Task 11: Update Security Auditor + DevOps Engineer
Files:
- Modify: `.claude/agents/security-auditor.md`
- Modify: `.claude/agents/devops-engineer.md`
- Step 1: Add CLI + Context7 blocks to security-auditor.md
No frontmatter changes (no new MCP tools). After the Identity section, add:
## Security Scanning Tools
Run these from the project root via Bash:
### Python SAST (backend)
cd cofee_backend && uv run --group tools semgrep scan --config p/python --config p/jwt cpv3/
cd cofee_backend && uv run --group tools bandit -r cpv3/ -ll # medium+ severity only
### Python dependency vulnerabilities
cd cofee_backend && uv run --group tools pip-audit
### Frontend SAST
Note: semgrep is installed in the backend's uv tools group but scans any language.
cd cofee_backend && uv run --group tools semgrep scan --config p/typescript --include "*.ts" --include "*.tsx" ../cofee_frontend/src/
### Secret detection (git history)
gitleaks detect --source . --report-format json --no-banner
All tools are installed project-locally (Python via uv tools group) or via brew (gitleaks).
Do NOT install new tools — use only what is listed above.
Start every security review by running these scanning tools. Report findings with severity, file:line, and remediation recommendation.
## Context7 Documentation Lookup
When you need current API docs, use these pre-resolved library IDs — call query-docs directly:
| Library | ID | When to query |
|---------|----|---------------|
| FastAPI | `/websites/fastapi_tiangolo` | OAuth2, JWT, Security dependencies |
| Pydantic | `/pydantic/pydantic` | Strict mode, input validation |
If query-docs returns no results, fall back to resolve-library-id.
- Step 2: Update devops-engineer.md frontmatter
Add comment after closing ---:
<!-- TODO: Add Docker MCP tool names after server discovery -->
- Step 3: Add Docker MCP + CLI + Context7 blocks to devops-engineer.md
After the Identity section, add:
## Docker MCP (container management)
When Docker MCP tools are available:
- Inspect container health across compose stack (postgres, redis, minio, api, worker, remotion)
- Tail logs per container to debug worker crashes, Remotion render failures
- Restart stuck services
- Manage compose stack start/stop
Use Docker MCP instead of crafting docker CLI commands.
## CLI Tools
### MinIO / S3 browsing
aws s3 ls --endpoint-url http://localhost:9000 s3://cofee-media/ --recursive
Requires AWS CLI configured with MinIO credentials (see .env).
## Context7 Documentation Lookup
When you need current API docs, use these pre-resolved library IDs — call query-docs directly:
| Library | ID | When to query |
|---------|----|---------------|
| Next.js | `/vercel/next.js` | Standalone output, Docker build |
| FastAPI | `/websites/fastapi_tiangolo` | Workers, deployment settings |
If query-docs returns no results, fall back to resolve-library-id.
### Task 12: Update Remotion Engineer + ML/AI Engineer + Technical Writer
Files:
- Modify: `.claude/agents/remotion-engineer.md`
- Modify: `.claude/agents/ml-ai-engineer.md`
- Modify: `.claude/agents/technical-writer.md`
- Step 1: Add CLI + Context7 blocks to remotion-engineer.md
No frontmatter changes. After the Identity section, add:
## Video Inspection Tools
Validate input video before Remotion render:
ffprobe -v quiet -print_format json -show_format -show_streams /path/to/input.mp4
Check output after render (verify caption overlay, resolution, codec):
ffprobe -v quiet -print_format json -show_entries stream=width,height,r_frame_rate,codec_name /path/to/output.mp4
Extract specific frame to verify caption positioning:
ffmpeg -i /path/to/output.mp4 -vf "select=eq(n\,100)" -frames:v 1 /tmp/frame_100.png
Get container metadata (duration, bitrate, audio channels):
mediainfo --Output=JSON /path/to/video.mp4
## Context7 Documentation Lookup
When you need current API docs, use these pre-resolved library IDs — call query-docs directly:
| Library | ID | When to query |
|---------|----|---------------|
| Remotion (docs) | `/websites/remotion_dev` | interpolate, spring, composition config |
| Remotion (repo) | `/remotion-dev/remotion` | Bundle, render CLI |
| Remotion Skills | `/remotion-dev/skills` | Best practices |
If query-docs returns no results, fall back to resolve-library-id.
- Step 2: Add Context7 block to ml-ai-engineer.md
No frontmatter changes. After the Identity section, add:
## Context7 Documentation Lookup
When you need current API docs, use these pre-resolved library IDs — call query-docs directly:
| Library | ID | When to query |
|---------|----|---------------|
| FastAPI | `/websites/fastapi_tiangolo` | BackgroundTasks, streaming |
| Dramatiq | `/bogdanp/dramatiq` | Actor retry, timeout, priority |
When modifying transcription actors, query Dramatiq docs for retry/timeout configuration and middleware patterns.
If query-docs returns no results, fall back to resolve-library-id.
- Step 3: Add Context7 instruction to technical-writer.md
No frontmatter changes. After the Identity section, add:
## Context7 Documentation Lookup
Use context7 generically — query any library relevant to what you're documenting.
When documenting APIs, query the FastAPI docs for the current endpoint decorator patterns to ensure documentation matches implementation.
Example: `mcp__context7__query-docs` with `libraryId="/websites/fastapi_tiangolo"` and `topic="response model decorator"`
Task 13: Update Orchestrator + Shared Team Protocol
Files:
- Modify: .claude/agents/orchestrator.md
- Modify: .claude/agents-shared/team-protocol.md
- Step 1: Update team roster in team-protocol.md
In .claude/agents-shared/team-protocol.md, replace the `| Agent | What they do | Request when |` table with an updated version that adds a "New Tools" column:
| Agent | What they do | New Tools | Request when |
|---|---|---|---|
| Orchestrator | Task decomposition, agent routing, context packaging | — | You don't — main session dispatches you |
| Frontend Architect | Next.js/React/FSD patterns, component architecture | Chrome browser, knip | Frontend architecture decisions, component design |
| Backend Architect | FastAPI/Python patterns, service design, API contracts | Redis MCP, Postgres MCP, radon, curl | Backend architecture, API design |
| DB Architect | PostgreSQL schema, query optimization, migrations | Postgres MCP, squawk | Schema design, query performance, migration strategy |
| UI/UX Designer | Visual design, interaction patterns, premium aesthetics | Chrome browser, GIF recording | New UI flows, design direction |
| Design Auditor | Visual consistency, component compliance, accessibility | Chrome browser, Lighthouse MCP, pa11y, knip | UI review, consistency checks, accessibility audits |
| Frontend QA | Playwright E2E, React testing, edge case discovery | Playwright MCP (all tools) | Frontend test planning, test case design |
| Backend QA | pytest, integration tests, API contracts, edge cases | Playwright MCP, schemathesis, curl | Backend test planning, API contract testing |
| Remotion Engineer | Compositions, animation, video processing, captions | ffprobe, mediainfo, ffmpeg | Remotion code, video processing, caption styling |
| Security Auditor | OWASP, auth, data protection, dependency auditing | semgrep, bandit, pip-audit, gitleaks | Security review, vulnerability assessment |
| Performance Engineer | Profiling, caching, bundle analysis, query performance | Chrome browser, Lighthouse MCP, Postgres MCP, k6, hyperfine | Performance issues, optimization |
| Debug Specialist | Root cause analysis, cross-service debugging | Chrome browser, Redis MCP | Bug investigation, root cause analysis |
| DevOps Engineer | CI/CD, Docker, K8s, infrastructure | Docker MCP | Infrastructure, deployment, CI/CD |
| Product Strategist | Monetization, conversion, feature prioritization | Chrome browser | Business decisions, pricing, feature priority |
| Technical Writer | Feature docs, API docs, architecture decision records | — | Documentation needs |
| ML/AI Engineer | Speech-to-text, transcription models, ML deployment | — | Transcription, ML model decisions |
- Step 2: Add Context7 instruction to orchestrator.md
After the Identity section, add:
## Context7 Documentation Lookup
Use context7 generically — query any library relevant to the task you're decomposing.
Example: `mcp__context7__query-docs` with `libraryId="/vercel/next.js"` and `topic="app router caching"`
- Step 3: Add upgraded capabilities dispatch guidance
Find the section where the orchestrator describes agent capabilities (team roster or dispatch logic). Add after it:
## Agent Capabilities (Post-Upgrade)
When dispatching agents, leverage their new capabilities:
### Visual inspection tasks
UI/UX Designer, Design Auditor, Debug Specialist, Frontend Architect, Performance Engineer, Product Strategist — all have Chrome browser access. Include "Use Chrome browser tools to..." in dispatch context when the task involves visual UI work.
### Database tasks
DB Architect, Performance Engineer, Backend Architect — have Postgres MCP for live schema inspection, slow query analysis, and EXPLAIN ANALYZE. Dispatch DB Architect for schema/migration work; Performance Engineer for query optimization.
### Dramatiq / Redis debugging
Debug Specialist, Backend Architect — have Redis MCP for queue inspection and pub/sub monitoring. Dispatch Debug Specialist for stuck jobs or missing WebSocket notifications.
### Security scanning
Security Auditor — has semgrep, bandit, pip-audit, gitleaks via CLI. Dispatch for any security review, dependency audit, or pre-deployment check.
### Performance auditing
Performance Engineer — has Lighthouse MCP for Core Web Vitals, Chrome for JS performance API, k6 for load testing. Dispatch for frontend or backend performance investigation.
### Browser testing
Frontend QA, Backend QA — have Playwright MCP for structured a11y snapshots and cross-browser testing. Dispatch for test plan design and integration verification.
### Container management
DevOps Engineer — has Docker MCP for container health, logs, and compose management. Dispatch for infrastructure issues.
Phase 3: Verification (Task 14)
Depends on all Phase 1 and Phase 2 tasks completing.
Task 14: End-to-end verification
- Step 1: Verify settings.local.json is valid
Run: `python3 -c "import json; json.load(open('.claude/settings.local.json')); print('Valid JSON')"`
Expected: Valid JSON
- Step 2: Verify all agent frontmatter is valid YAML
Run: `for f in .claude/agents/*.md; do echo -n "$(basename $f): "; python3 -c "import yaml; yaml.safe_load(open('$f').read().split('---')[1]); print('OK')" 2>&1; done`
Expected: All 16 agents print OK.
- Step 3: Verify .mcp.json is valid
Run: `python3 -c "import json; json.load(open('.mcp.json')); print('Valid JSON')"`
Expected: Valid JSON
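Beyond syntactic validity, a quick structural check catches a server entry missing its `command` or `args`. A minimal sketch, assuming the `mcpServers` shape from Task 1 (the inline sample is abbreviated; in practice load the real file with `json.load(open(".mcp.json"))`):

```python
import json

# Abbreviated sample of the .mcp.json created in Task 1.
config = json.loads("""
{
  "mcpServers": {
    "postgres": {"command": "uvx", "args": ["postgres-mcp", "--access-mode=unrestricted"]},
    "redis": {"command": "uvx", "args": ["--from", "redis-mcp-server@latest", "redis-mcp-server"]}
  }
}
""")

# Every server entry needs a launch command and an args list.
problems = [
    name
    for name, server in config["mcpServers"].items()
    if not server.get("command") or not isinstance(server.get("args"), list)
]
print("OK" if not problems else f"Missing command/args: {problems}")  # → OK
```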
- Step 4: Verify rules files exist
Run: `ls -la .claude/rules/`
Expected: 6 files — backend-modules.md, frontend-fsd.md, localization.md, testing.md, security.md, remotion-service.md
- Step 5: Verify Python tools are available
Run: `cd cofee_backend && uv run --group tools bandit --version && uv run --group tools radon --version && uv run --group tools semgrep --version`
Expected: All print version numbers.
- Step 6: Verify brew tools are available
Run: `gitleaks version && k6 version && hyperfine --version`
Expected: All print version numbers.
- Step 7: Count Chrome tools in agent frontmatter
Run: `grep -o "mcp__claude-in-chrome" .claude/agents/ui-ux-designer.md | wc -l`
Expected: At least 18 (the 18 Chrome tools on the `tools:` line). Note: `grep -c` counts matching lines, not matches — since all tools sit on one line it would report 1, so use `grep -o | wc -l` to count each occurrence.
- Step 8: Count Playwright tools in agent frontmatter
Run: `grep -o "mcp__playwright" .claude/agents/frontend-qa.md | wc -l`
Expected: At least 22 (the 22 Playwright tools on the `tools:` line).
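These counts can also be done in Python, which sidesteps grep's line-versus-match distinction entirely (`grep -c` counts matching lines, not matches). A sketch against a shortened, hypothetical `tools:` line — real agent files list far more tools:

```python
import re

# Hypothetical, shortened frontmatter line for illustration only.
tools_line = (
    "tools: Read, Write, mcp__playwright__browser_navigate, "
    "mcp__playwright__browser_click, mcp__playwright__browser_snapshot"
)

# Count occurrences of the prefix, not lines containing it.
count = len(re.findall(r"mcp__playwright__", tools_line))
print(count)  # → 3
```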
Task 15: Discover MCP tool names and update agent frontmatter
Depends on Task 1 (.mcp.json created) and Task 14 (servers verified).
- Step 1: Start a new Claude Code session to trigger MCP server discovery
The .mcp.json file will cause Claude Code to start all 4 MCP servers on next session launch. After launching, the MCP tool names become visible in the deferred tools list.
- Step 2: Discover Postgres MCP tool names
Run: Use ToolSearch with query "postgres" to find all mcp__postgres__* tools. Record the tool names.
- Step 3: Discover Redis MCP tool names
Run: Use ToolSearch with query "redis" to find all mcp__redis__* tools. Record the tool names.
- Step 4: Discover Lighthouse MCP tool names
Run: Use ToolSearch with query "lighthouse" to find all mcp__lighthouse__* tools. Record the tool names.
- Step 5: Discover Docker MCP tool names
Run: Use ToolSearch with query "docker" to find all mcp__docker__* tools. Record the tool names.
- Step 6: Update agent frontmatter with discovered tool names
For each agent with a <!-- TODO: Add [X] MCP tool names --> comment:
- Remove the TODO comment
- Append the discovered MCP tool names to the agent's `tools:` line
Agents to update:
- design-auditor.md — add Lighthouse tools
- debug-specialist.md — add Redis tools
- performance-engineer.md — add Lighthouse + Postgres tools
- db-architect.md — add Postgres tools
- backend-architect.md — add Redis + Postgres tools
- devops-engineer.md — add Docker tools
- Step 7: Add MCP tool permissions to settings.local.json
Add all discovered MCP tool names to permissions.allow in settings.local.json. Format: "mcp__postgres__<tool_name>" etc.
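Step 7 is mechanical enough to script. A sketch, assuming settings.local.json holds a `permissions.allow` list of strings (the tool names below are placeholders, not the discovered names from Steps 2-5):

```python
import json

def add_mcp_permissions(settings: dict, server: str, tool_names: list[str]) -> dict:
    """Append mcp__<server>__<tool> entries to permissions.allow, skipping duplicates."""
    allow = settings.setdefault("permissions", {}).setdefault("allow", [])
    for tool in tool_names:
        entry = f"mcp__{server}__{tool}"
        if entry not in allow:
            allow.append(entry)
    return settings

# Placeholder tool names; substitute the names recorded in Steps 2-5.
settings = {"permissions": {"allow": ["Bash(git:*)"]}}
settings = add_mcp_permissions(settings, "postgres", ["list_schemas", "explain_query"])
print(json.dumps(settings, indent=2))
```

In practice, read settings.local.json, call this once per server, and write the result back.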
Task 16: Final commit
- Step 1: Commit all changes
```bash
git add .mcp.json .claude/rules/testing.md .claude/rules/security.md .claude/rules/remotion-service.md .claude/settings.local.json .claude/agents/*.md cofee_backend/pyproject.toml cofee_backend/uv.lock
git commit -m "feat: upgrade agent team with browser, MCP, CLI tools, rules, and hooks

- Add Chrome browser access to 6 visual agents (18 tools each)
- Add Playwright access to 2 testing agents (22 tools each)
- Add 4 MCP servers: Postgres Pro, Redis, Lighthouse, Docker
- Add Python tools group (semgrep, bandit, pip-audit, schemathesis, radon)
- Add 3 new rules: testing.md, security.md, remotion-service.md
- Add PreCompact + Notification hooks, upgrade ruff hook
- Add Context7 library references to all domain agents
- Add CLI tool instructions per agent (curl, ffprobe, k6, etc.)
- Add Bash permissions for all new CLI tools"
```