# CLAUDE.md
This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.
## Monorepo Structure

Three independent projects:

- `cofee_frontend/` — Next.js 16 + TypeScript frontend (FSD architecture)
- `cofee_backend/` — FastAPI + Python backend (layered module pattern)
- `remotion_service/` — ElysiaJS + Remotion video captioning microservice
Each subproject has its own CLAUDE.md and AGENTS.md — read the relevant one before starting work.
## Cross-Service Data Flow

```
Frontend (Next.js :3000) → Backend API (FastAPI :8000) → Remotion Service (Elysia :3001)
                                     ↕                            ↕
                             PostgreSQL :5332               S3/MinIO :9000
                             Redis :6379 (pub/sub + task queue)
```
- Frontend calls Backend API via typed `openapi-fetch` client with JWT auth
- Backend submits background jobs via Dramatiq (Redis broker) — e.g. transcription, silence detection
- Backend sends video + transcription to Remotion Service for caption rendering
- Remotion renders captions onto video, uploads result to S3, returns S3 path
- Backend notifies Frontend of job completion via WebSocket (Redis pub/sub)
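The completion event in the last step can be pictured as a small typed payload. This is only an illustrative sketch — the field names, event kinds, and builder below are hypothetical, not taken from the actual codebase:

```typescript
// Hypothetical shape of a job-completion event published over Redis
// pub/sub and relayed to the Frontend via WebSocket. All names here
// are illustrative assumptions.
type JobCompletedEvent = {
  jobId: string;
  kind: "transcription" | "silence_detection" | "caption_render";
  status: "completed" | "failed";
  resultPath?: string; // S3 path of the rendered output, if any
};

function buildJobEvent(
  jobId: string,
  kind: JobCompletedEvent["kind"],
  resultPath?: string,
): JobCompletedEvent {
  return {
    jobId,
    kind,
    // A missing result path is treated as failure in this sketch.
    status: resultPath ? "completed" : "failed",
    resultPath,
  };
}
```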
## Frontend Commands

```bash
bun dev                   # Dev server (localhost:3000)
bun run build             # Production build
bunx tsc --noEmit         # Type-check (lint scripts are broken)
bun run gc <layer> <Name> # Generate FSD component
bun run gicons            # Convert raw SVGs to React icon components
bun run gen:api-types     # Regenerate API types from OpenAPI schema (needs backend running)
bun run test:e2e          # Playwright E2E tests
```
## Backend Commands

```bash
uv sync                                          # Install dependencies
uv run uvicorn cpv3.main:app --reload            # Dev server (localhost:8000)
uv run pytest                                    # Run all tests
uv run pytest tests/integration/<file>.py        # Single test file
uv run pytest -k "test_name"                     # Single test by name
uv run dramatiq cpv3.modules.tasks.service       # Start background worker
uv run alembic revision --autogenerate -m "msg"  # Create migration
uv run alembic upgrade head                      # Apply migrations
uv run ruff check cpv3/                          # Lint
uv run ruff format cpv3/                         # Auto-format
```
## Remotion Service Commands

```bash
cd remotion_service
bun install       # Install dependencies
bun run server    # Start API server (localhost:3001)
bun run dev       # Remotion Studio for visual debugging
bunx tsc --noEmit # Type-check
```
## Frontend Architecture (FSD)

Strict unidirectional imports: `pages → widgets → features → entities → shared`. No cross-slice imports within the same layer. Enforced by `eslint-plugin-boundaries`.

Features are module-aware — grouped by domain (`features/profile/`, `features/project/`), not flat.

Path aliases: `@app/*`, `@pages/*`, `@widgets/*`, `@features/*`, `@entities/*`, `@shared/*` map to `src/<layer>/*`.
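The unidirectional import rule can be sketched as a standalone layer-order check. In the repo this is enforced by `eslint-plugin-boundaries`; the function below is only an illustration of the ordering, not the actual lint configuration:

```typescript
// FSD layers from lowest to highest. A layer may only import from
// layers strictly below it; same-layer (cross-slice) imports are
// forbidden, hence the strict inequality.
const LAYERS = ["shared", "entities", "features", "widgets", "pages", "app"] as const;
type Layer = (typeof LAYERS)[number];

function canImport(from: Layer, to: Layer): boolean {
  return LAYERS.indexOf(to) < LAYERS.indexOf(from);
}
```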
See cofee_frontend/CLAUDE.md for full details on components, API client, styling, and gotchas.
## Backend Architecture

Layered module pattern. Each module has exactly: `__init__.py`, `models.py`, `schemas.py`, `repository.py`, `service.py`, `router.py`. No extra files, no subdirectories within modules. When in doubt, put logic in `service.py`.

11 modules: `users`, `projects`, `media`, `files`, `transcription`, `captions`, `jobs`, `notifications`, `tasks`, `webhooks`, `system`.

Flow: Router → Service → Repository → Database (async SQLAlchemy + PostgreSQL).
See cofee_backend/CLAUDE.md for full details on patterns, commands, and gotchas.
## Remotion Service Architecture

Standalone video captioning microservice. Two layers sharing types:

- Server (`server/`): ElysiaJS API, single `POST /api/render` endpoint — receives S3 video path + transcription, spawns Remotion CLI render, uploads captioned video to S3.
- Composition (`src/`): Remotion React components for deterministic frame rendering. All animations must use Remotion's `interpolate()`/`spring()`, never CSS transitions or Framer Motion.
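For intuition, the linear mapping that Remotion's `interpolate()` performs (in its clamped form) looks roughly like the standalone sketch below. Real compositions should import `interpolate` from `remotion` instead of reimplementing it:

```typescript
// Minimal reimplementation sketch of the clamped linear mapping behind
// Remotion's interpolate(): maps `frame` from the input range onto the
// output range, clamping outside the range.
function interpolateClamped(
  frame: number,
  [inStart, inEnd]: [number, number],
  [outStart, outEnd]: [number, number],
): number {
  const t = Math.min(Math.max((frame - inStart) / (inEnd - inStart), 0), 1);
  return outStart + t * (outEnd - outStart);
}

// e.g. a caption fading in over the first 30 frames:
// const opacity = interpolateClamped(frame, [0, 30], [0, 1]);
```

Because the result depends only on the frame number, re-rendering the same frame always yields the same pixels — which is why CSS transitions and Framer Motion (wall-clock driven) are banned here.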
See remotion_service/CLAUDE.md for full details.
## Docker Services

```
postgres → localhost:5332       minio    → localhost:9000 (console: 9001)
redis    → localhost:6379       api      → localhost:8000 (OpenAPI at /api/schema/)
worker   → Dramatiq bg jobs     remotion → localhost:3001
```

```bash
cd cofee_backend && docker-compose up     # DB, Redis, MinIO, API, Worker
cd remotion_service && docker-compose up  # Remotion service (dev)
```
## Localization
All user-facing UI text must be in Russian. The only exception is the brand name "Coffee Project" / "Cofee Project" — it stays in English.
## Code Style (Both Projects)

- Simple over clever, early returns over deep nesting
- Max ~30 lines per function — extract helpers if longer
- Named constants instead of magic values
- Descriptive names: `getUserById`, not `getData`
- Store user-facing error messages in named constants (`ERROR_` prefix), not inline strings
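A minimal sketch of the error-constant and early-return conventions together (constant names and the lookup function are illustrative, not from the codebase; messages are in Russian per the localization rule):

```typescript
// User-facing messages live in named ERROR_* constants, never inline.
const ERROR_PROJECT_NOT_FOUND = "Проект не найден";

function getProjectTitle(projects: Map<string, string>, id: string): string {
  const title = projects.get(id);
  // Early return (guard clause) instead of nesting the happy path.
  if (!title) throw new Error(ERROR_PROJECT_NOT_FOUND);
  return title;
}
```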
## Agent Team

This project has a team of 16 specialist agents (15 specialists + 1 Orchestrator).

Agent files: `.claude/agents/`. Shared protocol: `.claude/agents-shared/team-protocol.md`.
## Developer Team Consultation

For ANY non-trivial task, you MUST consult with the developer team:

- Announce: "Consulting with the developer team to [task summary]"
- Dispatch the `orchestrator` agent with your analysis — it selects the right specialists
- Built-in agents (code-reviewer, code-explorer, etc.) may be used alongside the team, but the project's specialist agents must always be consulted
- Credit specialists in your final response — state which agents contributed
## When to Use the Orchestrator

For ANY non-trivial task (feature, bug fix, audit, optimization, research, infrastructure, review, documentation), you MUST:

1. Think about the task yourself first — understand scope, affected areas, risks
2. Dispatch the `orchestrator` agent with your analysis as context
3. Follow its dispatch plan exactly

Skip the Orchestrator ONLY for trivial tasks: rename a variable, fix a typo, answer a quick factual question.
## Frontend-Last Phasing

When a plan includes frontend agents (Frontend Architect, Frontend QA) AND backend/design agents, always run backend/design first:

- Phase 1: Backend Architect, DB Architect, UI/UX Designer, Design Auditor
- Phase 2: Frontend Architect, Frontend QA (with Phase 1 outputs as context)
Frontend depends on API contracts from backend and specs from design. Running them later prevents rework. If only frontend agents are needed, they run in Phase 1 normally.
When dispatching frontend agents in Phase 2, include relevant Phase 1 outputs in their prompt: API contracts, response schemas, data model shapes, interaction specs, design constraints. Summarize each to key decisions (~200 words max), not raw output.
## Dispatch Loop

After receiving the Orchestrator's plan:

1. Dispatch all Phase 1 agents (in parallel when the plan says parallel). When dispatching, include any specialist memory context the Orchestrator specified in "SPECIALIST MEMORY TO INCLUDE" and any relevant past decisions from "RELEVANT PAST DECISIONS".
2. Collect results from all Phase 1 agents.
3. For each agent result, check for "## Handoff Requests" sections.
4. If handoffs exist:
   a. Dispatch the requested agents with the context provided in the handoff
   b. Collect handoff results
   c. Re-invoke the original agent with continuation context (see Continuation Format)
   d. Check the continuation result for NEW handoff requests
5. Track chain history — never re-invoke an agent already in the current chain.
6. Max chain depth: 3. If exceeded, stop and present partial results to the user.
7. After all chains resolve, check if the Orchestrator specified Phase 2 agents that depend on Phase 1 results — dispatch them with the results.
8. Repeat until all phases complete.
9. Synthesize all agent outputs into a coherent response.
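The chain-history and depth rules (steps 5–6) can be sketched as a small guard. The result shape and agent names below are illustrative, not part of the actual protocol files:

```typescript
// Sketch of the handoff chain guard: given one agent's result and the
// chain of agents already invoked, return which handoffs to dispatch.
const MAX_CHAIN_DEPTH = 3;

type AgentResult = { agent: string; handoffs: string[] };

function nextHandoffs(result: AgentResult, chain: string[]): string[] {
  // Depth exceeded: stop and present partial results instead.
  if (chain.length >= MAX_CHAIN_DEPTH) return [];
  // Never re-invoke an agent already in the current chain.
  return result.handoffs.filter((agent) => !chain.includes(agent));
}
```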
## Continuation Format

When re-invoking an agent after their handoff is fulfilled:

> "Continue your work on:
> Your previous analysis (summarized to key points): \<summarize their Completed Work section — max 500 words\>
> Handoff results: \<for each handoff, include the responding agent's name and their full output\>
> Resume your Continuation Plan."
## Context Triggers
After each agent returns, check their output against the Orchestrator's "CONTEXT TRIGGERS TO WATCH" list. If a trigger fires, dispatch the specified agent with the relevant finding as context.
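A trigger check of this kind could be as simple as the sketch below. The trigger format (a watched substring plus the agent to dispatch) is an assumption for illustration; the real trigger list comes from the Orchestrator's plan:

```typescript
// Hypothetical context trigger: if `pattern` appears in an agent's
// output, dispatch the named agent with the finding as context.
type ContextTrigger = { pattern: string; dispatch: string };

function firedTriggers(output: string, triggers: ContextTrigger[]): string[] {
  return triggers
    .filter((t) => output.includes(t.pattern))
    .map((t) => t.dispatch);
}
```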
## Conflict Handling

If two agents' outputs contradict each other:

- If one has clear domain authority → use their recommendation
- If ambiguous → present both to the user with your analysis
## Compact Instructions

When compacting, always preserve:

- List of all modified files and their purposes
- Test command results (pass/fail)
- Architecture decisions made in this session
- Error messages and their resolutions
- Which subproject (frontend/backend/remotion) is being worked on