mirror of
https://github.com/affaan-m/everything-claude-code.git
synced 2026-03-30 13:43:26 +08:00
docs: enhance 5 thin commands and add Rust API example
Commands enhanced with multi-language support, error recovery strategies, and structured step-by-step workflows:

- build-fix: build system detection table, fix loop, recovery strategies
- test-coverage: framework detection, test generation rules, before/after report
- refactor-clean: safety tiers (SAFE/CAUTION/DANGER), multi-language tools
- update-codemaps: codemap format spec, diff detection, metadata headers
- update-docs: source-of-truth mapping, staleness checks, generated markers

New example:

- rust-api-CLAUDE.md: Axum + SQLx + PostgreSQL with layered architecture, thiserror patterns, compile-time SQL verification, integration test examples
# Build and Fix

Incrementally fix build and type errors with minimal, safe changes.

## Step 1: Detect Build System

Identify the project's build tool and run the build:

| Indicator | Build Command |
|-----------|---------------|
| `package.json` with `build` script | `npm run build` or `pnpm build` |
| `tsconfig.json` (TypeScript only) | `npx tsc --noEmit` |
| `Cargo.toml` | `cargo build 2>&1` |
| `pom.xml` | `mvn compile` |
| `build.gradle` | `./gradlew compileJava` |
| `go.mod` | `go build ./...` |
| `pyproject.toml` | `python -m py_compile` or `mypy .` |
## Step 2: Parse and Group Errors

1. Run the build command and capture stderr
2. Group errors by file path
3. Sort by dependency order (fix imports/types before logic errors)
4. Count total errors for progress tracking
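The grouping in the steps above can be sketched in code. A minimal Python sketch, assuming tsc-style diagnostic lines (`file(line,col): error TScode: message`); other compilers need their own regex:

```python
import re
from collections import defaultdict

# Matches tsc-style diagnostics, e.g. "src/app.ts(10,5): error TS2304: Cannot find name 'foo'."
ERROR_RE = re.compile(
    r"^(?P<file>[^(]+)\((?P<line>\d+),\d+\): error (?P<code>TS\d+): (?P<msg>.*)$"
)

def group_errors(build_output: str) -> dict[str, list[dict]]:
    """Group build errors by file path, preserving line order within each file."""
    groups: dict[str, list[dict]] = defaultdict(list)
    for raw in build_output.splitlines():
        m = ERROR_RE.match(raw.strip())
        if m:
            groups[m["file"]].append(
                {"line": int(m["line"]), "code": m["code"], "message": m["msg"]}
            )
    return dict(groups)

output = """\
src/app.ts(10,5): error TS2304: Cannot find name 'foo'.
src/util.ts(3,1): error TS2322: Type 'string' is not assignable to type 'number'.
src/app.ts(22,9): error TS2304: Cannot find name 'bar'.
"""
grouped = group_errors(output)
print(len(grouped))                  # → 2 files with errors
print(len(grouped["src/app.ts"]))    # → 2 errors in that file
```

Sorting by dependency order would then walk each group, putting import/type errors first.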
## Step 3: Fix Loop (One Error at a Time)

For each error:

1. **Read the file** — Use Read tool to see error context (10 lines around the error)
2. **Diagnose** — Identify root cause (missing import, wrong type, syntax error)
3. **Fix minimally** — Use Edit tool for the smallest change that resolves the error
4. **Re-run build** — Verify the error is gone and no new errors introduced
5. **Move to next** — Continue with remaining errors
## Step 4: Guardrails

Stop and ask the user if:

- A fix introduces **more errors than it resolves**
- The **same error persists after 3 attempts** (likely a deeper issue)
- The fix requires **architectural changes** (not just a build fix)
- Build errors stem from **missing dependencies** (need `npm install`, `cargo add`, etc.)

## Step 5: Summary

Show results:

- Errors fixed (with file paths)
- Errors remaining (if any)
- New errors introduced (should be zero)
- Suggested next steps for unresolved issues
## Recovery Strategies

| Situation | Action |
|-----------|--------|
| Missing module/import | Check if package is installed; suggest install command |
| Type mismatch | Read both type definitions; fix the narrower type |
| Circular dependency | Identify cycle with import graph; suggest extraction |
| Version conflict | Check `package.json` / `Cargo.toml` for version constraints |
| Build tool misconfiguration | Read config file; compare with working defaults |

Fix one error at a time for safety. Prefer minimal diffs over refactoring.
# Refactor Clean

Safely identify and remove dead code with test verification at every step.

## Step 1: Detect Dead Code

Run analysis tools based on project type:

| Tool | What It Finds | Command |
|------|--------------|---------|
| knip | Unused exports, files, dependencies | `npx knip` |
| depcheck | Unused npm dependencies | `npx depcheck` |
| ts-prune | Unused TypeScript exports | `npx ts-prune` |
| vulture | Unused Python code | `vulture src/` |
| deadcode | Unused Go code | `deadcode ./...` |
| cargo-udeps | Unused Rust dependencies | `cargo +nightly udeps` |
If no tool is available, use Grep to find exports with zero imports:

```
# Find exports, then check if they're imported anywhere
grep -rhoE "export (function|const|class) [A-Za-z0-9_]+" src/ | awk '{print $NF}' | sort -u
```
## Step 2: Categorize Findings

Sort findings into safety tiers:

| Tier | Examples | Action |
|------|----------|--------|
| **SAFE** | Unused utilities, test helpers, internal functions | Delete with confidence |
| **CAUTION** | Components, API routes, middleware | Verify no dynamic imports or external consumers |
| **DANGER** | Config files, entry points, type definitions | Investigate before touching |
## Step 3: Safe Deletion Loop

For each SAFE item:

1. **Run full test suite** — Establish baseline (all green)
2. **Delete the dead code** — Use Edit tool for surgical removal
3. **Re-run test suite** — Verify nothing broke
4. **If tests fail** — Immediately revert with `git checkout -- <file>` and skip this item
5. **If tests pass** — Move to next item
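The loop above can be sketched as a small driver. A minimal Python sketch; `run_tests` and `apply_deletion` are injected callables (hypothetical names, not part of any tool here), and the revert path assumes a git working tree:

```python
import subprocess

def try_delete(file: str, apply_deletion, run_tests) -> bool:
    """Apply one deletion; keep it only if the test suite still passes.

    run_tests() -> bool lets callers plug in their project's suite, e.g.
    lambda: subprocess.run(["npm", "test"]).returncode == 0
    """
    if not run_tests():              # baseline must be green before touching anything
        return False
    apply_deletion(file)             # surgical removal of the dead code
    if run_tests():
        return True                  # deletion is safe, keep it
    # tests broke: immediately revert this one file and skip the item
    subprocess.run(["git", "checkout", "--", file], check=True)
    return False

kept = try_delete("dead.py", lambda f: None, lambda: True)
print(kept)  # → True
```

Processing one file per call is what keeps rollback atomic.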
## Step 4: Handle CAUTION Items

Before deleting CAUTION items:

- Search for dynamic imports: `import()`, `require()`, `__import__`
- Search for string references: route names, component names in configs
- Check if exported from a public package API
- Verify no external consumers (check dependents if published)
## Step 5: Consolidate Duplicates

After removing dead code, look for:

- Near-duplicate functions (>80% similar) — merge into one
- Redundant type definitions — consolidate
- Wrapper functions that add no value — inline them
- Re-exports that serve no purpose — remove indirection
## Step 6: Summary

Report results:

```
Dead Code Cleanup
──────────────────────────────
Deleted:  12 unused functions
           3 unused files
           5 unused dependencies
Skipped:   2 items (tests failed)
Saved:    ~450 lines removed
──────────────────────────────
All tests passing ✅
```

## Rules

- **Never delete without running tests first**
- **One deletion at a time** — Atomic changes make rollback easy
- **Skip if uncertain** — Better to keep dead code than break production
- **Don't refactor while cleaning** — Separate concerns (clean first, refactor later)
# Test Coverage

Analyze test coverage, identify gaps, and generate missing tests to reach 80%+ coverage.

## Step 1: Detect Test Framework

| Indicator | Coverage Command |
|-----------|-----------------|
| `jest.config.*` or `package.json` jest | `npx jest --coverage --coverageReporters=json-summary` |
| `vitest.config.*` | `npx vitest run --coverage` |
| `pytest.ini` / `pyproject.toml` pytest | `pytest --cov=src --cov-report=json` |
| `Cargo.toml` | `cargo llvm-cov --json` |
| `pom.xml` with JaCoCo | `mvn test jacoco:report` |
| `go.mod` | `go test -coverprofile=coverage.out ./...` |
## Step 2: Analyze Coverage Report

1. Run the coverage command
2. Parse the output (JSON summary or terminal output)
3. List files **below 80% coverage**, sorted worst-first
4. For each under-covered file, identify:
   - Untested functions or methods
   - Missing branch coverage (if/else, switch, error paths)
   - Dead code that inflates the denominator
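Listing the under-covered files from a Jest `json-summary` report can be sketched as follows; the `under_covered` helper is illustrative, and other frameworks emit different JSON shapes:

```python
import json

def under_covered(summary_json: str, threshold: float = 80.0) -> list[tuple[str, float]]:
    """Return (file, line-coverage %) pairs below the threshold, worst first.

    Assumes Jest's json-summary format: a top-level "total" entry plus
    one entry per file, each carrying a "lines": {"pct": ...} field.
    """
    report = json.loads(summary_json)
    gaps = [
        (path, entry["lines"]["pct"])
        for path, entry in report.items()
        if path != "total" and entry["lines"]["pct"] < threshold
    ]
    return sorted(gaps, key=lambda item: item[1])  # worst coverage first

sample = json.dumps({
    "total": {"lines": {"pct": 67.0}},
    "src/services/auth.ts": {"lines": {"pct": 45.0}},
    "src/utils/validation.ts": {"lines": {"pct": 32.0}},
    "src/index.ts": {"lines": {"pct": 95.0}},
})
print(under_covered(sample))
# → [('src/utils/validation.ts', 32.0), ('src/services/auth.ts', 45.0)]
```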
## Step 3: Generate Missing Tests

For each under-covered file, generate tests following this priority:

1. **Happy path** — Core functionality with valid inputs
2. **Error handling** — Invalid inputs, missing data, network failures
3. **Edge cases** — Empty arrays, null/undefined, boundary values (0, -1, MAX_INT)
4. **Branch coverage** — Each if/else, switch case, ternary
### Test Generation Rules

- Place tests adjacent to source: `foo.ts` → `foo.test.ts` (or project convention)
- Use existing test patterns from the project (import style, assertion library, mocking approach)
- Mock external dependencies (database, APIs, file system)
- Each test should be independent — no shared mutable state between tests
- Name tests descriptively: `test_create_user_with_duplicate_email_returns_409`
## Step 4: Verify

1. Run the full test suite — all tests must pass
2. Re-run coverage — verify improvement
3. If still below 80%, repeat Step 3 for remaining gaps

## Step 5: Report

Show before/after comparison:

```
Coverage Report
──────────────────────────────
File                      Before   After
src/services/auth.ts        45%     88%
src/utils/validation.ts     32%     82%
──────────────────────────────
Overall:                    67%     84% ✅
```

## Focus Areas

- Functions with complex branching (high cyclomatic complexity)
- Error handlers and catch blocks
- Utility functions used across the codebase
- API endpoint handlers (request → response flow)
- Edge cases: null, undefined, empty string, empty array, zero, negative numbers
# Update Codemaps

Analyze the codebase structure and generate token-lean architecture documentation.

## Step 1: Scan Project Structure

1. Identify the project type (monorepo, single app, library, microservice)
2. Find all source directories (src/, lib/, app/, packages/)
3. Map entry points (main.ts, index.ts, app.py, main.go, etc.)
## Step 2: Generate Codemaps

Create or update codemaps in `docs/CODEMAPS/` (or `.reports/codemaps/`):

| File | Contents |
|------|----------|
| `architecture.md` | High-level system diagram, service boundaries, data flow |
| `backend.md` | API routes, middleware chain, service → repository mapping |
| `frontend.md` | Page tree, component hierarchy, state management flow |
| `data.md` | Database tables, relationships, migration history |
| `dependencies.md` | External services, third-party integrations, shared libraries |

### Codemap Format

Each codemap should be token-lean — optimized for AI context consumption:

```markdown
# Backend Architecture

## Routes
POST /api/users     → UserController.create → UserService.create → UserRepo.insert
GET  /api/users/:id → UserController.get    → UserService.findById → UserRepo.findById

## Key Files
src/services/user.ts (business logic, 120 lines)
src/repos/user.ts    (database access, 80 lines)

## Dependencies
- PostgreSQL (primary data store)
- Redis (session cache, rate limiting)
- Stripe (payment processing)
```
## Step 3: Diff Detection

1. If previous codemaps exist, calculate the diff percentage
2. If changes > 30%, show the diff and request user approval before overwriting
3. If changes <= 30%, update in place
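One way to compute the diff percentage in step 1 is a line-based similarity ratio. A minimal Python sketch using `difflib`; the exact metric is an assumption, since the command does not specify one:

```python
import difflib

def change_ratio(old_text: str, new_text: str) -> float:
    """Percentage of change between two codemap versions.

    Uses 1 - SequenceMatcher ratio over the line lists, expressed as a
    percentage, so the 30% threshold becomes a direct comparison.
    """
    matcher = difflib.SequenceMatcher(
        None, old_text.splitlines(), new_text.splitlines()
    )
    return round((1.0 - matcher.ratio()) * 100, 1)

old = "## Routes\nPOST /api/users\nGET /api/users/:id\n"
new = "## Routes\nPOST /api/users\nGET /api/users/:id\nDELETE /api/users/:id\n"
pct = change_ratio(old, new)
print(pct, "needs approval" if pct > 30 else "update in place")
```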
## Step 4: Add Metadata

Add a freshness header to each codemap:

```markdown
<!-- Generated: 2026-02-11 | Files scanned: 142 | Token estimate: ~800 -->
```
## Step 5: Save Analysis Report

Write a summary to `.reports/codemap-diff.txt`:

- Files added/removed/modified since last scan
- New dependencies detected
- Architecture changes (new routes, new services, etc.)
- Staleness warnings for docs not updated in 90+ days

## Tips

- Focus on **high-level structure**, not implementation details
- Prefer **file paths and function signatures** over full code blocks
- Keep each codemap under **1000 tokens** for efficient context loading
- Use ASCII diagrams for data flow instead of verbose descriptions
- Run after major feature additions or refactoring sessions
# Update Documentation

Sync documentation with the codebase, generating from source-of-truth files.

## Step 1: Identify Sources of Truth

| Source | Generates |
|--------|-----------|
| `package.json` scripts | Available commands reference |
| `.env.example` | Environment variable documentation |
| `openapi.yaml` / route files | API endpoint reference |
| Source code exports | Public API documentation |
| `Dockerfile` / `docker-compose.yml` | Infrastructure setup docs |
## Step 2: Generate Script Reference

1. Read `package.json` (or `Makefile`, `Cargo.toml`, `pyproject.toml`)
2. Extract all scripts/commands with their descriptions
3. Generate a reference table:

```markdown
| Command | Description |
|---------|-------------|
| `npm run dev` | Start development server with hot reload |
| `npm run build` | Production build with type checking |
| `npm test` | Run test suite with coverage |
```
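Extracting that table from `package.json` can be sketched as below. Note that `package.json` has no standard description field for scripts, so this illustrative `scripts_table` helper echoes the command itself; a real run would merge in descriptions from comments or a docs mapping:

```python
import json

def scripts_table(package_json: str) -> str:
    """Render the `scripts` section of a package.json as a Markdown table."""
    scripts = json.loads(package_json).get("scripts", {})
    rows = [f"| `npm run {name}` | `{cmd}` |" for name, cmd in scripts.items()]
    return "\n".join(
        ["| Command | Description |", "|---------|-------------|", *rows]
    )

pkg = json.dumps({"scripts": {"dev": "vite", "build": "tsc && vite build"}})
print(scripts_table(pkg))
```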
## Step 3: Generate Environment Documentation

1. Read `.env.example` (or `.env.template`, `.env.sample`)
2. Extract all variables with their purposes
3. Categorize as required vs optional
4. Document expected format and valid values

```markdown
| Variable | Required | Description | Example |
|----------|----------|-------------|---------|
| `DATABASE_URL` | Yes | PostgreSQL connection string | `postgres://user:pass@host:5432/db` |
| `LOG_LEVEL` | No | Logging verbosity (default: info) | `debug`, `info`, `warn`, `error` |
```
## Step 4: Update Contributing Guide

Generate or update `docs/CONTRIBUTING.md` with:

- Development environment setup (prerequisites, install steps)
- Available scripts and their purposes
- Testing procedures (how to run, how to write new tests)
- Code style enforcement (linter, formatter, pre-commit hooks)
- PR submission checklist

## Step 5: Update Runbook

Generate or update `docs/RUNBOOK.md` with:

- Deployment procedures (step-by-step)
- Health check endpoints and monitoring
- Common issues and their fixes
- Rollback procedures
- Alerting and escalation paths
## Step 6: Staleness Check

1. Find documentation files not modified in 90+ days
2. Cross-reference with recent source code changes
3. Flag potentially outdated docs for manual review
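The 90-day check can be sketched with file mtimes. A minimal Python sketch; mtime is only a proxy for last edit, since a fresh `git clone` resets it (`git log -1 --format=%ct <file>` is more reliable there):

```python
import time
from pathlib import Path

def stale_docs(doc_dir: str, max_age_days: int = 90) -> list[str]:
    """List Markdown files whose modification time is older than max_age_days."""
    cutoff = time.time() - max_age_days * 86400
    return sorted(
        str(p) for p in Path(doc_dir).rglob("*.md") if p.stat().st_mtime < cutoff
    )
```

Cross-referencing then means checking whether the source files a doc describes changed more recently than the doc itself.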
## Step 7: Show Summary

```
Documentation Update
──────────────────────────────
Updated: docs/CONTRIBUTING.md (scripts table)
Updated: docs/ENV.md (3 new variables)
Flagged: docs/DEPLOY.md (142 days stale)
Skipped: docs/API.md (no changes detected)
──────────────────────────────
```

## Rules

- **Single source of truth**: Always generate from code, never manually edit generated sections
- **Preserve manual sections**: Only update generated sections; leave hand-written prose intact
- **Mark generated content**: Use `<!-- AUTO-GENERATED -->` markers around generated sections
- **Don't create docs unprompted**: Only create new doc files if the command explicitly requests it
285 examples/rust-api-CLAUDE.md (new file)
# Rust API Service — Project CLAUDE.md

> Real-world example for a Rust API service with Axum, PostgreSQL, and Docker.
> Copy this to your project root and customize for your service.

## Project Overview

**Stack:** Rust 1.78+, Axum (web framework), SQLx (async database), PostgreSQL, Tokio (async runtime), Docker

**Architecture:** Layered architecture with handler → service → repository separation. Axum for HTTP, SQLx for type-checked SQL at compile time, Tower middleware for cross-cutting concerns.
## Critical Rules

### Rust Conventions

- Use `thiserror` for library errors, `anyhow` only in binary crates or tests
- No `.unwrap()` or `.expect()` in production code — propagate errors with `?`
- Prefer `&str` over `String` in function parameters; return `String` when ownership transfers
- Use `clippy` with `#![deny(clippy::all, clippy::pedantic)]` — fix all warnings
- Derive `Debug` on all public types; derive `Clone`, `PartialEq` only when needed
- No `unsafe` blocks unless justified with a `// SAFETY:` comment
### Database

- All queries use SQLx `query!` or `query_as!` macros — compile-time verified against the schema
- Migrations in `migrations/` using `sqlx migrate` — never alter the database directly
- Use `sqlx::Pool<Postgres>` as shared state — never create connections per request
- All queries use parameterized placeholders (`$1`, `$2`) — never string formatting

```rust
// BAD: String interpolation (SQL injection risk)
let q = format!("SELECT * FROM users WHERE id = '{}'", id);

// GOOD: Parameterized query, compile-time checked
let user = sqlx::query_as!(User, "SELECT * FROM users WHERE id = $1", id)
    .fetch_optional(&pool)
    .await?;
```
### Error Handling

- Define a domain error enum per module with `thiserror`
- Map errors to HTTP responses via `IntoResponse` — never expose internal details
- Use `tracing` for structured logging — never `println!` or `eprintln!`

```rust
use thiserror::Error;

#[derive(Debug, Error)]
pub enum AppError {
    #[error("Resource not found")]
    NotFound,
    #[error("Validation failed: {0}")]
    Validation(String),
    #[error("Unauthorized")]
    Unauthorized,
    #[error(transparent)]
    Internal(#[from] anyhow::Error),
}

impl IntoResponse for AppError {
    fn into_response(self) -> Response {
        let (status, message) = match &self {
            Self::NotFound => (StatusCode::NOT_FOUND, self.to_string()),
            Self::Validation(msg) => (StatusCode::BAD_REQUEST, msg.clone()),
            Self::Unauthorized => (StatusCode::UNAUTHORIZED, self.to_string()),
            Self::Internal(err) => {
                tracing::error!(?err, "internal error");
                (StatusCode::INTERNAL_SERVER_ERROR, "Internal error".into())
            }
        };
        (status, Json(json!({ "error": message }))).into_response()
    }
}
```
### Testing

- Unit tests in `#[cfg(test)]` modules within each source file
- Integration tests in `tests/` directory using a real PostgreSQL (Testcontainers or Docker)
- Use `#[sqlx::test]` for database tests with automatic migration and rollback
- Mock external services with `mockall` or `wiremock`

### Code Style

- Max line length: 100 characters (enforced by rustfmt)
- Group imports: `std`, external crates, `crate`/`super` — separated by blank lines
- Modules: one file per module, `mod.rs` only for re-exports
- Types: PascalCase, functions/variables: snake_case, constants: UPPER_SNAKE_CASE
## File Structure

```
src/
  main.rs              # Entrypoint, server setup, graceful shutdown
  lib.rs               # Re-exports for integration tests
  config.rs            # Environment config with envy or figment
  router.rs            # Axum router with all routes
  middleware/
    auth.rs            # JWT extraction and validation
    logging.rs         # Request/response tracing
  handlers/
    mod.rs             # Route handlers (thin — delegate to services)
    users.rs
    orders.rs
  services/
    mod.rs             # Business logic
    users.rs
    orders.rs
  repositories/
    mod.rs             # Database access (SQLx queries)
    users.rs
    orders.rs
  domain/
    mod.rs             # Domain types, error enums
    user.rs
    order.rs
migrations/
  001_create_users.sql
  002_create_orders.sql
tests/
  common/mod.rs        # Shared test helpers, test server setup
  api_users.rs         # Integration tests for user endpoints
  api_orders.rs        # Integration tests for order endpoints
```
## Key Patterns

### Handler (Thin)

```rust
async fn create_user(
    State(ctx): State<AppState>,
    Json(payload): Json<CreateUserRequest>,
) -> Result<(StatusCode, Json<UserResponse>), AppError> {
    let user = ctx.user_service.create(payload).await?;
    Ok((StatusCode::CREATED, Json(UserResponse::from(user))))
}
```

### Service (Business Logic)

```rust
impl UserService {
    pub async fn create(&self, req: CreateUserRequest) -> Result<User, AppError> {
        if self.repo.find_by_email(&req.email).await?.is_some() {
            return Err(AppError::Validation("Email already registered".into()));
        }

        let password_hash = hash_password(&req.password)?;
        let user = self.repo.insert(&req.email, &req.name, &password_hash).await?;

        Ok(user)
    }
}
```
### Repository (Data Access)

```rust
impl UserRepository {
    pub async fn find_by_email(&self, email: &str) -> Result<Option<User>, sqlx::Error> {
        sqlx::query_as!(User, "SELECT * FROM users WHERE email = $1", email)
            .fetch_optional(&self.pool)
            .await
    }

    pub async fn insert(
        &self,
        email: &str,
        name: &str,
        password_hash: &str,
    ) -> Result<User, sqlx::Error> {
        sqlx::query_as!(
            User,
            r#"INSERT INTO users (email, name, password_hash)
               VALUES ($1, $2, $3) RETURNING *"#,
            email, name, password_hash,
        )
        .fetch_one(&self.pool)
        .await
    }
}
```
### Integration Test

```rust
#[tokio::test]
async fn test_create_user() {
    let app = spawn_test_app().await;

    let response = app
        .client
        .post(&format!("{}/api/v1/users", app.address))
        .json(&json!({
            "email": "alice@example.com",
            "name": "Alice",
            "password": "securepassword123"
        }))
        .send()
        .await
        .expect("Failed to send request");

    assert_eq!(response.status(), StatusCode::CREATED);
    let body: serde_json::Value = response.json().await.unwrap();
    assert_eq!(body["email"], "alice@example.com");
}

#[tokio::test]
async fn test_create_user_duplicate_email() {
    let app = spawn_test_app().await;
    // Create first user
    create_test_user(&app, "alice@example.com").await;
    // Attempt duplicate
    let response = create_user_request(&app, "alice@example.com").await;
    assert_eq!(response.status(), StatusCode::BAD_REQUEST);
}
```
## Environment Variables

```bash
# Server
HOST=0.0.0.0
PORT=8080
RUST_LOG=info,tower_http=debug

# Database
DATABASE_URL=postgres://user:pass@localhost:5432/myapp

# Auth
JWT_SECRET=your-secret-key-min-32-chars
JWT_EXPIRY_HOURS=24

# Optional
CORS_ALLOWED_ORIGINS=http://localhost:3000
```
## Testing Strategy

```bash
# Run all tests
cargo test

# Run with output
cargo test -- --nocapture

# Run specific test module
cargo test api_users

# Check coverage (requires cargo-llvm-cov)
cargo llvm-cov --html
open target/llvm-cov/html/index.html

# Lint
cargo clippy -- -D warnings

# Format check
cargo fmt -- --check
```
## ECC Workflow

```bash
# Planning
/plan "Add order fulfillment with Stripe payment"

# Development with TDD
/tdd            # cargo test-based TDD workflow

# Review
/code-review    # Rust-specific code review
/security-scan  # Dependency audit + unsafe scan

# Verification
/verify         # Build, clippy, test, security scan
```
## Git Workflow

- `feat:` new features, `fix:` bug fixes, `refactor:` code changes
- Feature branches from `main`, PRs required
- CI: `cargo fmt --check`, `cargo clippy`, `cargo test`, `cargo audit`
- Deploy: Docker multi-stage build with `scratch` or `distroless` base