Mirror of https://github.com/affaan-m/everything-claude-code.git
Synced 2026-04-14 22:13:41 +08:00

Comparing 22 commits: 9f56521e36...v1.9.0
Commits in this range: 29277ac273, 6836e9875d, cfb3370df8, d697f2ebac, 0efd6ed914, 72c013d212, 27234fb790, a6bd90713d, 9c58d1edb5, 04f8675624, f37c92cfe2, fec871e1cb, 1b21e082fa, beb11f8d02, 90c3486e03, 9ceb699e9a, a9edf54d2f, 4bdbf57d98, fce4513d58, 7cf07cac17, b6595974c2, f12bb90924
package.json:

```diff
@@ -1,6 +1,6 @@
 {
   "name": "ecc-universal",
-  "version": "1.8.0",
+  "version": "1.9.0",
   "description": "Everything Claude Code (ECC) plugin for OpenCode - agents, commands, hooks, and skills",
   "main": "dist/index.js",
   "types": "dist/index.d.ts",
```
AGENTS.md (16 changed lines):

````diff
@@ -1,6 +1,8 @@
 # Everything Claude Code (ECC) — Agent Instructions
 
-This is a **production-ready AI coding plugin** providing 21 specialized agents, 102 skills, 52 commands, and automated hook workflows for software development.
+This is a **production-ready AI coding plugin** providing 27 specialized agents, 109 skills, 57 commands, and automated hook workflows for software development.
+
+**Version:** 1.9.0
 
 ## Core Principles
 
@@ -23,6 +25,9 @@ This is a **production-ready AI coding plugin** providing 21 specialized agents,
 | e2e-runner | End-to-end Playwright testing | Critical user flows |
 | refactor-cleaner | Dead code cleanup | Code maintenance |
 | doc-updater | Documentation and codemaps | Updating docs |
+| docs-lookup | Documentation and API reference research | Library/API documentation questions |
+| cpp-reviewer | C++ code review | C++ projects |
+| cpp-build-resolver | C++ build errors | C++ build failures |
 | go-reviewer | Go code review | Go projects |
 | go-build-resolver | Go build errors | Go build failures |
 | kotlin-reviewer | Kotlin code review | Kotlin/Android/KMP projects |
@@ -30,11 +35,14 @@ This is a **production-ready AI coding plugin** providing 21 specialized agents,
 | database-reviewer | PostgreSQL/Supabase specialist | Schema design, query optimization |
 | python-reviewer | Python code review | Python projects |
 | java-reviewer | Java and Spring Boot code review | Java/Spring Boot projects |
 | java-build-resolver | Java/Maven/Gradle build errors | Java build failures |
+| chief-of-staff | Communication triage and drafts | Multi-channel email, Slack, LINE, Messenger |
+| loop-operator | Autonomous loop execution | Run loops safely, monitor stalls, intervene |
+| harness-optimizer | Harness config tuning | Reliability, cost, throughput |
 | rust-reviewer | Rust code review | Rust projects |
 | rust-build-resolver | Rust build errors | Rust build failures |
 | pytorch-build-resolver | PyTorch runtime/CUDA/training errors | PyTorch build/training failures |
 | typescript-reviewer | TypeScript/JavaScript code review | TypeScript/JavaScript projects |
 
 ## Agent Orchestration
 
@@ -133,9 +141,9 @@ Troubleshoot failures: check test isolation → verify mocks → fix implementat
 ## Project Structure
 
 ```
-agents/   — 21 specialized subagents
-skills/   — 102 workflow skills and domain knowledge
-commands/ — 52 slash commands
+agents/   — 27 specialized subagents
+skills/   — 109 workflow skills and domain knowledge
+commands/ — 57 slash commands
 hooks/    — Trigger-based automations
 rules/    — Always-follow guidelines (common + per-language)
 scripts/  — Cross-platform Node.js utilities
````
CHANGELOG.md (103 changed lines):

```diff
@@ -1,5 +1,108 @@
 # Changelog
 
+## 1.9.0 - 2026-03-20
+
+### Highlights
+
+- Selective install architecture with manifest-driven pipeline and SQLite state store.
+- Language coverage expanded to 10+ ecosystems with 6 new agents and language-specific rules.
+- Observer reliability hardened with memory throttling, sandbox fixes, and 5-layer loop guard.
+- Self-improving skills foundation with skill evolution and session adapters.
+
+### New Agents
+
+- `typescript-reviewer` — TypeScript/JavaScript code review specialist (#647)
+- `pytorch-build-resolver` — PyTorch runtime, CUDA, and training error resolution (#549)
+- `java-build-resolver` — Maven/Gradle build error resolution (#538)
+- `java-reviewer` — Java and Spring Boot code review (#528)
+- `kotlin-reviewer` — Kotlin/Android/KMP code review (#309)
+- `kotlin-build-resolver` — Kotlin/Gradle build errors (#309)
+- `rust-reviewer` — Rust code review (#523)
+- `rust-build-resolver` — Rust build error resolution (#523)
+- `docs-lookup` — Documentation and API reference research (#529)
+
+### New Skills
+
+- `pytorch-patterns` — PyTorch deep learning workflows (#550)
+- `documentation-lookup` — API reference and library doc research (#529)
+- `bun-runtime` — Bun runtime patterns (#529)
+- `nextjs-turbopack` — Next.js Turbopack workflows (#529)
+- `mcp-server-patterns` — MCP server design patterns (#531)
+- `data-scraper-agent` — AI-powered public data collection (#503)
+- `team-builder` — Team composition skill (#501)
+- `ai-regression-testing` — AI regression test workflows (#433)
+- `claude-devfleet` — Multi-agent orchestration (#505)
+- `blueprint` — Multi-session construction planning
+- `everything-claude-code` — Self-referential ECC skill (#335)
+- `prompt-optimizer` — Prompt optimization skill (#418)
+- 8 Evos operational domain skills (#290)
+- 3 Laravel skills (#420)
+- VideoDB skills (#301)
+
+### New Commands
+
+- `/docs` — Documentation lookup (#530)
+- `/aside` — Side conversation (#407)
+- `/prompt-optimize` — Prompt optimization (#418)
+- `/resume-session`, `/save-session` — Session management
+- `learn-eval` improvements with checklist-based holistic verdict
+
+### New Rules
+
+- Java language rules (#645)
+- PHP rule pack (#389)
+- Perl language rules and skills (patterns, security, testing)
+- Kotlin/Android/KMP rules (#309)
+- C++ language support (#539)
+- Rust language support (#523)
+
+### Infrastructure
+
+- Selective install architecture with manifest resolution (`install-plan.js`, `install-apply.js`) (#509, #512)
+- SQLite state store with query CLI for tracking installed components (#510)
+- Session adapters for structured session recording (#511)
+- Skill evolution foundation for self-improving skills (#514)
+- Orchestration harness with deterministic scoring (#524)
+- Catalog count enforcement in CI (#525)
+- Install manifest validation for all 109 skills (#537)
+- PowerShell installer wrapper (#532)
+- Antigravity IDE support via `--target antigravity` flag (#332)
+- Codex CLI customization scripts (#336)
+
+### Bug Fixes
+
+- Resolved 19 CI test failures across 6 files (#519)
+- Fixed 8 test failures in install pipeline, orchestrator, and repair (#564)
+- Observer memory explosion with throttling, re-entrancy guard, and tail sampling (#536)
+- Observer sandbox access fix for Haiku invocation (#661)
+- Worktree project ID mismatch fix (#665)
+- Observer lazy-start logic (#508)
+- Observer 5-layer loop prevention guard (#399)
+- Hook portability and Windows .cmd support
+- Biome hook optimization — eliminated npx overhead (#359)
+- InsAIts security hook made opt-in (#370)
+- Windows spawnSync export fix (#431)
+- UTF-8 encoding fix for instinct CLI (#353)
+- Secret scrubbing in hooks (#348)
+
+### Translations
+
+- Korean (ko-KR) translation — README, agents, commands, skills, rules (#392)
+- Chinese (zh-CN) documentation sync (#428)
+
+### Credits
+
+- @ymdvsymd — observer sandbox and worktree fixes
+- @pythonstrup — biome hook optimization
+- @Nomadu27 — InsAIts security hook
+- @hahmee — Korean translation
+- @zdocapp — Chinese translation sync
+- @cookiee339 — Kotlin ecosystem
+- @pangerlkr — CI workflow fixes
+- @0xrohitgarg — VideoDB skills
+- @nocodemf — Evos operational skills
+- @swarnika-cmd — community contributions
+
 ## 1.8.0 - 2026-03-04
 
 ### Highlights
```
README.md (45 changed lines):

````diff
@@ -75,6 +75,18 @@ This repo is the raw code only. The guides explain everything.
 
 ## What's New
 
+### v1.9.0 — Selective Install & Language Expansion (Mar 2026)
+
+- **Selective install architecture** — Manifest-driven install pipeline with `install-plan.js` and `install-apply.js` for targeted component installation. State store tracks what's installed and enables incremental updates.
+- **6 new agents** — `typescript-reviewer`, `pytorch-build-resolver`, `java-build-resolver`, `java-reviewer`, `kotlin-reviewer`, `kotlin-build-resolver` expand language coverage to 10 languages.
+- **New skills** — `pytorch-patterns` for deep learning workflows, `documentation-lookup` for API reference research, `bun-runtime` and `nextjs-turbopack` for modern JS toolchains, plus 8 operational domain skills and `mcp-server-patterns`.
+- **Session & state infrastructure** — SQLite state store with query CLI, session adapters for structured recording, skill evolution foundation for self-improving skills.
+- **Orchestration overhaul** — Harness audit scoring made deterministic, orchestration status and launcher compatibility hardened, observer loop prevention with 5-layer guard.
+- **Observer reliability** — Memory explosion fix with throttling and tail sampling, sandbox access fix, lazy-start logic, and re-entrancy guard.
+- **12 language ecosystems** — New rules for Java, PHP, Perl, Kotlin/Android/KMP, C++, and Rust join existing TypeScript, Python, Go, and common rules.
+- **Community contributions** — Korean and Chinese translations, InsAIts security hook, biome hook optimization, VideoDB skills, Evos operational skills, PowerShell installer, Antigravity IDE support.
+- **CI hardening** — 19 test failure fixes, catalog count enforcement, install manifest validation, and full test suite green.
+
 ### v1.8.0 — Harness Performance System (Mar 2026)
 
 - **Harness-first release** — ECC is now explicitly framed as an agent harness performance system, not just a config pack.
@@ -191,7 +203,7 @@ For manual install instructions see the README in the `rules/` folder.
 /plugin list everything-claude-code@everything-claude-code
 ```
 
-✨ **That's it!** You now have access to 21 agents, 102 skills, and 52 commands.
+✨ **That's it!** You now have access to 27 agents, 109 skills, and 57 commands.
 
 ---
 
@@ -252,7 +264,7 @@ everything-claude-code/
 | |-- plugin.json        # Plugin metadata and component paths
 | |-- marketplace.json   # Marketplace catalog for /plugin marketplace add
 |
-|-- agents/              # Specialized subagents for delegation
+|-- agents/              # 27 specialized subagents for delegation
 | |-- planner.md         # Feature implementation planning
 | |-- architect.md       # System design decisions
 | |-- tdd-guide.md       # Test-driven development
@@ -262,10 +274,24 @@ everything-claude-code/
 | |-- e2e-runner.md          # Playwright E2E testing
 | |-- refactor-cleaner.md    # Dead code cleanup
 | |-- doc-updater.md         # Documentation sync
+| |-- docs-lookup.md         # Documentation/API lookup
+| |-- chief-of-staff.md      # Communication triage and drafts
+| |-- loop-operator.md       # Autonomous loop execution
+| |-- harness-optimizer.md   # Harness config tuning
+| |-- cpp-reviewer.md        # C++ code review
+| |-- cpp-build-resolver.md  # C++ build error resolution
 | |-- go-reviewer.md         # Go code review
 | |-- go-build-resolver.md   # Go build error resolution
-| |-- python-reviewer.md     # Python code review (NEW)
-| |-- database-reviewer.md   # Database/Supabase review (NEW)
+| |-- python-reviewer.md     # Python code review
+| |-- database-reviewer.md   # Database/Supabase review
+| |-- typescript-reviewer.md    # TypeScript/JavaScript code review
+| |-- java-reviewer.md          # Java/Spring Boot code review
+| |-- java-build-resolver.md    # Java/Maven/Gradle build errors
+| |-- kotlin-reviewer.md        # Kotlin/Android/KMP code review
+| |-- kotlin-build-resolver.md  # Kotlin/Gradle build errors
+| |-- rust-reviewer.md          # Rust code review
+| |-- rust-build-resolver.md    # Rust build error resolution
+| |-- pytorch-build-resolver.md # PyTorch/CUDA training errors
 |
 |-- skills/                  # Workflow definitions and domain knowledge
 | |-- coding-standards/      # Language best practices
@@ -720,6 +746,7 @@ Not sure where to start? Use this quick reference:
 | Update documentation | `/update-docs` | doc-updater |
 | Review Go code | `/go-review` | go-reviewer |
 | Review Python code | `/python-review` | python-reviewer |
+| Review TypeScript/JavaScript code | *(invoke `typescript-reviewer` directly)* | typescript-reviewer |
 | Audit database queries | *(auto-delegated)* | database-reviewer |
 
 ### Common Workflows
@@ -830,7 +857,7 @@ Yes. ECC is cross-platform:
 - **Cursor**: Pre-translated configs in `.cursor/`. See [Cursor IDE Support](#cursor-ide-support).
 - **OpenCode**: Full plugin support in `.opencode/`. See [OpenCode Support](#-opencode-support).
 - **Codex**: First-class support for both macOS app and CLI, with adapter drift guards and SessionStart fallback. See PR [#257](https://github.com/affaan-m/everything-claude-code/pull/257).
-- **Antigravity**: Tightly integrated setup for workflows, skills, and flatten rules in `.agent/`.
+- **Antigravity**: Tightly integrated setup for workflows, skills, and flattened rules in `.agent/`. See [Antigravity Guide](docs/ANTIGRAVITY-GUIDE.md).
 - **Claude Code**: Native — this is the primary target.
 </details>
 
@@ -1042,9 +1069,9 @@ The configuration is automatically detected from `.opencode/opencode.json`.
 
 | Feature | Claude Code | OpenCode | Status |
 |---------|-------------|----------|--------|
-| Agents | ✅ 21 agents | ✅ 12 agents | **Claude Code leads** |
-| Commands | ✅ 52 commands | ✅ 31 commands | **Claude Code leads** |
-| Skills | ✅ 102 skills | ✅ 37 skills | **Claude Code leads** |
+| Agents | ✅ 27 agents | ✅ 12 agents | **Claude Code leads** |
+| Commands | ✅ 57 commands | ✅ 31 commands | **Claude Code leads** |
+| Skills | ✅ 109 skills | ✅ 37 skills | **Claude Code leads** |
 | Hooks | ✅ 8 event types | ✅ 11 events | **OpenCode has more!** |
 | Rules | ✅ 29 rules | ✅ 13 instructions | **Claude Code leads** |
 | MCP Servers | ✅ 14 servers | ✅ Full | **Full parity** |
@@ -1162,7 +1189,7 @@ ECC is the **first plugin to maximize every major AI coding tool**. Here's how e
 | **Context File** | CLAUDE.md + AGENTS.md | AGENTS.md | AGENTS.md | AGENTS.md |
 | **Secret Detection** | Hook-based | beforeSubmitPrompt hook | Sandbox-based | Hook-based |
 | **Auto-Format** | PostToolUse hook | afterFileEdit hook | N/A | file.edited hook |
-| **Version** | Plugin | Plugin | Reference config | 1.8.0 |
+| **Version** | Plugin | Plugin | Reference config | 1.9.0 |
 
 **Key architectural decisions:**
 - **AGENTS.md** at root is the universal cross-tool file (read by all 4 tools)
````
agents/cpp-build-resolver.md (new file, 90 lines):

---
name: cpp-build-resolver
description: C++ build, CMake, and compilation error resolution specialist. Fixes build errors, linker issues, and template errors with minimal changes. Use when C++ builds fail.
tools: ["Read", "Write", "Edit", "Bash", "Grep", "Glob"]
model: sonnet
---

# C++ Build Error Resolver

You are an expert C++ build error resolution specialist. Your mission is to fix C++ build errors, CMake issues, and linker errors with **minimal, surgical changes**.

## Core Responsibilities

1. Diagnose C++ compilation errors
2. Fix CMake configuration issues
3. Resolve linker errors (undefined references, multiple definitions)
4. Handle template instantiation errors
5. Fix include and dependency problems

## Diagnostic Commands

Run these in order:

```bash
cmake --build build 2>&1 | head -100
cmake -B build -S . 2>&1 | tail -30
clang-tidy src/*.cpp -- -std=c++17 2>/dev/null || echo "clang-tidy not available"
cppcheck --enable=all src/ 2>/dev/null || echo "cppcheck not available"
```

## Resolution Workflow

```text
1. cmake --build build    -> Parse error message
2. Read affected file     -> Understand context
3. Apply minimal fix      -> Only what's needed
4. cmake --build build    -> Verify fix
5. ctest --test-dir build -> Ensure nothing broke
```

## Common Fix Patterns

| Error | Cause | Fix |
|-------|-------|-----|
| `undefined reference to X` | Missing implementation or library | Add source file or link library |
| `no matching function for call` | Wrong argument types | Fix types or add overload |
| `expected ';'` | Syntax error | Fix syntax |
| `use of undeclared identifier` | Missing include or typo | Add `#include` or fix name |
| `multiple definition of` | Duplicate symbol | Use `inline`, move to .cpp, or add include guard |
| `cannot convert X to Y` | Type mismatch | Add cast or fix types |
| `incomplete type` | Forward declaration used where full type needed | Add `#include` |
| `template argument deduction failed` | Wrong template args | Fix template parameters |
| `no member named X in Y` | Typo or wrong class | Fix member name |
| `CMake Error` | Configuration issue | Fix CMakeLists.txt |

## CMake Troubleshooting

```bash
cmake -B build -S . -DCMAKE_VERBOSE_MAKEFILE=ON
cmake --build build --verbose
cmake --build build --clean-first
```

## Key Principles

- **Surgical fixes only** -- don't refactor, just fix the error
- **Never** suppress warnings with `#pragma` without approval
- **Never** change function signatures unless necessary
- Fix root cause over suppressing symptoms
- One fix at a time, verify after each

## Stop Conditions

Stop and report if:

- Same error persists after 3 fix attempts
- Fix introduces more errors than it resolves
- Error requires architectural changes beyond scope
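The fix-verify loop with its three-attempt cap can be sketched as a small shell wrapper. This is a hedged illustration, not part of the agent itself: `try_build` is a hypothetical name, and `"$@"` stands in for the real build command such as `cmake --build build`.

```shell
#!/bin/sh
# Hedged sketch of the fix-verify loop: run the build up to 3 times and emit
# the agent's final status line. "$@" stands in for the real build command
# (e.g. `cmake --build build`); the retry cap mirrors the stop condition above.
try_build() {
  attempts=0
  while [ "$attempts" -lt 3 ]; do
    attempts=$((attempts + 1))
    if "$@" >/dev/null 2>&1; then
      echo "Build Status: SUCCESS | Attempts: $attempts"
      return 0
    fi
  done
  echo "Build Status: FAILED | Attempts: $attempts"
  return 1
}

try_build true        # a command that succeeds on the first attempt
try_build false || :  # a command that exhausts all three attempts
```

With a first-try success this prints `Build Status: SUCCESS | Attempts: 1`; with a command that never succeeds it prints `Build Status: FAILED | Attempts: 3` and returns nonzero, matching the "3 fix attempts" stop condition.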
## Output Format

```text
[FIXED] src/handler/user.cpp:42
Error: undefined reference to `UserService::create`
Fix: Added missing method implementation in user_service.cpp
Remaining errors: 3
```

Final: `Build Status: SUCCESS/FAILED | Errors Fixed: N | Files Modified: list`

For detailed C++ patterns and code examples, see `skill: cpp-coding-standards`.
agents/cpp-reviewer.md (new file, 72 lines):

---
name: cpp-reviewer
description: Expert C++ code reviewer specializing in memory safety, modern C++ idioms, concurrency, and performance. Use for all C++ code changes. MUST BE USED for C++ projects.
tools: ["Read", "Grep", "Glob", "Bash"]
model: sonnet
---

You are a senior C++ code reviewer ensuring high standards of modern C++ and best practices.

When invoked:
1. Run `git diff -- '*.cpp' '*.hpp' '*.cc' '*.hh' '*.cxx' '*.h'` to see recent C++ file changes
2. Run `clang-tidy` and `cppcheck` if available
3. Focus on modified C++ files
4. Begin review immediately
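The file scoping in step 1 amounts to filtering the changed-file list down to C++ extensions. A minimal sketch, with the `cpp_changes` name and the sample file list being illustrative (real input would be `git diff --name-only` output):

```shell
#!/bin/sh
# Hedged sketch: keep only C++ sources/headers from a changed-file list, so
# review tooling (clang-tidy, cppcheck) runs only on files the diff touched.
cpp_changes() {
  grep -E '\.(cpp|hpp|cc|hh|cxx|h)$'
}

# Illustrative input; in practice: git diff --name-only | cpp_changes
printf 'src/a.cpp\nREADME.md\ninclude/b.h\n' | cpp_changes
```

On the sample input this keeps `src/a.cpp` and `include/b.h` and drops `README.md`.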
## Review Priorities

### CRITICAL -- Memory Safety

- **Raw new/delete**: Use `std::unique_ptr` or `std::shared_ptr`
- **Buffer overflows**: C-style arrays, `strcpy`, `sprintf` without bounds
- **Use-after-free**: Dangling pointers, invalidated iterators
- **Uninitialized variables**: Reading before assignment
- **Memory leaks**: Missing RAII, resources not tied to object lifetime
- **Null dereference**: Pointer access without null check

### CRITICAL -- Security

- **Command injection**: Unvalidated input in `system()` or `popen()`
- **Format string attacks**: User input in `printf` format string
- **Integer overflow**: Unchecked arithmetic on untrusted input
- **Hardcoded secrets**: API keys, passwords in source
- **Unsafe casts**: `reinterpret_cast` without justification

### HIGH -- Concurrency

- **Data races**: Shared mutable state without synchronization
- **Deadlocks**: Multiple mutexes locked in inconsistent order
- **Missing lock guards**: Manual `lock()`/`unlock()` instead of `std::lock_guard`
- **Detached threads**: `std::thread` without `join()` or `detach()`

### HIGH -- Code Quality

- **No RAII**: Manual resource management
- **Rule of Five violations**: Incomplete special member functions
- **Large functions**: Over 50 lines
- **Deep nesting**: More than 4 levels
- **C-style code**: `malloc`, C arrays, `typedef` instead of `using`

### MEDIUM -- Performance

- **Unnecessary copies**: Passing large objects by value instead of `const&`
- **Missing move semantics**: Not using `std::move` for sink parameters
- **String concatenation in loops**: Use `std::ostringstream` or `reserve()`
- **Missing `reserve()`**: Known-size vector without pre-allocation

### MEDIUM -- Best Practices

- **`const` correctness**: Missing `const` on methods, parameters, references
- **`auto` overuse/underuse**: Balance readability with type deduction
- **Include hygiene**: Missing include guards, unnecessary includes
- **Namespace pollution**: `using namespace std;` in headers

## Diagnostic Commands

```bash
clang-tidy --checks='*,-llvmlibc-*' src/*.cpp -- -std=c++17
cppcheck --enable=all --suppress=missingIncludeSystem src/
cmake --build build 2>&1 | head -50
```

## Approval Criteria

- **Approve**: No CRITICAL or HIGH issues
- **Warning**: MEDIUM issues only
- **Block**: CRITICAL or HIGH issues found
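The three-way verdict above is a simple decision function over issue counts. A hedged sketch (`verdict` and its positional count arguments are illustrative, not part of the agent):

```shell
#!/bin/sh
# Hedged sketch: map CRITICAL/HIGH/MEDIUM issue counts to the verdict above.
verdict() {
  critical=$1; high=$2; medium=$3
  if [ "$critical" -gt 0 ] || [ "$high" -gt 0 ]; then
    echo "Block"
  elif [ "$medium" -gt 0 ]; then
    echo "Warning"
  else
    echo "Approve"
  fi
}

verdict 0 0 0   # no issues
verdict 0 0 2   # MEDIUM issues only
verdict 1 0 0   # a CRITICAL issue found
```

The three calls print `Approve`, `Warning`, and `Block` respectively.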
For detailed C++ coding standards and anti-patterns, see `skill: cpp-coding-standards`.
agents/java-build-resolver.md (new file, 153 lines):

---
name: java-build-resolver
description: Java/Maven/Gradle build, compilation, and dependency error resolution specialist. Fixes build errors, Java compiler errors, and Maven/Gradle issues with minimal changes. Use when Java or Spring Boot builds fail.
tools: ["Read", "Write", "Edit", "Bash", "Grep", "Glob"]
model: sonnet
---

# Java Build Error Resolver

You are an expert Java/Maven/Gradle build error resolution specialist. Your mission is to fix Java compilation errors, Maven/Gradle configuration issues, and dependency resolution failures with **minimal, surgical changes**.

You DO NOT refactor or rewrite code — you fix the build error only.

## Core Responsibilities

1. Diagnose Java compilation errors
2. Fix Maven and Gradle build configuration issues
3. Resolve dependency conflicts and version mismatches
4. Handle annotation processor errors (Lombok, MapStruct, Spring)
5. Fix Checkstyle and SpotBugs violations

## Diagnostic Commands

Run these in order:

```bash
./mvnw compile -q 2>&1 || mvn compile -q 2>&1
./mvnw test -q 2>&1 || mvn test -q 2>&1
./gradlew build 2>&1
./mvnw dependency:tree 2>&1 | head -100
./gradlew dependencies --configuration runtimeClasspath 2>&1 | head -100
./mvnw checkstyle:check 2>&1 || echo "checkstyle not configured"
./mvnw spotbugs:check 2>&1 || echo "spotbugs not configured"
```

## Resolution Workflow

```text
1. ./mvnw compile OR ./gradlew build -> Parse error message
2. Read affected file                -> Understand context
3. Apply minimal fix                 -> Only what's needed
4. ./mvnw compile OR ./gradlew build -> Verify fix
5. ./mvnw test OR ./gradlew test     -> Ensure nothing broke
```

## Common Fix Patterns

| Error | Cause | Fix |
|-------|-------|-----|
| `cannot find symbol` | Missing import, typo, missing dependency | Add import or dependency |
| `incompatible types: X cannot be converted to Y` | Wrong type, missing cast | Add explicit cast or fix type |
| `method X in class Y cannot be applied to given types` | Wrong argument types or count | Fix arguments or check overloads |
| `variable X might not have been initialized` | Uninitialized local variable | Initialize variable before use |
| `non-static method X cannot be referenced from a static context` | Instance method called statically | Create instance or make method static |
| `reached end of file while parsing` | Missing closing brace | Add missing `}` |
| `package X does not exist` | Missing dependency or wrong import | Add dependency to `pom.xml`/`build.gradle` |
| `error: cannot access X, class file not found` | Missing transitive dependency | Add explicit dependency |
| `Annotation processor threw uncaught exception` | Lombok/MapStruct misconfiguration | Check annotation processor setup |
| `Could not resolve: group:artifact:version` | Missing repository or wrong version | Add repository or fix version in POM |
| `The following artifacts could not be resolved` | Private repo or network issue | Check repository credentials or `settings.xml` |
| `COMPILATION ERROR: Source option X is no longer supported` | Java version mismatch | Update `maven.compiler.source` / `targetCompatibility` |
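Step 1 of the workflow, pulling the error location out of build output, can be sketched as a grep filter. This is a hedged illustration: `first_error` and the sample output lines are made up for demonstration, and real input would be `./mvnw compile 2>&1`.

```shell
#!/bin/sh
# Hedged sketch: pull the first compiler-error location (file:[line,col]) out
# of Maven output, which is where the resolution workflow starts.
first_error() {
  grep -m1 -E '^\[ERROR\] .*\.java'
}

# Illustrative Maven output lines (hypothetical file and symbol names):
printf '%s\n' \
  '[INFO] Compiling 12 source files' \
  '[ERROR] /src/main/java/App.java:[12,8] cannot find symbol' \
  '[ERROR]   symbol: class OrderService' | first_error
```

On the sample input this prints only the first `[ERROR]` line that names a `.java` file, which carries the file path and `[line,col]` to read next.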
## Maven Troubleshooting

```bash
# Check dependency tree for conflicts
./mvnw dependency:tree -Dverbose

# Force update snapshots and re-download
./mvnw clean install -U

# Analyze dependency conflicts
./mvnw dependency:analyze

# Check effective POM (resolved inheritance)
./mvnw help:effective-pom

# Debug annotation processors
./mvnw compile -X 2>&1 | grep -i "processor\|lombok\|mapstruct"

# Skip tests to isolate compile errors
./mvnw compile -DskipTests

# Check Java version in use
./mvnw --version
java -version
```

## Gradle Troubleshooting

```bash
# Check dependency tree for conflicts
./gradlew dependencies --configuration runtimeClasspath

# Force refresh dependencies
./gradlew build --refresh-dependencies

# Clear Gradle build cache
./gradlew clean && rm -rf .gradle/build-cache/

# Run with debug output
./gradlew build --debug 2>&1 | tail -50

# Check dependency insight
./gradlew dependencyInsight --dependency <name> --configuration runtimeClasspath

# Check Java toolchain
./gradlew -q javaToolchains
```

## Spring Boot Specific

```bash
# Verify Spring Boot application context loads
./mvnw spring-boot:run -Dspring-boot.run.arguments="--spring.profiles.active=test"

# Check for missing beans or circular dependencies
./mvnw test -Dtest=*ContextLoads* -q

# Verify Lombok is configured as annotation processor (not just dependency)
grep -A5 "annotationProcessorPaths\|annotationProcessor" pom.xml build.gradle
```

## Key Principles

- **Surgical fixes only** — don't refactor, just fix the error
- **Never** suppress warnings with `@SuppressWarnings` without explicit approval
- **Never** change method signatures unless necessary
- **Always** run the build after each fix to verify
- Fix root cause over suppressing symptoms
- Prefer adding missing imports over changing logic
- Check `pom.xml`, `build.gradle`, or `build.gradle.kts` to confirm the build tool before running commands

## Stop Conditions

Stop and report if:

- Same error persists after 3 fix attempts
- Fix introduces more errors than it resolves
- Error requires architectural changes beyond scope
- Missing external dependencies that need user decision (private repos, licenses)

## Output Format

```text
[FIXED] src/main/java/com/example/service/PaymentService.java:87
Error: cannot find symbol — symbol: class IdempotencyKey
Fix: Added import com.example.domain.IdempotencyKey
Remaining errors: 1
```

Final: `Build Status: SUCCESS/FAILED | Errors Fixed: N | Files Modified: list`

For detailed Java and Spring Boot patterns, see `skill: springboot-patterns`.
agents/pytorch-build-resolver.md (new file, 120 lines):

---
name: pytorch-build-resolver
description: PyTorch runtime, CUDA, and training error resolution specialist. Fixes tensor shape mismatches, device errors, gradient issues, DataLoader problems, and mixed precision failures with minimal changes. Use when PyTorch training or inference crashes.
tools: ["Read", "Write", "Edit", "Bash", "Grep", "Glob"]
model: sonnet
---

# PyTorch Build/Runtime Error Resolver

You are an expert PyTorch error resolution specialist. Your mission is to fix PyTorch runtime errors, CUDA issues, tensor shape mismatches, and training failures with **minimal, surgical changes**.

## Core Responsibilities

1. Diagnose PyTorch runtime and CUDA errors
2. Fix tensor shape mismatches across model layers
3. Resolve device placement issues (CPU/GPU)
4. Debug gradient computation failures
5. Fix DataLoader and data pipeline errors
6. Handle mixed precision (AMP) issues

## Diagnostic Commands

Run these in order:

```bash
python -c "import torch; print(f'PyTorch: {torch.__version__}, CUDA: {torch.cuda.is_available()}, Device: {torch.cuda.get_device_name(0) if torch.cuda.is_available() else \"CPU\"}')"
python -c "import torch; print(f'cuDNN: {torch.backends.cudnn.version()}')" 2>/dev/null || echo "cuDNN not available"
pip list 2>/dev/null | grep -iE "torch|cuda|nvidia"
nvidia-smi 2>/dev/null || echo "nvidia-smi not available"
python -c "import torch; x = torch.randn(2,3).cuda(); print('CUDA tensor test: OK')" 2>&1 || echo "CUDA tensor creation failed"
```
## Resolution Workflow
|
||||
|
||||
```text
|
||||
1. Read error traceback -> Identify failing line and error type
|
||||
2. Read affected file -> Understand model/training context
|
||||
3. Trace tensor shapes -> Print shapes at key points
|
||||
4. Apply minimal fix -> Only what's needed
|
||||
5. Run failing script -> Verify fix
|
||||
6. Check gradients flow -> Ensure backward pass works
|
||||
```
|
||||
|
||||
## Common Fix Patterns
|
||||
|
||||
| Error | Cause | Fix |
|
||||
|-------|-------|-----|
|
||||
| `RuntimeError: mat1 and mat2 shapes cannot be multiplied` | Linear layer input size mismatch | Fix `in_features` to match previous layer output |
|
||||
| `RuntimeError: Expected all tensors to be on the same device` | Mixed CPU/GPU tensors | Add `.to(device)` to all tensors and model |
|
||||
| `CUDA out of memory` | Batch too large or memory leak | Reduce batch size, add `torch.cuda.empty_cache()`, use gradient checkpointing |
|
||||
| `RuntimeError: element 0 of tensors does not require grad` | Detached tensor in loss computation | Remove `.detach()` or `.item()` before backward |
|
||||
| `ValueError: Expected input batch_size X to match target batch_size Y` | Mismatched batch dimensions | Fix DataLoader collation or model output reshape |
|
||||
| `RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation` | In-place op breaks autograd | Replace `x += 1` with `x = x + 1`, avoid in-place relu |
|
||||
| `RuntimeError: stack expects each tensor to be equal size` | Inconsistent tensor sizes in DataLoader | Add padding/truncation in Dataset `__getitem__` or custom `collate_fn` |
|
||||
| `RuntimeError: cuDNN error: CUDNN_STATUS_INTERNAL_ERROR` | cuDNN incompatibility or corrupted state | Set `torch.backends.cudnn.enabled = False` to test, update drivers |
|
||||
| `IndexError: index out of range in self` | Embedding index >= num_embeddings | Fix vocabulary size or clamp indices |
|
||||
| `RuntimeError: Trying to backward through the graph a second time` | Reused computation graph | Add `retain_graph=True` or restructure forward pass |
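The first row of the table above is usually pure arithmetic: `in_features` must equal the flattened size of the previous layer's output. A small standalone helper for computing that size (illustrative names, not part of this repo; assumes standard Conv2d/MaxPool2d size formulas):

```python
def conv2d_out(size: int, kernel: int, stride: int = 1, padding: int = 0) -> int:
    """Spatial output size of a Conv2d/MaxPool2d layer along one dimension."""
    return (size + 2 * padding - kernel) // stride + 1

def linear_in_features(channels: int, height: int, width: int) -> int:
    """Flattened feature count feeding the first nn.Linear after a conv stack."""
    return channels * height * width

# Example: 32x32 input -> Conv2d(k=3, s=1, p=1) -> MaxPool2d(k=2, s=2)
h = conv2d_out(32, 3, 1, 1)   # 32 (conv preserves size here)
h = conv2d_out(h, 2, 2, 0)    # 16 (pool halves it)
print(linear_in_features(64, h, h))  # 16384 -> nn.Linear(16384, ...)
```

Walking the arithmetic forward like this is usually faster than trial-and-error edits to the layer definition.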
## Shape Debugging

When shapes are unclear, inject diagnostic prints:

```python
# Add before the failing line:
print(f"tensor.shape = {tensor.shape}, dtype = {tensor.dtype}, device = {tensor.device}")

# For full model shape tracing (requires the third-party torchsummary package):
from torchsummary import summary
summary(model, input_size=(C, H, W))
```

## Memory Debugging

```bash
# Check GPU memory usage
python -c "
import torch
print(f'Allocated: {torch.cuda.memory_allocated()/1e9:.2f} GB')
print(f'Cached: {torch.cuda.memory_reserved()/1e9:.2f} GB')
print(f'Max allocated: {torch.cuda.max_memory_allocated()/1e9:.2f} GB')
"
```

Common memory fixes:

- Wrap validation in `with torch.no_grad():`
- Use `del tensor; torch.cuda.empty_cache()`
- Enable gradient checkpointing: `model.gradient_checkpointing_enable()`
- Use `torch.cuda.amp.autocast()` for mixed precision

## Key Principles

- **Surgical fixes only** -- don't refactor, just fix the error
- **Never** change the model architecture unless the error requires it
- **Never** silence warnings with `warnings.filterwarnings` without approval
- **Always** verify tensor shapes before and after the fix
- **Always** test with a small batch first (`batch_size=2`)
- Fix root causes rather than suppressing symptoms

## Stop Conditions

Stop and report if:

- The same error persists after 3 fix attempts
- The fix requires fundamentally changing the model architecture
- The error is caused by hardware/driver incompatibility (recommend a driver update)
- Out of memory even with `batch_size=1` (recommend a smaller model or gradient checkpointing)

## Output Format

```text
[FIXED] train.py:42
Error: RuntimeError: mat1 and mat2 shapes cannot be multiplied (32x512 and 256x10)
Fix: Changed nn.Linear(256, 10) to nn.Linear(512, 10) to match encoder output
Remaining errors: 0
```

Final: `Status: SUCCESS/FAILED | Errors Fixed: N | Files Modified: list`

---

For PyTorch best practices, consult the [official PyTorch documentation](https://pytorch.org/docs/stable/) and the [PyTorch forums](https://discuss.pytorch.org/).
112
agents/typescript-reviewer.md
Normal file
@@ -0,0 +1,112 @@
---
name: typescript-reviewer
description: Expert TypeScript/JavaScript code reviewer specializing in type safety, async correctness, Node/web security, and idiomatic patterns. Use for all TypeScript and JavaScript code changes. MUST BE USED for TypeScript/JavaScript projects.
tools: ["Read", "Grep", "Glob", "Bash"]
model: sonnet
---

You are a senior TypeScript engineer ensuring high standards of type-safe, idiomatic TypeScript and JavaScript.

When invoked:

1. Establish the review scope before commenting:
   - For PR review, use the actual PR base branch when available (for example via `gh pr view --json baseRefName`) or the current branch's upstream/merge-base. Do not hard-code `main`.
   - For local review, prefer `git diff --staged` and `git diff` first.
   - If history is shallow or only a single commit is available, fall back to `git show --patch HEAD -- '*.ts' '*.tsx' '*.js' '*.jsx'` so you still inspect code-level changes.
2. Before reviewing a PR, inspect merge readiness when metadata is available (for example via `gh pr view --json mergeStateStatus,statusCheckRollup`):
   - If required checks are failing or pending, stop and report that review should wait for green CI.
   - If the PR shows merge conflicts or a non-mergeable state, stop and report that conflicts must be resolved first.
   - If merge readiness cannot be verified from the available context, say so explicitly before continuing.
3. Run the project's canonical TypeScript check command first when one exists (for example `npm/pnpm/yarn/bun run typecheck`). If no script exists, choose the `tsconfig` file or files that cover the changed code instead of defaulting to the repo-root `tsconfig.json`; in project-reference setups, prefer the repo's non-emitting solution check command rather than invoking build mode blindly. Otherwise use `tsc --noEmit -p <relevant-config>`. Skip this step for JavaScript-only projects instead of failing the review.
4. Run `eslint . --ext .ts,.tsx,.js,.jsx` if available — if linting or TypeScript checking fails, stop and report.
5. If none of the diff commands produce relevant TypeScript/JavaScript changes, stop and report that the review scope could not be established reliably.
6. Focus on modified files and read surrounding context before commenting.
7. Begin the review.

You DO NOT refactor or rewrite code — you report findings only.

## Review Priorities

### CRITICAL -- Security

- **Injection via `eval` / `new Function`**: User-controlled input passed to dynamic execution — never execute untrusted strings
- **XSS**: Unsanitised user input assigned to `innerHTML`, `dangerouslySetInnerHTML`, or `document.write`
- **SQL/NoSQL injection**: String concatenation in queries — use parameterised queries or an ORM
- **Path traversal**: User-controlled input in `fs.readFile`, `path.join` without `path.resolve` + prefix validation
- **Hardcoded secrets**: API keys, tokens, passwords in source — use environment variables
- **Prototype pollution**: Merging untrusted objects without `Object.create(null)` or schema validation
- **`child_process` with user input**: Validate and allowlist before passing to `exec`/`spawn`
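As an illustration of the prototype-pollution item above, a merge helper can reject the dangerous keys before copying. This is a minimal sketch under stated assumptions — `safeMerge` and its key list are illustrative, not an API from this repo:

```typescript
// Keys that allow an attacker to reach the prototype chain
const UNSAFE_KEYS = new Set(["__proto__", "constructor", "prototype"]);

function safeMerge(
  target: Record<string, unknown>,
  source: Record<string, unknown>
): Record<string, unknown> {
  for (const key of Object.keys(source)) {
    if (UNSAFE_KEYS.has(key)) continue; // skip prototype-polluting keys
    target[key] = source[key];
  }
  return target;
}

// JSON.parse creates "__proto__" as an *own* property, so Object.keys sees it
const merged = safeMerge(
  Object.create(null),
  JSON.parse('{"a":1,"__proto__":{"polluted":true}}')
);
console.log(merged.a, ({} as Record<string, unknown>).polluted); // 1 undefined
```

For anything beyond trivial merging, schema validation at the boundary is the more robust fix.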
### HIGH -- Type Safety

- **`any` without justification**: Disables type checking — use `unknown` and narrow, or a precise type
- **Non-null assertion abuse**: `value!` without a preceding guard — add a runtime check
- **`as` casts that bypass checks**: Casting to unrelated types to silence errors — fix the type instead
- **Relaxed compiler settings**: If `tsconfig.json` is touched and weakens strictness, call it out explicitly

### HIGH -- Async Correctness

- **Unhandled promise rejections**: `async` functions called without `await` or `.catch()`
- **Sequential awaits for independent work**: `await` inside loops when operations could safely run in parallel — consider `Promise.all`
- **Floating promises**: Fire-and-forget without error handling in event handlers or constructors
- **`async` with `forEach`**: `array.forEach(async fn)` does not await — use `for...of` or `Promise.all`
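The `forEach` pitfall in the list above is worth seeing concretely: the callback promises are created but never awaited, so the caller observes an empty result. A minimal sketch (function names are illustrative; `double` stands in for any async call):

```typescript
async function double(id: number): Promise<number> {
  await new Promise((resolve) => setTimeout(resolve, 0)); // simulate I/O
  return id * 2;
}

async function wrong(ids: number[]): Promise<number[]> {
  const out: number[] = [];
  ids.forEach(async (id) => out.push(await double(id))); // returns before any push happens
  return out;
}

async function right(ids: number[]): Promise<number[]> {
  return Promise.all(ids.map((id) => double(id))); // awaited, and in parallel
}

wrong([1, 2, 3]).then((r) => console.log("forEach:", r.length)); // forEach: 0
right([1, 2, 3]).then((r) => console.log("Promise.all:", r));    // Promise.all: [ 2, 4, 6 ]
```

Use `for...of` instead of `Promise.all` when the operations must run sequentially or share rate limits.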
### HIGH -- Error Handling

- **Swallowed errors**: Empty `catch` blocks or `catch (e) {}` with no action
- **`JSON.parse` without try/catch**: Throws on invalid input — always wrap
- **Throwing non-Error objects**: `throw "message"` — always `throw new Error("message")`
- **Missing error boundaries**: React trees without `<ErrorBoundary>` around async/data-fetching subtrees
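A common way to satisfy the `JSON.parse` rule above is a result-returning wrapper, so callers are forced to handle failure explicitly. A sketch — the `safeJsonParse` name and result shape are illustrative, not an API from this repo:

```typescript
type ParseResult<T> =
  | { ok: true; value: T }
  | { ok: false; error: Error };

function safeJsonParse<T = unknown>(text: string): ParseResult<T> {
  try {
    return { ok: true, value: JSON.parse(text) as T };
  } catch (e) {
    // Normalise non-Error throws into a real Error
    return { ok: false, error: e instanceof Error ? e : new Error(String(e)) };
  }
}

console.log(safeJsonParse('{"a":1}'));     // { ok: true, value: { a: 1 } }
console.log(safeJsonParse("not json").ok); // false
```

Note the `as T` is still an unchecked assertion about the payload's shape; pair it with schema validation when the input crosses a trust boundary.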
### HIGH -- Idiomatic Patterns

- **Mutable shared state**: Module-level mutable variables — prefer immutable data and pure functions
- **`var` usage**: Use `const` by default, `let` when reassignment is needed
- **Implicit `any` from missing return types**: Public functions should have explicit return types
- **Callback-style async**: Mixing callbacks with `async/await` — standardise on promises
- **`==` instead of `===`**: Use strict equality throughout

### HIGH -- Node.js Specifics

- **Synchronous fs in request handlers**: `fs.readFileSync` blocks the event loop — use async variants
- **Missing input validation at boundaries**: No schema validation (zod, joi, yup) on external data
- **Unvalidated `process.env` access**: Access without a fallback or startup validation
- **`require()` in ESM context**: Mixing module systems without clear intent

### MEDIUM -- React / Next.js (when applicable)

- **Missing dependency arrays**: `useEffect`/`useCallback`/`useMemo` with incomplete deps — use the exhaustive-deps lint rule
- **State mutation**: Mutating state directly instead of returning new objects
- **Key prop using index**: `key={index}` in dynamic lists — use stable unique IDs
- **`useEffect` for derived state**: Compute derived values during render, not in effects
- **Server/client boundary leaks**: Importing server-only modules into client components in Next.js

### MEDIUM -- Performance

- **Object/array creation in render**: Inline objects as props cause unnecessary re-renders — hoist or memoize
- **N+1 queries**: Database or API calls inside loops — batch or use `Promise.all`
- **Missing `React.memo` / `useMemo`**: Expensive computations or components re-running on every render
- **Large bundle imports**: `import _ from 'lodash'` — use named imports or tree-shakeable alternatives

### MEDIUM -- Best Practices

- **`console.log` left in production code**: Use a structured logger
- **Magic numbers/strings**: Use named constants or enums
- **Deep optional chaining without fallback**: `a?.b?.c?.d` with no default — add `?? fallback`
- **Inconsistent naming**: camelCase for variables/functions, PascalCase for types/classes/components

## Diagnostic Commands

```bash
npm run typecheck --if-present     # Canonical TypeScript check when the project defines one
tsc --noEmit -p <relevant-config>  # Fallback type check for the tsconfig that owns the changed files
eslint . --ext .ts,.tsx,.js,.jsx   # Linting
prettier --check .                 # Format check
npm audit                          # Dependency vulnerabilities (or the equivalent yarn/pnpm/bun audit command)
vitest run                         # Tests (Vitest)
jest --ci                          # Tests (Jest)
```

## Approval Criteria

- **Approve**: No CRITICAL or HIGH issues
- **Warning**: MEDIUM issues only (can merge with caution)
- **Block**: CRITICAL or HIGH issues found

## Reference

This repo does not yet ship a dedicated `typescript-patterns` skill. For detailed TypeScript and JavaScript patterns, use `coding-standards` plus `frontend-patterns` or `backend-patterns`, depending on the code under review.

---

Review with the mindset: "Would this code pass review at a top TypeScript shop or a well-maintained open-source project?"
29
commands/context-budget.md
Normal file
@@ -0,0 +1,29 @@
---
description: Analyze context window usage across agents, skills, MCP servers, and rules to find optimization opportunities. Helps reduce token overhead and avoid performance warnings.
---

# Context Budget Optimizer

Analyze your Claude Code setup's context window consumption and produce actionable recommendations to reduce token overhead.

## Usage

```
/context-budget [--verbose]
```

- Default: summary with top recommendations
- `--verbose`: full breakdown per component

$ARGUMENTS

## What to Do

Run the **context-budget** skill (`skills/context-budget/SKILL.md`) with the following inputs:

1. Pass the `--verbose` flag if present in `$ARGUMENTS`
2. Assume a 200K context window (Claude Sonnet default) unless the user specifies otherwise
3. Follow the skill's four phases: Inventory → Classify → Detect Issues → Report
4. Output the formatted Context Budget Report to the user

The skill handles all scanning logic, token estimation, issue detection, and report formatting.
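Token estimation for budgeting like this is often done with a rough characters-per-token heuristic. The sketch below is an assumption for illustration, not the skill's actual algorithm — the ~4 chars/token ratio is a ballpark for English prose, and `CONTEXT_WINDOW` mirrors the 200K default assumed above:

```python
CONTEXT_WINDOW = 200_000  # tokens; Claude Sonnet default assumed above

def estimate_tokens(text: str) -> int:
    """Rough estimate: English prose averages ~4 characters per token."""
    return max(1, len(text) // 4)

def budget_share(component_text: str) -> float:
    """Fraction of the context window a single component consumes."""
    return estimate_tokens(component_text) / CONTEXT_WINDOW

doc = "x" * 80_000  # e.g. a large rules file
print(f"{estimate_tokens(doc)} tokens, {budget_share(doc):.1%} of window")  # 20000 tokens, 10.0% of window
```

Real tokenizers diverge from this heuristic on code and non-English text, so treat the output as a triage signal, not an exact figure.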
173
commands/cpp-build.md
Normal file
@@ -0,0 +1,173 @@
---
description: Fix C++ build errors, CMake issues, and linker problems incrementally. Invokes the cpp-build-resolver agent for minimal, surgical fixes.
---

# C++ Build and Fix

This command invokes the **cpp-build-resolver** agent to incrementally fix C++ build errors with minimal changes.

## What This Command Does

1. **Run Diagnostics**: Execute `cmake --build`, `clang-tidy`, `cppcheck`
2. **Parse Errors**: Group by file and sort by severity
3. **Fix Incrementally**: One error at a time
4. **Verify Each Fix**: Re-run the build after each change
5. **Report Summary**: Show what was fixed and what remains

## When to Use

Use `/cpp-build` when:

- `cmake --build build` fails with errors
- Linker errors (undefined references, multiple definitions)
- Template instantiation failures
- Include/dependency issues
- After pulling changes that break the build

## Diagnostic Commands Run

```bash
# CMake configure
cmake -B build -S .

# Build
cmake --build build 2>&1 | head -100

# Static analysis (if available)
clang-tidy src/*.cpp -- -std=c++17
cppcheck --enable=all src/
```

## Example Session

````
User: /cpp-build

Agent:
# C++ Build Resolution

## Initial Diagnostics

```bash
$ cmake --build build
src/service/user.cpp:25:15: error: use of undeclared identifier 'UserRepository'
src/handler/api.cpp:42:9: error: no matching function for call to 'process'
src/handler/api.cpp:58:1: error: non-void function does not return a value
```

Errors found: 3

## Fix 1: Undeclared Identifier

File: src/service/user.cpp:25
Error: use of undeclared identifier 'UserRepository'
Cause: Missing include

```cpp
// Added include
#include "repository/user_repository.hpp"
```

```bash
$ cmake --build build
# 2 errors remaining
```

## Fix 2: No Matching Function

File: src/handler/api.cpp:42
Error: no matching function for call to 'process'

```cpp
// Changed
process(params.get("count"));
// To
process(std::stoi(params.get("count")));
```

```bash
$ cmake --build build
# 1 error remaining
```

## Fix 3: Missing Return

File: src/handler/api.cpp:58
Error: non-void function does not return a value

```cpp
std::optional<User> getUser(const std::string& id) {
    if (id.empty()) {
        return std::nullopt;
    }
    auto user = findUser(id);
    // Added missing return
    return user;
}
```

```bash
$ cmake --build build
# Build successful!
```

## Final Verification

```bash
$ ctest --test-dir build --output-on-failure
Test project build
    1/5 Test #1: unit_tests ........   Passed    0.02 sec
    2/5 Test #2: integration_tests     Passed    0.15 sec
All tests passed.
```

## Summary

| Metric | Count |
|--------|-------|
| Build errors fixed | 3 |
| Linker errors fixed | 0 |
| Files modified | 2 |
| Remaining issues | 0 |

Build Status: ✅ SUCCESS
````

## Common Errors Fixed

| Error | Typical Fix |
|-------|-------------|
| `undeclared identifier` | Add `#include` or fix the typo |
| `no matching function` | Fix argument types or add an overload |
| `undefined reference` | Link the library or add the implementation |
| `multiple definition` | Use `inline` or move the definition to a .cpp file |
| `incomplete type` | Replace the forward declaration with `#include` |
| `no member named X` | Fix the member name or include |
| `cannot convert X to Y` | Add an appropriate cast |
| `CMake Error` | Fix the CMakeLists.txt configuration |

## Fix Strategy

1. **Compilation errors first** - Code must compile
2. **Linker errors second** - Resolve undefined references
3. **Warnings third** - Fix with `-Wall -Wextra`
4. **One fix at a time** - Verify each change
5. **Minimal changes** - Don't refactor, just fix

## Stop Conditions

The agent will stop and report if:

- The same error persists after 3 attempts
- A fix introduces more errors
- The fix requires architectural changes
- External dependencies are missing

## Related Commands

- `/cpp-test` - Run tests after the build succeeds
- `/cpp-review` - Review code quality
- `/verify` - Full verification loop

## Related

- Agent: `agents/cpp-build-resolver.md`
- Skill: `skills/cpp-coding-standards/`
132
commands/cpp-review.md
Normal file
@@ -0,0 +1,132 @@
---
description: Comprehensive C++ code review for memory safety, modern C++ idioms, concurrency, and security. Invokes the cpp-reviewer agent.
---

# C++ Code Review

This command invokes the **cpp-reviewer** agent for comprehensive C++-specific code review.

## What This Command Does

1. **Identify C++ Changes**: Find modified `.cpp`, `.hpp`, `.cc`, `.h` files via `git diff`
2. **Run Static Analysis**: Execute `clang-tidy` and `cppcheck`
3. **Memory Safety Scan**: Check for raw new/delete, buffer overflows, use-after-free
4. **Concurrency Review**: Analyze thread safety, mutex usage, data races
5. **Modern C++ Check**: Verify code follows C++17/20 conventions and best practices
6. **Generate Report**: Categorize issues by severity

## When to Use

Use `/cpp-review` when:

- After writing or modifying C++ code
- Before committing C++ changes
- Reviewing pull requests with C++ code
- Onboarding to a new C++ codebase
- Checking for memory safety issues

## Review Categories

### CRITICAL (Must Fix)

- Raw `new`/`delete` without RAII
- Buffer overflows and use-after-free
- Data races without synchronization
- Command injection via `system()`
- Uninitialized variable reads
- Null pointer dereferences

### HIGH (Should Fix)

- Rule of Five violations
- Missing `std::lock_guard` / `std::scoped_lock`
- Detached threads without proper lifetime management
- C-style casts instead of `static_cast`/`dynamic_cast`
- Missing `const` correctness
### MEDIUM (Consider)

- Unnecessary copies (pass by value instead of `const&`)
- Missing `reserve()` on known-size containers
- `using namespace std;` in headers
- Missing `[[nodiscard]]` on important return values
- Overly complex template metaprogramming

## Automated Checks Run

```bash
# Static analysis
clang-tidy --checks='*,-llvmlibc-*' src/*.cpp -- -std=c++17

# Additional analysis
cppcheck --enable=all --suppress=missingIncludeSystem src/

# Build with warnings (set compiler flags at configure time)
cmake -B build -DCMAKE_CXX_FLAGS="-Wall -Wextra -Wpedantic" && cmake --build build
```

## Example Usage

````
User: /cpp-review

Agent:
# C++ Code Review Report

## Files Reviewed

- src/handler/user.cpp (modified)
- src/service/auth.cpp (modified)

## Static Analysis Results

✓ clang-tidy: 2 warnings
✓ cppcheck: No issues

## Issues Found

[CRITICAL] Memory Leak
File: src/service/auth.cpp:45
Issue: Raw `new` without matching `delete`

```cpp
auto* session = new Session(userId); // Memory leak!
cache[userId] = session;
```

Fix: Use `std::unique_ptr`

```cpp
auto session = std::make_unique<Session>(userId);
cache[userId] = std::move(session);
```

[HIGH] Missing const Reference
File: src/handler/user.cpp:28
Issue: Large object passed by value

```cpp
void processUser(User user) { // Unnecessary copy
```

Fix: Pass by const reference

```cpp
void processUser(const User& user) {
```

## Summary

- CRITICAL: 1
- HIGH: 1
- MEDIUM: 0

Recommendation: ❌ Block merge until the CRITICAL issue is fixed
````

## Approval Criteria

| Status | Condition |
|--------|-----------|
| ✅ Approve | No CRITICAL or HIGH issues |
| ⚠️ Warning | Only MEDIUM issues (merge with caution) |
| ❌ Block | CRITICAL or HIGH issues found |

## Integration with Other Commands

- Use `/cpp-test` first to ensure tests pass
- Use `/cpp-build` if build errors occur
- Use `/cpp-review` before committing
- Use `/code-review` for non-C++-specific concerns

## Related

- Agent: `agents/cpp-reviewer.md`
- Skills: `skills/cpp-coding-standards/`, `skills/cpp-testing/`
251
commands/cpp-test.md
Normal file
@@ -0,0 +1,251 @@
---
description: Enforce TDD workflow for C++. Write GoogleTest tests first, then implement. Verify coverage with gcov/lcov.
---

# C++ TDD Command

This command enforces test-driven development methodology for C++ code using GoogleTest/GoogleMock with CMake/CTest.

## What This Command Does

1. **Define Interfaces**: Scaffold class/function signatures first
2. **Write Tests**: Create comprehensive GoogleTest test cases (RED)
3. **Run Tests**: Verify tests fail for the right reason
4. **Implement Code**: Write minimal code to pass (GREEN)
5. **Refactor**: Improve while keeping tests green
6. **Check Coverage**: Ensure 80%+ coverage

## When to Use

Use `/cpp-test` when:

- Implementing new C++ functions or classes
- Adding test coverage to existing code
- Fixing bugs (write the failing test first)
- Building critical business logic
- Learning the TDD workflow in C++

## TDD Cycle

```
RED      → Write a failing GoogleTest test
GREEN    → Implement minimal code to pass
REFACTOR → Improve code, tests stay green
REPEAT   → Next test case
```

## Example Session

````
User: /cpp-test I need a function to validate email addresses

Agent:
# TDD Session: Email Validator

## Step 1: Define Interface

```cpp
// validator/email.hpp
#pragma once
#include <string>
#include <expected>  // std::expected requires C++23

enum class EmailError {
    Empty,
    InvalidFormat
};

std::expected<void, EmailError> validate_email(const std::string& email);
```

## Step 2: Write Tests (RED)

```cpp
// validator/email_test.cpp
#include <gtest/gtest.h>
#include "email.hpp"

TEST(ValidateEmail, AcceptsSimpleEmail) {
    auto result = validate_email("user@example.com");
    EXPECT_TRUE(result.has_value());
}

TEST(ValidateEmail, AcceptsSubdomain) {
    EXPECT_TRUE(validate_email("user@mail.example.com").has_value());
}

TEST(ValidateEmail, AcceptsPlus) {
    EXPECT_TRUE(validate_email("user+tag@example.com").has_value());
}

TEST(ValidateEmail, RejectsEmpty) {
    auto result = validate_email("");
    ASSERT_FALSE(result.has_value());
    EXPECT_EQ(result.error(), EmailError::Empty);
}

TEST(ValidateEmail, RejectsNoAtSign) {
    EXPECT_FALSE(validate_email("userexample.com").has_value());
}

TEST(ValidateEmail, RejectsNoDomain) {
    EXPECT_FALSE(validate_email("user@").has_value());
}

TEST(ValidateEmail, RejectsNoLocalPart) {
    EXPECT_FALSE(validate_email("@example.com").has_value());
}
```

## Step 3: Run Tests - Verify FAIL

```bash
$ cmake --build build && ctest --test-dir build --output-on-failure

1/1 Test #1: email_validator_test .....***Failed
--- undefined reference to `validate_email`

FAIL
```

✓ Tests fail as expected (unimplemented).

## Step 4: Implement Minimal Code (GREEN)

```cpp
// validator/email.cpp
#include "email.hpp"
#include <regex>

std::expected<void, EmailError> validate_email(const std::string& email) {
    if (email.empty()) {
        return std::unexpected(EmailError::Empty);
    }
    static const std::regex pattern(R"([a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,})");
    if (!std::regex_match(email, pattern)) {
        return std::unexpected(EmailError::InvalidFormat);
    }
    return {};
}
```

## Step 5: Run Tests - Verify PASS

```bash
$ cmake --build build && ctest --test-dir build --output-on-failure

1/1 Test #1: email_validator_test .....   Passed    0.01 sec

100% tests passed.
```

✓ All tests passing!

## Step 6: Check Coverage

```bash
$ cmake -DCMAKE_CXX_FLAGS="--coverage" -B build && cmake --build build
$ ctest --test-dir build
$ lcov --capture --directory build --output-file coverage.info
$ lcov --list coverage.info

validator/email.cpp | 100%
```

✓ Coverage: 100%

## TDD Complete!
````

## Test Patterns

### Basic Tests

```cpp
TEST(SuiteName, TestName) {
    EXPECT_EQ(add(2, 3), 5);
    EXPECT_NE(result, nullptr);
    EXPECT_TRUE(is_valid);
    EXPECT_THROW(func(), std::invalid_argument);
}
```

### Fixtures

```cpp
class DatabaseTest : public ::testing::Test {
protected:
    void SetUp() override { db_ = create_test_db(); }
    void TearDown() override { db_.reset(); }
    std::unique_ptr<Database> db_;
};

TEST_F(DatabaseTest, InsertsRecord) {
    db_->insert("key", "value");
    EXPECT_EQ(db_->get("key"), "value");
}
```

### Parameterized Tests

```cpp
class PrimeTest : public ::testing::TestWithParam<std::pair<int, bool>> {};

TEST_P(PrimeTest, ChecksPrimality) {
    auto [input, expected] = GetParam();
    EXPECT_EQ(is_prime(input), expected);
}

INSTANTIATE_TEST_SUITE_P(Primes, PrimeTest, ::testing::Values(
    std::make_pair(2, true),
    std::make_pair(4, false),
    std::make_pair(7, true)
));
```

## Coverage Commands

```bash
# Build with coverage
cmake -DCMAKE_CXX_FLAGS="--coverage" -DCMAKE_EXE_LINKER_FLAGS="--coverage" -B build

# Run tests
cmake --build build && ctest --test-dir build

# Generate coverage report
lcov --capture --directory build --output-file coverage.info
lcov --remove coverage.info '/usr/*' --output-file coverage.info
genhtml coverage.info --output-directory coverage_html
```

## Coverage Targets

| Code Type | Target |
|-----------|--------|
| Critical business logic | 100% |
| Public APIs | 90%+ |
| General code | 80%+ |
| Generated code | Exclude |

## TDD Best Practices

**DO:**

- Write the test FIRST, before any implementation
- Run tests after each change
- Use `EXPECT_*` (continues) over `ASSERT_*` (stops) when appropriate
- Test behavior, not implementation details
- Include edge cases (empty, null, max values, boundary conditions)

**DON'T:**

- Write implementation before tests
- Skip the RED phase
- Test private methods directly (test through the public API)
- Use `sleep` in tests
- Ignore flaky tests

## Related Commands

- `/cpp-build` - Fix build errors
- `/cpp-review` - Review code after implementation
- `/verify` - Run the full verification loop

## Related

- Skill: `skills/cpp-testing/`
- Skill: `skills/tdd-workflow/`
156
docs/ANTIGRAVITY-GUIDE.md
Normal file
@@ -0,0 +1,156 @@
|
||||
# Antigravity Setup and Usage Guide

Google's [Antigravity](https://antigravity.dev) is an AI coding IDE that uses a `.agent/` directory convention for configuration. ECC provides first-class support for Antigravity through its selective install system.

## Quick Start

```bash
# Install ECC with the Antigravity target
./install.sh --target antigravity typescript

# Or with multiple language modules
./install.sh --target antigravity typescript python go
```

This installs ECC components into your project's `.agent/` directory, ready for Antigravity to pick up.

## How the Install Mapping Works

ECC remaps its component structure to match Antigravity's expected layout:

| ECC Source | Antigravity Destination | What It Contains |
|------------|------------------------|------------------|
| `rules/` | `.agent/rules/` | Language rules and coding standards (flattened) |
| `commands/` | `.agent/workflows/` | Slash commands become Antigravity workflows |
| `agents/` | `.agent/skills/` | Agent definitions become Antigravity skills |

> **Note on `.agents/` vs `.agent/` vs `agents/`**: The installer only handles three source paths explicitly: `rules` → `.agent/rules/`, `commands` → `.agent/workflows/`, and `agents` (no dot prefix) → `.agent/skills/`. The dot-prefixed `.agents/` directory in the ECC repo is a **static layout** for Codex/Antigravity skill definitions and `openai.yaml` configs — it is not directly mapped by the installer. Any `.agents/` path falls through to the default scaffold operation. If you want `.agents/skills/` content available in the Antigravity runtime, you must manually copy it to `.agent/skills/`.

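The manual copy described in the note above takes two commands. A sketch, run from the project root (the copy is guarded so it is a no-op when `.agents/skills/` is absent):

```shell
# Make the static .agents/skills/ layout available in the Antigravity
# runtime by copying it into .agent/skills/.
mkdir -p .agent/skills
if [ -d .agents/skills ]; then
  cp -R .agents/skills/. .agent/skills/
fi
```

Note that the installer does not track files copied this way in `ecc-install-state.json`, so uninstall and repair will not touch them.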
### Key Differences from Claude Code

- **Rules are flattened**: Claude Code nests rules under subdirectories (`rules/common/`, `rules/typescript/`). Antigravity expects a flat `rules/` directory — the installer handles this automatically.
- **Commands become workflows**: ECC's `/command` files land in `.agent/workflows/`, which is Antigravity's equivalent of slash commands.
- **Agents become skills**: ECC agent definitions map to `.agent/skills/`, where Antigravity looks for skill configurations.

## Directory Structure After Install

```
your-project/
├── .agent/
│   ├── rules/
│   │   ├── coding-standards.md
│   │   ├── testing.md
│   │   ├── security.md
│   │   └── typescript.md          # language-specific rules
│   ├── workflows/
│   │   ├── plan.md
│   │   ├── code-review.md
│   │   ├── tdd.md
│   │   └── ...
│   ├── skills/
│   │   ├── planner.md
│   │   ├── code-reviewer.md
│   │   ├── tdd-guide.md
│   │   └── ...
│   └── ecc-install-state.json     # tracks what ECC installed
```

## The `openai.yaml` Agent Config

Each skill directory under `.agents/skills/` contains an agent config at `.agents/skills/<skill-name>/agents/openai.yaml` that configures the skill for Antigravity:

```yaml
interface:
  display_name: "API Design"
  short_description: "REST API design patterns and best practices"
  brand_color: "#F97316"
  default_prompt: "Design REST API: resources, status codes, pagination"
policy:
  allow_implicit_invocation: true
```

| Field | Purpose |
|-------|---------|
| `display_name` | Human-readable name shown in Antigravity's UI |
| `short_description` | Brief description of what the skill does |
| `brand_color` | Hex color for the skill's visual badge |
| `default_prompt` | Suggested prompt when the skill is invoked manually |
| `allow_implicit_invocation` | When `true`, Antigravity can activate the skill automatically based on context |

## Managing Your Installation

### Check What's Installed

```bash
node scripts/list-installed.js --target antigravity
```

### Repair a Broken Install

```bash
# First, diagnose what's wrong
node scripts/doctor.js --target antigravity

# Then, restore missing or drifted files
node scripts/repair.js --target antigravity
```

### Uninstall

```bash
node scripts/uninstall.js --target antigravity
```

### Install State

The installer writes `.agent/ecc-install-state.json` to track which files ECC owns. This enables safe uninstall and repair — ECC will never touch files it didn't create.

## Adding Custom Skills for Antigravity

If you're contributing a new skill and want it available on Antigravity:

1. Create the skill under `skills/your-skill-name/SKILL.md` as usual
2. Add an agent definition at `agents/your-skill-name.md` — this is the path the installer maps to `.agent/skills/` at runtime, making your skill available in the Antigravity harness
3. Add the Antigravity agent config at `.agents/skills/your-skill-name/agents/openai.yaml` — this is a static repo layout consumed by Codex for implicit invocation metadata
4. Mirror the `SKILL.md` content to `.agents/skills/your-skill-name/SKILL.md` — this static copy is used by Codex and serves as a reference for Antigravity
5. Mention in your PR that you added Antigravity support

> **Key distinction**: The installer deploys `agents/` (no dot) → `.agent/skills/` — this is what makes skills available at runtime. The `.agents/` (dot-prefixed) directory is a separate static layout for Codex `openai.yaml` configs and is not auto-deployed by the installer.

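The four file locations from the steps above can be scaffolded in one go. A sketch using a placeholder skill name (`my-new-skill` is illustrative, not a real ECC skill):

```shell
# Scaffold every location a new skill needs, per the contribution steps.
skill="my-new-skill"
mkdir -p "skills/$skill" agents ".agents/skills/$skill/agents"
touch "skills/$skill/SKILL.md"                          # step 1: the skill itself
touch "agents/$skill.md"                                # step 2: runtime agent definition
touch ".agents/skills/$skill/agents/openai.yaml"        # step 3: Codex/Antigravity config
cp "skills/$skill/SKILL.md" ".agents/skills/$skill/SKILL.md"   # step 4: static mirror
```

After scaffolding, fill in the `SKILL.md` frontmatter and the `openai.yaml` fields before opening the PR.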
See [CONTRIBUTING.md](../CONTRIBUTING.md) for the full contribution guide.

## Comparison with Other Targets

| Feature | Claude Code | Cursor | Codex | Antigravity |
|---------|-------------|--------|-------|-------------|
| Install target | `claude-home` | `cursor-project` | `codex-home` | `antigravity` |
| Config root | `~/.claude/` | `.cursor/` | `~/.codex/` | `.agent/` |
| Scope | User-level | Project-level | User-level | Project-level |
| Rules format | Nested dirs | Flat | Flat | Flat |
| Commands | `commands/` | N/A | N/A | `workflows/` |
| Agents/Skills | `agents/` | N/A | N/A | `skills/` |
| Install state | `ecc-install-state.json` | `ecc-install-state.json` | `ecc-install-state.json` | `ecc-install-state.json` |

## Troubleshooting

### Skills not loading in Antigravity

- Verify the `.agent/` directory exists in your project root (not your home directory)
- Check that `ecc-install-state.json` was created — if it is missing, re-run the installer
- Ensure files have the `.md` extension and valid frontmatter

### Rules not applying

- Rules must be in `.agent/rules/`, not nested in subdirectories
- Run `node scripts/doctor.js --target antigravity` to verify the install

### Workflows not available

- Antigravity looks for workflows in `.agent/workflows/`, not `commands/`
- If you manually copied ECC commands, rename the directory to `.agent/workflows/`

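The rename mentioned above is a one-liner. A sketch — the first `mkdir` simulates a manual copy into the wrong directory so the snippet is self-contained:

```shell
# Demo setup: simulate ECC commands copied into the wrong directory.
mkdir -p .agent/commands
# Antigravity scans .agent/workflows/, not .agent/commands/ — rename it.
if [ -d .agent/commands ] && [ ! -e .agent/workflows ]; then
  mv .agent/commands .agent/workflows
fi
ls -d .agent/workflows
```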
## Related Resources

- [Selective Install Architecture](./SELECTIVE-INSTALL-ARCHITECTURE.md) — how the install system works under the hood
- [Selective Install Design](./SELECTIVE-INSTALL-DESIGN.md) — design decisions and target adapter contracts
- [CONTRIBUTING.md](../CONTRIBUTING.md) — how to contribute skills, agents, and commands

@@ -1,6 +1,6 @@
 # Command → Agent / Skill Map

-This document lists each slash command and the primary agent(s) or skills it invokes. Use it to discover which commands use which agents and to keep refactoring consistent.
+This document lists each slash command and the primary agent(s) or skills it invokes, plus notable direct-invoke agents. Use it to discover which commands use which agents and to keep refactoring consistent.

 | Command | Primary agent(s) | Notes |
 |---------|------------------|--------|
@@ -46,6 +46,12 @@ This document lists each slash command and the primary agent(s) or skills it inv
 | `/pm2` | — | PM2 service lifecycle |
 | `/security-scan` | security-reviewer (skill) | AgentShield via security-scan skill |

+## Direct-Use Agents
+
+| Direct agent | Purpose | Scope | Notes |
+|--------------|---------|-------|-------|
+| `typescript-reviewer` | TypeScript/JavaScript code review | TypeScript/JavaScript projects | Invoke the agent directly when a review needs TS/JS-specific findings and there is no dedicated slash command yet. |
+
 ## Skills referenced by commands

 - **continuous-learning**, **continuous-learning-v2**: `/learn`, `/learn-eval`, `/instinct-*`, `/evolve`, `/promote`, `/projects`

@@ -1163,7 +1163,7 @@ ECC is the **first plugin to take maximum advantage of every major AI coding tool**. …
 | **Context file** | CLAUDE.md + AGENTS.md | AGENTS.md | AGENTS.md | AGENTS.md |
 | **Secret detection** | Hook-based | beforeSubmitPrompt hook | Sandbox-based | Hook-based |
 | **Auto-formatting** | PostToolUse hook | afterFileEdit hook | N/A | file.edited hook |
-| **Version** | Plugin | Plugin | Reference config | 1.8.0 |
+| **Version** | Plugin | Plugin | Reference config | 1.9.0 |

 **Key architecture decisions:**

package-lock.json (generated, 4 changed lines)
@@ -1,12 +1,12 @@
 {
   "name": "ecc-universal",
-  "version": "1.8.0",
+  "version": "1.9.0",
   "lockfileVersion": 3,
   "requires": true,
   "packages": {
     "": {
       "name": "ecc-universal",
-      "version": "1.8.0",
+      "version": "1.9.0",
       "hasInstallScript": true,
       "license": "MIT",
       "dependencies": {

@@ -1,6 +1,6 @@
 {
   "name": "ecc-universal",
-  "version": "1.8.0",
+  "version": "1.9.0",
   "description": "Complete collection of battle-tested Claude Code configs — agents, skills, hooks, commands, and rules evolved over 10+ months of intensive daily use by an Anthropic hackathon winner",
   "keywords": [
     "claude-code",

rules/cpp/coding-style.md (new file, 44 lines)
@@ -0,0 +1,44 @@
---
paths:
  - "**/*.cpp"
  - "**/*.hpp"
  - "**/*.cc"
  - "**/*.hh"
  - "**/*.cxx"
  - "**/*.h"
  - "**/CMakeLists.txt"
---
# C++ Coding Style

> This file extends [common/coding-style.md](../common/coding-style.md) with C++ specific content.

## Modern C++ (C++17/20/23)

- Prefer **modern C++ features** over C-style constructs
- Use `auto` when the type is obvious from context
- Use `constexpr` for compile-time constants
- Use structured bindings: `auto [key, value] = map_entry;`

## Resource Management

- **RAII everywhere** — no manual `new`/`delete`
- Use `std::unique_ptr` for exclusive ownership
- Use `std::shared_ptr` only when shared ownership is truly needed
- Use `std::make_unique` / `std::make_shared` over raw `new`

## Naming Conventions

- Types/Classes: `PascalCase`
- Functions/Methods: `snake_case` or `camelCase` (follow project convention)
- Constants: `kPascalCase` or `UPPER_SNAKE_CASE`
- Namespaces: `lowercase`
- Member variables: `snake_case_` (trailing underscore) or `m_` prefix

## Formatting

- Use **clang-format** — no style debates
- Run `clang-format -i <file>` before committing

## Reference

See skill: `cpp-coding-standards` for comprehensive C++ coding standards and guidelines.
rules/cpp/hooks.md (new file, 39 lines)
@@ -0,0 +1,39 @@
---
paths:
  - "**/*.cpp"
  - "**/*.hpp"
  - "**/*.cc"
  - "**/*.hh"
  - "**/*.cxx"
  - "**/*.h"
  - "**/CMakeLists.txt"
---
# C++ Hooks

> This file extends [common/hooks.md](../common/hooks.md) with C++ specific content.

## Build Hooks

Run these checks before committing C++ changes:

```bash
# Format check
clang-format --dry-run --Werror src/*.cpp src/*.hpp

# Static analysis
clang-tidy src/*.cpp -- -std=c++17

# Build
cmake --build build

# Tests
ctest --test-dir build --output-on-failure
```

## Recommended CI Pipeline

1. **clang-format** — formatting check
2. **clang-tidy** — static analysis
3. **cppcheck** — additional analysis
4. **cmake build** — compilation
5. **ctest** — test execution with sanitizers
rules/cpp/patterns.md (new file, 51 lines)
@@ -0,0 +1,51 @@
---
paths:
  - "**/*.cpp"
  - "**/*.hpp"
  - "**/*.cc"
  - "**/*.hh"
  - "**/*.cxx"
  - "**/*.h"
  - "**/CMakeLists.txt"
---
# C++ Patterns

> This file extends [common/patterns.md](../common/patterns.md) with C++ specific content.

## RAII (Resource Acquisition Is Initialization)

Tie resource lifetime to object lifetime:

```cpp
#include <cstdio>
#include <string>

class FileHandle {
 public:
  explicit FileHandle(const std::string& path) : file_(std::fopen(path.c_str(), "r")) {}
  ~FileHandle() { if (file_) std::fclose(file_); }
  FileHandle(const FileHandle&) = delete;
  FileHandle& operator=(const FileHandle&) = delete;

 private:
  std::FILE* file_;
};
```

## Rule of Five/Zero

- **Rule of Zero**: Prefer classes that need no custom destructor, copy/move constructors, or assignments
- **Rule of Five**: If you define any of destructor/copy-ctor/copy-assign/move-ctor/move-assign, define all five

## Value Semantics

- Pass small/trivial types by value
- Pass large types by `const&`
- Return by value (rely on RVO/NRVO)
- Use move semantics for sink parameters

## Error Handling

- Use exceptions for exceptional conditions
- Use `std::optional` for values that may not exist
- Use `std::expected` (C++23) or result types for expected failures

## Reference

See skill: `cpp-coding-standards` for comprehensive C++ patterns and anti-patterns.
rules/cpp/security.md (new file, 51 lines)
@@ -0,0 +1,51 @@
---
paths:
  - "**/*.cpp"
  - "**/*.hpp"
  - "**/*.cc"
  - "**/*.hh"
  - "**/*.cxx"
  - "**/*.h"
  - "**/CMakeLists.txt"
---
# C++ Security

> This file extends [common/security.md](../common/security.md) with C++ specific content.

## Memory Safety

- Never use raw `new`/`delete` — use smart pointers
- Never use C-style arrays — use `std::array` or `std::vector`
- Never use `malloc`/`free` — use C++ allocation
- Avoid `reinterpret_cast` unless absolutely necessary

## Buffer Overflows

- Use `std::string` over `char*`
- Use `.at()` for bounds-checked access when safety matters
- Never use `strcpy`, `strcat`, `sprintf` — use `std::string` or `fmt::format`

## Undefined Behavior

- Always initialize variables
- Avoid signed integer overflow
- Never dereference null or dangling pointers
- Use sanitizers in CI:

  ```bash
  cmake -DCMAKE_CXX_FLAGS="-fsanitize=address,undefined" ..
  ```

## Static Analysis

- Use **clang-tidy** for automated checks:

  ```bash
  clang-tidy --checks='*' src/*.cpp
  ```

- Use **cppcheck** for additional analysis:

  ```bash
  cppcheck --enable=all src/
  ```

## Reference

See skill: `cpp-coding-standards` for detailed security guidelines.
rules/cpp/testing.md (new file, 44 lines)
@@ -0,0 +1,44 @@
---
paths:
  - "**/*.cpp"
  - "**/*.hpp"
  - "**/*.cc"
  - "**/*.hh"
  - "**/*.cxx"
  - "**/*.h"
  - "**/CMakeLists.txt"
---
# C++ Testing

> This file extends [common/testing.md](../common/testing.md) with C++ specific content.

## Framework

Use **GoogleTest** (gtest/gmock) with **CMake/CTest**.

## Running Tests

```bash
cmake --build build && ctest --test-dir build --output-on-failure
```

## Coverage

```bash
cmake -DCMAKE_CXX_FLAGS="--coverage" -DCMAKE_EXE_LINKER_FLAGS="--coverage" ..
cmake --build .
ctest --output-on-failure
lcov --capture --directory . --output-file coverage.info
```

## Sanitizers

Always run tests with sanitizers in CI:

```bash
cmake -DCMAKE_CXX_FLAGS="-fsanitize=address,undefined" ..
```

## Reference

See skill: `cpp-testing` for detailed C++ testing patterns, TDD workflow, and GoogleTest/GMock usage.
rules/java/coding-style.md (new file, 114 lines)
@@ -0,0 +1,114 @@
---
paths:
  - "**/*.java"
---
# Java Coding Style

> This file extends [common/coding-style.md](../common/coding-style.md) with Java-specific content.

## Formatting

- **google-java-format** or **Checkstyle** (Google or Sun style) for enforcement
- One public top-level type per file
- Consistent indent: 2 or 4 spaces (match project standard)
- Member order: constants, fields, constructors, public methods, protected, private

## Immutability

- Prefer `record` for value types (Java 16+)
- Mark fields `final` by default — use mutable state only when required
- Return defensive copies from public APIs: `List.copyOf()`, `Map.copyOf()`, `Set.copyOf()`
- Copy-on-write: return new instances rather than mutating existing ones

```java
// GOOD — immutable value type
public record OrderSummary(Long id, String customerName, BigDecimal total) {}

// GOOD — final fields, no setters
public class Order {
    private final Long id;
    private final List<LineItem> items;

    public List<LineItem> getItems() {
        return List.copyOf(items);
    }
}
```

## Naming

Follow standard Java conventions:

- `PascalCase` for classes, interfaces, records, enums
- `camelCase` for methods, fields, parameters, local variables
- `SCREAMING_SNAKE_CASE` for `static final` constants
- Packages: all lowercase, reverse domain (`com.example.app.service`)

## Modern Java Features

Use modern language features where they improve clarity:

- **Records** for DTOs and value types (Java 16+)
- **Sealed classes** for closed type hierarchies (Java 17+)
- **Pattern matching** with `instanceof` — no explicit cast (Java 16+)
- **Text blocks** for multi-line strings — SQL, JSON templates (Java 15+)
- **Switch expressions** with arrow syntax (Java 14+)
- **Pattern matching in switch** — exhaustive sealed type handling (Java 21+)

```java
// Pattern matching instanceof
if (shape instanceof Circle c) {
    return Math.PI * c.radius() * c.radius();
}

// Sealed type hierarchy
public sealed interface PaymentMethod permits CreditCard, BankTransfer, Wallet {}

// Switch expression
String label = switch (status) {
    case ACTIVE -> "Active";
    case SUSPENDED -> "Suspended";
    case CLOSED -> "Closed";
};
```

## Optional Usage

- Return `Optional<T>` from finder methods that may have no result
- Use `map()`, `flatMap()`, `orElseThrow()` — never call `get()` without `isPresent()`
- Never use `Optional` as a field type or method parameter

```java
// GOOD
return repository.findById(id)
    .map(ResponseDto::from)
    .orElseThrow(() -> new OrderNotFoundException(id));

// BAD — Optional as parameter
public void process(Optional<String> name) {}
```

## Error Handling

- Prefer unchecked exceptions for domain errors
- Create domain-specific exceptions extending `RuntimeException`
- Avoid broad `catch (Exception e)` unless at top-level handlers
- Include context in exception messages

```java
public class OrderNotFoundException extends RuntimeException {
    public OrderNotFoundException(Long id) {
        super("Order not found: id=" + id);
    }
}
```

## Streams

- Use streams for transformations; keep pipelines short (3-4 operations max)
- Prefer method references when readable: `.map(Order::getTotal)`
- Avoid side effects in stream operations
- For complex logic, prefer a loop over a convoluted stream pipeline

## References

See skill: `java-coding-standards` for full coding standards with examples.
See skill: `jpa-patterns` for JPA/Hibernate entity design patterns.
rules/java/hooks.md (new file, 18 lines)
@@ -0,0 +1,18 @@
---
paths:
  - "**/*.java"
  - "**/pom.xml"
  - "**/build.gradle"
  - "**/build.gradle.kts"
---
# Java Hooks

> This file extends [common/hooks.md](../common/hooks.md) with Java-specific content.

## PostToolUse Hooks

Configure in `~/.claude/settings.json`:

- **google-java-format**: Auto-format `.java` files after edit
- **checkstyle**: Run style checks after editing Java files
- **./mvnw compile** or **./gradlew compileJava**: Verify compilation after changes
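A minimal sketch of what such an entry could look like in `~/.claude/settings.json`. The matcher and command below are illustrative assumptions, not the only valid shape — check the Claude Code hooks documentation for the authoritative schema:

```json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Edit|Write",
        "hooks": [
          {
            "type": "command",
            "command": "sh -c 'git diff --name-only | grep \"\\.java$\" | xargs -r google-java-format -i'"
          }
        ]
      }
    ]
  }
}
```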
rules/java/patterns.md (new file, 146 lines)
@@ -0,0 +1,146 @@
---
paths:
  - "**/*.java"
---
# Java Patterns

> This file extends [common/patterns.md](../common/patterns.md) with Java-specific content.

## Repository Pattern

Encapsulate data access behind an interface:

```java
public interface OrderRepository {
    Optional<Order> findById(Long id);
    List<Order> findAll();
    Order save(Order order);
    void deleteById(Long id);
}
```

Concrete implementations handle storage details (JPA, JDBC, in-memory for tests).

## Service Layer

Business logic in service classes; keep controllers and repositories thin:

```java
public class OrderService {
    private final OrderRepository orderRepository;
    private final PaymentGateway paymentGateway;

    public OrderService(OrderRepository orderRepository, PaymentGateway paymentGateway) {
        this.orderRepository = orderRepository;
        this.paymentGateway = paymentGateway;
    }

    public OrderSummary placeOrder(CreateOrderRequest request) {
        var order = Order.from(request);
        paymentGateway.charge(order.total());
        var saved = orderRepository.save(order);
        return OrderSummary.from(saved);
    }
}
```

## Constructor Injection

Always use constructor injection — never field injection:

```java
// GOOD — constructor injection (testable, immutable)
public class NotificationService {
    private final EmailSender emailSender;

    public NotificationService(EmailSender emailSender) {
        this.emailSender = emailSender;
    }
}

// BAD — field injection (untestable without reflection, requires framework magic)
public class NotificationService {
    @Inject // or @Autowired
    private EmailSender emailSender;
}
```

## DTO Mapping

Use records for DTOs. Map at service/controller boundaries:

```java
public record OrderResponse(Long id, String customer, BigDecimal total) {
    public static OrderResponse from(Order order) {
        return new OrderResponse(order.getId(), order.getCustomerName(), order.getTotal());
    }
}
```

## Builder Pattern

Use for objects with many optional parameters:

```java
public class SearchCriteria {
    private final String query;
    private final int page;
    private final int size;
    private final String sortBy;

    private SearchCriteria(Builder builder) {
        this.query = builder.query;
        this.page = builder.page;
        this.size = builder.size;
        this.sortBy = builder.sortBy;
    }

    public static class Builder {
        private String query = "";
        private int page = 0;
        private int size = 20;
        private String sortBy = "id";

        public Builder query(String query) { this.query = query; return this; }
        public Builder page(int page) { this.page = page; return this; }
        public Builder size(int size) { this.size = size; return this; }
        public Builder sortBy(String sortBy) { this.sortBy = sortBy; return this; }
        public SearchCriteria build() { return new SearchCriteria(this); }
    }
}
```

## Sealed Types for Domain Models

```java
public sealed interface PaymentResult permits PaymentSuccess, PaymentFailure {
    record PaymentSuccess(String transactionId, BigDecimal amount) implements PaymentResult {}
    record PaymentFailure(String errorCode, String message) implements PaymentResult {}
}

// Exhaustive handling (Java 21+)
String message = switch (result) {
    case PaymentSuccess s -> "Paid: " + s.transactionId();
    case PaymentFailure f -> "Failed: " + f.errorCode();
};
```

## API Response Envelope

Consistent API responses:

```java
public record ApiResponse<T>(boolean success, T data, String error) {
    public static <T> ApiResponse<T> ok(T data) {
        return new ApiResponse<>(true, data, null);
    }

    public static <T> ApiResponse<T> error(String message) {
        return new ApiResponse<>(false, null, message);
    }
}
```

## References

See skill: `springboot-patterns` for Spring Boot architecture patterns.
See skill: `jpa-patterns` for entity design and query optimization.
rules/java/security.md (new file, 100 lines)
@@ -0,0 +1,100 @@
---
paths:
  - "**/*.java"
---
# Java Security

> This file extends [common/security.md](../common/security.md) with Java-specific content.

## Secrets Management

- Never hardcode API keys, tokens, or credentials in source code
- Use environment variables: `System.getenv("API_KEY")`
- Use a secret manager (Vault, AWS Secrets Manager) for production secrets
- Keep local config files with secrets in `.gitignore`

```java
// BAD
private static final String API_KEY = "sk-abc123...";

// GOOD — environment variable
String apiKey = System.getenv("PAYMENT_API_KEY");
Objects.requireNonNull(apiKey, "PAYMENT_API_KEY must be set");
```

## SQL Injection Prevention

- Always use parameterized queries — never concatenate user input into SQL
- Use `PreparedStatement` or your framework's parameterized query API
- Validate and sanitize any input used in native queries

```java
// BAD — SQL injection via string concatenation
Statement stmt = conn.createStatement();
String sql = "SELECT * FROM orders WHERE name = '" + name + "'";
stmt.executeQuery(sql);

// GOOD — PreparedStatement with parameterized query
PreparedStatement ps = conn.prepareStatement("SELECT * FROM orders WHERE name = ?");
ps.setString(1, name);

// GOOD — JDBC template
jdbcTemplate.query("SELECT * FROM orders WHERE name = ?", mapper, name);
```

## Input Validation

- Validate all user input at system boundaries before processing
- Use Bean Validation (`@NotNull`, `@NotBlank`, `@Size`) on DTOs when using a validation framework
- Sanitize file paths and user-provided strings before use
- Reject input that fails validation with clear error messages

```java
// Validate manually in plain Java
public Order createOrder(String customerName, BigDecimal amount) {
    if (customerName == null || customerName.isBlank()) {
        throw new IllegalArgumentException("Customer name is required");
    }
    if (amount == null || amount.compareTo(BigDecimal.ZERO) <= 0) {
        throw new IllegalArgumentException("Amount must be positive");
    }
    return new Order(customerName, amount);
}
```

## Authentication and Authorization

- Never implement custom auth crypto — use established libraries
- Store passwords with bcrypt or Argon2, never MD5/SHA1
- Enforce authorization checks at service boundaries
- Clear sensitive data from logs — never log passwords, tokens, or PII

## Dependency Security

- Run `mvn dependency:tree` or `./gradlew dependencies` to audit transitive dependencies
- Use OWASP Dependency-Check or Snyk to scan for known CVEs
- Keep dependencies updated — set up Dependabot or Renovate

## Error Messages

- Never expose stack traces, internal paths, or SQL errors in API responses
- Map exceptions to safe, generic client messages at handler boundaries
- Log detailed errors server-side; return generic messages to clients

```java
// Log the detail, return a generic message
try {
    return orderService.findById(id);
} catch (OrderNotFoundException ex) {
    log.warn("Order not found: id={}", id);
    return ApiResponse.error("Resource not found"); // generic, no internals
} catch (Exception ex) {
    log.error("Unexpected error processing order id={}", id, ex);
    return ApiResponse.error("Internal server error"); // never expose ex.getMessage()
}
```

## References

See skill: `springboot-security` for Spring Security authentication and authorization patterns.
See skill: `security-review` for general security checklists.
rules/java/testing.md (new file, 131 lines)
@@ -0,0 +1,131 @@
---
paths:
  - "**/*.java"
---
# Java Testing

> This file extends [common/testing.md](../common/testing.md) with Java-specific content.

## Test Framework

- **JUnit 5** (`@Test`, `@ParameterizedTest`, `@Nested`, `@DisplayName`)
- **AssertJ** for fluent assertions (`assertThat(result).isEqualTo(expected)`)
- **Mockito** for mocking dependencies
- **Testcontainers** for integration tests requiring databases or services

## Test Organization

```
src/test/java/com/example/app/
  service/       # Unit tests for service layer
  controller/    # Web layer / API tests
  repository/    # Data access tests
  integration/   # Cross-layer integration tests
```

Mirror the `src/main/java` package structure in `src/test/java`.

## Unit Test Pattern

```java
@ExtendWith(MockitoExtension.class)
class OrderServiceTest {

    @Mock
    private OrderRepository orderRepository;

    private OrderService orderService;

    @BeforeEach
    void setUp() {
        orderService = new OrderService(orderRepository);
    }

    @Test
    @DisplayName("findById returns order when exists")
    void findById_existingOrder_returnsOrder() {
        var order = new Order(1L, "Alice", BigDecimal.TEN);
        when(orderRepository.findById(1L)).thenReturn(Optional.of(order));

        var result = orderService.findById(1L);

        assertThat(result.customerName()).isEqualTo("Alice");
        verify(orderRepository).findById(1L);
    }

    @Test
    @DisplayName("findById throws when order not found")
    void findById_missingOrder_throws() {
        when(orderRepository.findById(99L)).thenReturn(Optional.empty());

        assertThatThrownBy(() -> orderService.findById(99L))
            .isInstanceOf(OrderNotFoundException.class)
            .hasMessageContaining("99");
    }
}
```

## Parameterized Tests

```java
@ParameterizedTest
@CsvSource({
    "100.00, 10, 90.00",
    "50.00, 0, 50.00",
    "200.00, 25, 150.00"
})
@DisplayName("discount applied correctly")
void applyDiscount(BigDecimal price, int pct, BigDecimal expected) {
    assertThat(PricingUtils.discount(price, pct)).isEqualByComparingTo(expected);
}
```

## Integration Tests

Use Testcontainers for real database integration:

```java
|
||||
@Testcontainers
|
||||
class OrderRepositoryIT {
|
||||
|
||||
@Container
|
||||
static PostgreSQLContainer<?> postgres = new PostgreSQLContainer<>("postgres:16");
|
||||
|
||||
private OrderRepository repository;
|
||||
|
||||
@BeforeEach
|
||||
void setUp() {
|
||||
var dataSource = new PGSimpleDataSource();
|
||||
dataSource.setUrl(postgres.getJdbcUrl());
|
||||
dataSource.setUser(postgres.getUsername());
|
||||
dataSource.setPassword(postgres.getPassword());
|
||||
repository = new JdbcOrderRepository(dataSource);
|
||||
}
|
||||
|
||||
@Test
|
||||
void save_and_findById() {
|
||||
var saved = repository.save(new Order(null, "Bob", BigDecimal.ONE));
|
||||
var found = repository.findById(saved.getId());
|
||||
assertThat(found).isPresent();
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
For Spring Boot integration tests, see skill: `springboot-tdd`.
|
||||
|
||||
## Test Naming
|
||||
|
||||
Use descriptive names with `@DisplayName`:
|
||||
- `methodName_scenario_expectedBehavior()` for method names
|
||||
- `@DisplayName("human-readable description")` for reports
|
||||
|
||||
## Coverage
|
||||
|
||||
- Target 80%+ line coverage
|
||||
- Use JaCoCo for coverage reporting
|
||||
- Focus on service and domain logic — skip trivial getters/config classes
|
||||
|
||||
## References
|
||||
|
||||
See skill: `springboot-tdd` for Spring Boot TDD patterns with MockMvc and Testcontainers.
|
||||
See skill: `java-coding-standards` for testing expectations.
|
||||
@@ -190,6 +190,7 @@ function buildOrchestrationPlan(config = {}) {
     throw new Error('buildOrchestrationPlan requires at least one worker');
   }
 
+  const seenSlugs = new Set();
   const workerPlans = workers.map((worker, index) => {
     if (!worker || typeof worker.task !== 'string' || worker.task.trim().length === 0) {
       throw new Error(`Worker ${index + 1} is missing a task`);
@@ -197,6 +198,12 @@ function buildOrchestrationPlan(config = {}) {
 
     const workerName = worker.name || `worker-${index + 1}`;
     const workerSlug = slugify(workerName, `worker-${index + 1}`);
+
+    if (seenSlugs.has(workerSlug)) {
+      throw new Error(`Workers must have unique slugs — duplicate: ${workerSlug}`);
+    }
+    seenSlugs.add(workerSlug);
+
     const branchName = `orchestrator-${sessionName}-${workerSlug}`;
     const worktreePath = path.join(worktreeRoot, `${repoName}-${sessionName}-${workerSlug}`);
     const workerCoordinationDir = path.join(coordinationDir, workerSlug);
@@ -206,7 +213,7 @@ function buildOrchestrationPlan(config = {}) {
     const launcherCommand = worker.launcherCommand || defaultLauncher;
     const workerSeedPaths = normalizeSeedPaths(worker.seedPaths, repoRoot);
     const seedPaths = normalizeSeedPaths([...globalSeedPaths, ...workerSeedPaths], repoRoot);
-    const templateVariables = {
+    const templateVariables = buildTemplateVariables({
       branch_name: branchName,
       handoff_file: handoffFilePath,
       repo_root: repoRoot,
@@ -216,7 +223,7 @@ function buildOrchestrationPlan(config = {}) {
       worker_name: workerName,
       worker_slug: workerSlug,
       worktree_path: worktreePath
-    };
+    });
 
     if (!launcherCommand) {
       throw new Error(`Worker ${workerName} is missing a launcherCommand`);
148
skills/agent-eval/SKILL.md
Normal file
@@ -0,0 +1,148 @@
---
name: agent-eval
description: Head-to-head comparison of coding agents (Claude Code, Aider, Codex, etc.) on custom tasks with pass rate, cost, time, and consistency metrics
origin: ECC
tools: Read, Write, Edit, Bash, Grep, Glob
---

# Agent Eval Skill

A lightweight CLI tool for comparing coding agents head-to-head on reproducible tasks. Every "which coding agent is best?" comparison runs on vibes — this tool systematizes it.

## When to Activate

- Comparing coding agents (Claude Code, Aider, Codex, etc.) on your own codebase
- Measuring agent performance before adopting a new tool or model
- Running regression checks when an agent updates its model or tooling
- Producing data-backed agent selection decisions for a team

## Installation

```bash
# pinned to v0.1.0 — latest stable commit
pip install git+https://github.com/joaquinhuigomez/agent-eval.git@6d062a2f5cda6ea443bf5d458d361892c04e749b
```

## Core Concepts

### YAML Task Definitions

Define tasks declaratively. Each task specifies what to do, which files to touch, and how to judge success:

```yaml
name: add-retry-logic
description: Add exponential backoff retry to the HTTP client
repo: ./my-project
files:
  - src/http_client.py
prompt: |
  Add retry logic with exponential backoff to all HTTP requests.
  Max 3 retries. Initial delay 1s, max delay 30s.
judge:
  - type: pytest
    command: pytest tests/test_http_client.py -v
  - type: grep
    pattern: "exponential_backoff|retry"
    files: src/http_client.py
commit: "abc1234"  # pin to specific commit for reproducibility
```
### Git Worktree Isolation

Each agent run gets its own git worktree — no Docker required. Runs are isolated and reproducible: agents cannot interfere with each other or corrupt the base repo.

### Metrics Collected

| Metric | What It Measures |
|--------|-----------------|
| Pass rate | Did the agent produce code that passes the judge? |
| Cost | API spend per task (when available) |
| Time | Wall-clock seconds to completion |
| Consistency | Pass rate across repeated runs (e.g., 3/3 = 100%) |
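These metrics reduce to simple aggregation over per-run records. A minimal sketch, assuming a hypothetical `RunRecord` shape rather than the tool's internal format:

```python
from dataclasses import dataclass

@dataclass
class RunRecord:
    agent: str
    passed: bool
    cost_usd: float
    seconds: float

def summarize(runs: list[RunRecord]) -> dict:
    """Aggregate repeated runs of one agent on one task."""
    total = len(runs)
    passes = sum(r.passed for r in runs)
    return {
        "pass_rate": f"{passes}/{total}",
        "consistency_pct": round(100 * passes / total),
        "avg_cost_usd": round(sum(r.cost_usd for r in runs) / total, 2),
        "avg_seconds": round(sum(r.seconds for r in runs) / total, 1),
    }

runs = [
    RunRecord("claude-code", True, 0.11, 44.0),
    RunRecord("claude-code", True, 0.13, 46.0),
    RunRecord("claude-code", True, 0.12, 45.0),
]
print(summarize(runs))  # pass_rate '3/3', consistency_pct 100
```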

## Workflow

### 1. Define Tasks

Create a `tasks/` directory with YAML files, one per task:

```bash
mkdir tasks
# Write task definitions (see template above)
```

### 2. Run Agents

Execute agents against your tasks:

```bash
agent-eval run --task tasks/add-retry-logic.yaml --agent claude-code --agent aider --runs 3
```

Each run:
1. Creates a fresh git worktree from the specified commit
2. Hands the prompt to the agent
3. Runs the judge criteria
4. Records pass/fail, cost, and time

### 3. Compare Results

Generate a comparison report:

```bash
agent-eval report --format table
```

```
Task: add-retry-logic (3 runs each)
┌──────────────┬───────────┬────────┬────────┬─────────────┐
│ Agent        │ Pass Rate │ Cost   │ Time   │ Consistency │
├──────────────┼───────────┼────────┼────────┼─────────────┤
│ claude-code  │ 3/3       │ $0.12  │ 45s    │ 100%        │
│ aider        │ 2/3       │ $0.08  │ 38s    │ 67%         │
└──────────────┴───────────┴────────┴────────┴─────────────┘
```

## Judge Types

### Code-Based (deterministic)

```yaml
judge:
  - type: pytest
    command: pytest tests/ -v
  - type: command
    command: npm run build
```

### Pattern-Based

```yaml
judge:
  - type: grep
    pattern: "class.*Retry"
    files: src/**/*.py
```

### Model-Based (LLM-as-judge)

```yaml
judge:
  - type: llm
    prompt: |
      Does this implementation correctly handle exponential backoff?
      Check for: max retries, increasing delays, jitter.
```

## Best Practices

- **Start with 3-5 tasks** that represent your real workload, not toy examples
- **Run at least 3 trials** per agent to capture variance — agents are non-deterministic
- **Pin the commit** in your task YAML so results are reproducible across days/weeks
- **Include at least one deterministic judge** (tests, build) per task — LLM judges add noise
- **Track cost alongside pass rate** — a 95% agent at 10x the cost may not be the right choice
- **Version your task definitions** — they are test fixtures, treat them as code

## Links

- Repository: [github.com/joaquinhuigomez/agent-eval](https://github.com/joaquinhuigomez/agent-eval)
179
skills/architecture-decision-records/SKILL.md
Normal file
@@ -0,0 +1,179 @@
---
name: architecture-decision-records
description: Capture architectural decisions made during Claude Code sessions as structured ADRs. Auto-detects decision moments, records context, alternatives considered, and rationale. Maintains an ADR log so future developers understand why the codebase is shaped the way it is.
origin: ECC
---

# Architecture Decision Records

Capture architectural decisions as they happen during coding sessions. Instead of decisions living only in Slack threads, PR comments, or someone's memory, this skill produces structured ADR documents that live alongside the code.

## When to Activate

- User explicitly says "let's record this decision" or "ADR this"
- User chooses between significant alternatives (framework, library, pattern, database, API design)
- User says "we decided to..." or "the reason we're doing X instead of Y is..."
- User asks "why did we choose X?" (read existing ADRs)
- During planning phases when architectural trade-offs are discussed

## ADR Format

Use the lightweight ADR format proposed by Michael Nygard, adapted for AI-assisted development:

```markdown
# ADR-NNNN: [Decision Title]

**Date**: YYYY-MM-DD
**Status**: proposed | accepted | deprecated | superseded by ADR-NNNN
**Deciders**: [who was involved]

## Context

What is the issue that we're seeing that is motivating this decision or change?

[2-5 sentences describing the situation, constraints, and forces at play]

## Decision

What is the change that we're proposing and/or doing?

[1-3 sentences stating the decision clearly]

## Alternatives Considered

### Alternative 1: [Name]
- **Pros**: [benefits]
- **Cons**: [drawbacks]
- **Why not**: [specific reason this was rejected]

### Alternative 2: [Name]
- **Pros**: [benefits]
- **Cons**: [drawbacks]
- **Why not**: [specific reason this was rejected]

## Consequences

What becomes easier or more difficult to do because of this change?

### Positive
- [benefit 1]
- [benefit 2]

### Negative
- [trade-off 1]
- [trade-off 2]

### Risks
- [risk and mitigation]
```

## Workflow

### Capturing a New ADR

When a decision moment is detected:

1. **Initialize (first time only)** — if `docs/adr/` does not exist, ask the user for confirmation before creating the directory, a `README.md` seeded with the index table header (see ADR Index Format below), and a blank `template.md` for manual use. Do not create files without explicit consent.
2. **Identify the decision** — extract the core architectural choice being made
3. **Gather context** — what problem prompted this? What constraints exist?
4. **Document alternatives** — what other options were considered? Why were they rejected?
5. **State consequences** — what are the trade-offs? What becomes easier/harder?
6. **Assign a number** — scan existing ADRs in `docs/adr/` and increment
7. **Confirm and write** — present the draft ADR to the user for review. Only write to `docs/adr/NNNN-decision-title.md` after explicit approval. If the user declines, discard the draft without writing any files.
8. **Update the index** — append to `docs/adr/README.md`
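Step 6 above can be sketched in a few lines; a minimal illustration assuming ADR files are named `NNNN-title.md`, with `next_adr_number` as a hypothetical helper name:

```python
import re
from pathlib import Path

def next_adr_number(adr_dir: Path) -> str:
    """Scan docs/adr/ for NNNN-*.md files and return the next zero-padded number."""
    numbers = [
        int(m.group(1))
        for f in adr_dir.glob("*.md")
        if (m := re.match(r"(\d{4})-", f.name))
    ]
    return f"{max(numbers, default=0) + 1:04d}"
```

With `0001-use-nextjs.md` and `0002-postgres-over-mongo.md` present, this returns `0003`; `template.md` and `README.md` are ignored because they lack the numeric prefix.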

### Reading Existing ADRs

When a user asks "why did we choose X?":

1. Check if `docs/adr/` exists — if not, respond: "No ADRs found in this project. Would you like to start recording architectural decisions?"
2. If it exists, scan the `docs/adr/README.md` index for relevant entries
3. Read matching ADR files and present the Context and Decision sections
4. If no match is found, respond: "No ADR found for that decision. Would you like to record one now?"

### ADR Directory Structure

```
docs/
└── adr/
    ├── README.md                    ← index of all ADRs
    ├── 0001-use-nextjs.md
    ├── 0002-postgres-over-mongo.md
    ├── 0003-rest-over-graphql.md
    └── template.md                  ← blank template for manual use
```

### ADR Index Format

```markdown
# Architecture Decision Records

| ADR | Title | Status | Date |
|-----|-------|--------|------|
| [0001](0001-use-nextjs.md) | Use Next.js as frontend framework | accepted | 2026-01-15 |
| [0002](0002-postgres-over-mongo.md) | PostgreSQL over MongoDB for primary datastore | accepted | 2026-01-20 |
| [0003](0003-rest-over-graphql.md) | REST API over GraphQL | accepted | 2026-02-01 |
```

## Decision Detection Signals

Watch for these patterns in conversation that indicate an architectural decision:

**Explicit signals**
- "Let's go with X"
- "We should use X instead of Y"
- "The trade-off is worth it because..."
- "Record this as an ADR"

**Implicit signals** (suggest recording an ADR — do not auto-create without user confirmation)
- Comparing two frameworks or libraries and reaching a conclusion
- Making a database schema design choice with stated rationale
- Choosing between architectural patterns (monolith vs microservices, REST vs GraphQL)
- Deciding on authentication/authorization strategy
- Selecting deployment infrastructure after evaluating alternatives

## What Makes a Good ADR

### Do
- **Be specific** — "Use Prisma ORM" not "use an ORM"
- **Record the why** — the rationale matters more than the what
- **Include rejected alternatives** — future developers need to know what was considered
- **State consequences honestly** — every decision has trade-offs
- **Keep it short** — an ADR should be readable in 2 minutes
- **Use present tense** — "We use X" not "We will use X"

### Don't
- Record trivial decisions — variable naming or formatting choices don't need ADRs
- Write essays — if the context section exceeds 10 lines, it's too long
- Omit alternatives — "we just picked it" is not a valid rationale
- Backfill without marking it — if recording a past decision, note the original date
- Let ADRs go stale — superseded decisions should reference their replacement

## ADR Lifecycle

```
proposed → accepted → [deprecated | superseded by ADR-NNNN]
```

- **proposed**: decision is under discussion, not yet committed
- **accepted**: decision is in effect and being followed
- **deprecated**: decision is no longer relevant (e.g., feature removed)
- **superseded**: a newer ADR replaces this one (always link the replacement)

## Categories of Decisions Worth Recording

| Category | Examples |
|----------|---------|
| **Technology choices** | Framework, language, database, cloud provider |
| **Architecture patterns** | Monolith vs microservices, event-driven, CQRS |
| **API design** | REST vs GraphQL, versioning strategy, auth mechanism |
| **Data modeling** | Schema design, normalization decisions, caching strategy |
| **Infrastructure** | Deployment model, CI/CD pipeline, monitoring stack |
| **Security** | Auth strategy, encryption approach, secret management |
| **Testing** | Test framework, coverage targets, E2E vs integration balance |
| **Process** | Branching strategy, review process, release cadence |

## Integration with Other Skills

- **Planner agent**: when the planner proposes architecture changes, suggest creating an ADR
- **Code reviewer agent**: flag PRs that introduce architectural changes without a corresponding ADR
233
skills/codebase-onboarding/SKILL.md
Normal file
@@ -0,0 +1,233 @@
---
name: codebase-onboarding
description: Analyze an unfamiliar codebase and generate a structured onboarding guide with architecture map, key entry points, conventions, and a starter CLAUDE.md. Use when joining a new project or setting up Claude Code for the first time in a repo.
origin: ECC
---

# Codebase Onboarding

Systematically analyze an unfamiliar codebase and produce a structured onboarding guide. Designed for developers joining a new project or setting up Claude Code in an existing repo for the first time.

## When to Use

- First time opening a project with Claude Code
- Joining a new team or repository
- User asks "help me understand this codebase"
- User asks to generate a CLAUDE.md for a project
- User says "onboard me" or "walk me through this repo"

## How It Works

### Phase 1: Reconnaissance

Gather raw signals about the project without reading every file. Run these checks in parallel:

```
1. Package manifest detection
   → package.json, go.mod, Cargo.toml, pyproject.toml, pom.xml, build.gradle,
     Gemfile, composer.json, mix.exs, pubspec.yaml

2. Framework fingerprinting
   → next.config.*, nuxt.config.*, angular.json, vite.config.*,
     django settings, flask app factory, fastapi main, rails config

3. Entry point identification
   → main.*, index.*, app.*, server.*, cmd/, src/main/

4. Directory structure snapshot
   → Top 2 levels of the directory tree, ignoring node_modules, vendor,
     .git, dist, build, __pycache__, .next

5. Config and tooling detection
   → .eslintrc*, .prettierrc*, tsconfig.json, Makefile, Dockerfile,
     docker-compose*, .github/workflows/, .env.example, CI configs

6. Test structure detection
   → tests/, test/, __tests__/, *_test.go, *.spec.ts, *.test.js,
     pytest.ini, jest.config.*, vitest.config.*
```
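Step 1 above amounts to a filename-to-ecosystem lookup. A minimal sketch, assuming detection only at the repo root (the skill itself would use Glob rather than a script):

```python
from pathlib import Path

# Manifest filename → language/ecosystem it implies
MANIFESTS = {
    "package.json": "JavaScript/TypeScript (npm)",
    "go.mod": "Go",
    "Cargo.toml": "Rust",
    "pyproject.toml": "Python",
    "pom.xml": "Java (Maven)",
    "build.gradle": "Java/Kotlin (Gradle)",
    "Gemfile": "Ruby",
    "composer.json": "PHP",
    "mix.exs": "Elixir",
    "pubspec.yaml": "Dart/Flutter",
}

def detect_ecosystems(repo_root: Path) -> list[str]:
    """Return the ecosystems whose manifest files exist at the repo root."""
    return [eco for name, eco in MANIFESTS.items() if (repo_root / name).exists()]
```

A repo with both `package.json` and `go.mod` would report two ecosystems, which is itself a useful signal (monorepo or mixed-language project).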

### Phase 2: Architecture Mapping

From the reconnaissance data, identify:

**Tech Stack**
- Language(s) and version constraints
- Framework(s) and major libraries
- Database(s) and ORMs
- Build tools and bundlers
- CI/CD platform

**Architecture Pattern**
- Monolith, monorepo, microservices, or serverless
- Frontend/backend split or full-stack
- API style: REST, GraphQL, gRPC, tRPC

**Key Directories**
Map the top-level directories to their purpose:

<!-- Example for a React project — replace with detected directories -->
```
src/components/   → React UI components
src/api/          → API route handlers
src/lib/          → Shared utilities
src/db/           → Database models and migrations
tests/            → Test suites
scripts/          → Build and deployment scripts
```

**Data Flow**
Trace one request from entry to response:
- Where does a request enter? (router, handler, controller)
- How is it validated? (middleware, schemas, guards)
- Where is business logic? (services, models, use cases)
- How does it reach the database? (ORM, raw queries, repositories)

### Phase 3: Convention Detection

Identify patterns the codebase already follows:

**Naming Conventions**
- File naming: kebab-case, camelCase, PascalCase, snake_case
- Component/class naming patterns
- Test file naming: `*.test.ts`, `*.spec.ts`, `*_test.go`

**Code Patterns**
- Error handling style: try/catch, Result types, error codes
- Dependency injection or direct imports
- State management approach
- Async patterns: callbacks, promises, async/await, channels

**Git Conventions**
- Branch naming from recent branches
- Commit message style from recent commits
- PR workflow (squash, merge, rebase)
- If the repo has no commits yet or only a shallow history (e.g. `git clone --depth 1`), skip this section and note "Git history unavailable or too shallow to detect conventions"
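File-naming detection can be approximated as a majority vote over file stems; a rough sketch with illustrative regexes, not the skill's actual logic:

```python
import re
from collections import Counter

# Illustrative patterns for the four common naming styles
PATTERNS = {
    "kebab-case": re.compile(r"^[a-z0-9]+(-[a-z0-9]+)+$"),
    "snake_case": re.compile(r"^[a-z0-9]+(_[a-z0-9]+)+$"),
    "camelCase": re.compile(r"^[a-z]+([A-Z][a-z0-9]*)+$"),
    "PascalCase": re.compile(r"^([A-Z][a-z0-9]*){2,}$"),
}

def dominant_naming(stems: list[str]) -> str:
    """Return the most common naming style among file stems, or 'unknown'."""
    votes = Counter(
        style
        for stem in stems
        for style, rx in PATTERNS.items()
        if rx.match(stem)
    )
    return votes.most_common(1)[0][0] if votes else "unknown"
```

Single-word stems like `utils` match no pattern and simply don't vote, which keeps the verdict driven by the files that actually express a convention.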

### Phase 4: Generate Onboarding Artifacts

Produce two outputs:

#### Output 1: Onboarding Guide

```markdown
# Onboarding Guide: [Project Name]

## Overview
[2-3 sentences: what this project does and who it serves]

## Tech Stack
<!-- Example for a Next.js project — replace with detected stack -->
| Layer | Technology | Version |
|-------|-----------|---------|
| Language | TypeScript | 5.x |
| Framework | Next.js | 14.x |
| Database | PostgreSQL | 16 |
| ORM | Prisma | 5.x |
| Testing | Jest + Playwright | - |

## Architecture
[Diagram or description of how components connect]

## Key Entry Points
<!-- Example for a Next.js project — replace with detected paths -->
- **API routes**: `src/app/api/` — Next.js route handlers
- **UI pages**: `src/app/(dashboard)/` — authenticated pages
- **Database**: `prisma/schema.prisma` — data model source of truth
- **Config**: `next.config.ts` — build and runtime config

## Directory Map
[Top-level directory → purpose mapping]

## Request Lifecycle
[Trace one API request from entry to response]

## Conventions
- [File naming pattern]
- [Error handling approach]
- [Testing patterns]
- [Git workflow]

## Common Tasks
<!-- Example for a Node.js project — replace with detected commands -->
- **Run dev server**: `npm run dev`
- **Run tests**: `npm test`
- **Run linter**: `npm run lint`
- **Database migrations**: `npx prisma migrate dev`
- **Build for production**: `npm run build`

## Where to Look
<!-- Example for a Next.js project — replace with detected paths -->
| I want to... | Look at... |
|--------------|-----------|
| Add an API endpoint | `src/app/api/` |
| Add a UI page | `src/app/(dashboard)/` |
| Add a database table | `prisma/schema.prisma` |
| Add a test | `tests/` matching the source path |
| Change build config | `next.config.ts` |
```

#### Output 2: Starter CLAUDE.md

Generate or update a project-specific CLAUDE.md based on detected conventions. If `CLAUDE.md` already exists, read it first and enhance it — preserve existing project-specific instructions and clearly call out what was added or changed.

```markdown
# Project Instructions

## Tech Stack
[Detected stack summary]

## Code Style
- [Detected naming conventions]
- [Detected patterns to follow]

## Testing
- Run tests: `[detected test command]`
- Test pattern: [detected test file convention]
- Coverage: [if configured, the coverage command]

## Build & Run
- Dev: `[detected dev command]`
- Build: `[detected build command]`
- Lint: `[detected lint command]`

## Project Structure
[Key directory → purpose map]

## Conventions
- [Commit style if detectable]
- [PR workflow if detectable]
- [Error handling patterns]
```

## Best Practices

1. **Don't read everything** — reconnaissance should use Glob and Grep, not Read on every file. Read selectively only for ambiguous signals.
2. **Verify, don't guess** — if a framework is detected from config but the actual code uses something different, trust the code.
3. **Respect existing CLAUDE.md** — if one already exists, enhance it rather than replacing it. Call out what's new vs existing.
4. **Stay concise** — the onboarding guide should be scannable in 2 minutes. Details belong in the code, not the guide.
5. **Flag unknowns** — if a convention can't be confidently detected, say so rather than guessing. "Could not determine test runner" is better than a wrong answer.

## Anti-Patterns to Avoid

- Generating a CLAUDE.md that's longer than 100 lines — keep it focused
- Listing every dependency — highlight only the ones that shape how you write code
- Describing obvious directory names — `src/` doesn't need an explanation
- Copying the README — the onboarding guide adds structural insight the README lacks

## Examples

### Example 1: First time in a new repo
**User**: "Onboard me to this codebase"
**Action**: Run full 4-phase workflow → produce Onboarding Guide + Starter CLAUDE.md
**Output**: Onboarding Guide printed directly to the conversation, plus a `CLAUDE.md` written to the project root

### Example 2: Generate CLAUDE.md for existing project
**User**: "Generate a CLAUDE.md for this project"
**Action**: Run Phases 1-3, skip Onboarding Guide, produce only CLAUDE.md
**Output**: Project-specific `CLAUDE.md` with detected conventions

### Example 3: Enhance existing CLAUDE.md
**User**: "Update the CLAUDE.md with current project conventions"
**Action**: Read existing CLAUDE.md, run Phases 1-3, merge new findings
**Output**: Updated `CLAUDE.md` with additions clearly marked
135
skills/context-budget/SKILL.md
Normal file
@@ -0,0 +1,135 @@
---
name: context-budget
description: Audits Claude Code context window consumption across agents, skills, MCP servers, and rules. Identifies bloat, redundant components, and produces prioritized token-savings recommendations.
origin: ECC
---

# Context Budget

Analyze token overhead across every loaded component in a Claude Code session and surface actionable optimizations to reclaim context space.

## When to Use

- Session performance feels sluggish or output quality is degrading
- You've recently added many skills, agents, or MCP servers
- You want to know how much context headroom you actually have
- Planning to add more components and need to know if there's room
- Running `/context-budget` command (this skill backs it)

## How It Works

### Phase 1: Inventory

Scan all component directories and estimate token consumption:

**Agents** (`agents/*.md`)
- Count lines and tokens per file (words × 1.3)
- Extract `description` frontmatter length
- Flag: files >200 lines (heavy), description >30 words (bloated frontmatter)

**Skills** (`skills/*/SKILL.md`)
- Count tokens per SKILL.md
- Flag: files >400 lines
- Check for duplicate copies in `.agents/skills/` — skip identical copies to avoid double-counting

**Rules** (`rules/**/*.md`)
- Count tokens per file
- Flag: files >100 lines
- Detect content overlap between rule files in the same language module

**MCP Servers** (`.mcp.json` or active MCP config)
- Count configured servers and total tool count
- Estimate schema overhead at ~500 tokens per tool
- Flag: servers with >20 tools, servers that wrap simple CLI commands (`gh`, `git`, `npm`, `supabase`, `vercel`)

**CLAUDE.md** (project + user-level)
- Count tokens per file in the CLAUDE.md chain
- Flag: combined total >300 lines
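The words × 1.3 heuristic and the per-file line flags reduce to a few lines of code; a rough sketch, not a real tokenizer, with `audit_file` as a hypothetical helper name:

```python
from pathlib import Path

def estimate_tokens(text: str) -> int:
    """Rough token estimate: ~1.3 tokens per whitespace-separated word."""
    return round(len(text.split()) * 1.3)

def audit_file(path: Path, max_lines: int) -> dict:
    """Estimate one component file's overhead and flag it if heavy."""
    text = path.read_text(encoding="utf-8")
    lines = text.count("\n") + 1
    return {
        "file": path.name,
        "lines": lines,
        "tokens": estimate_tokens(text),
        "heavy": lines > max_lines,  # e.g. 200 for agents, 400 for skills
    }
```

The heuristic undercounts code-dense files (many tokens per "word"), which is acceptable for ranking components relative to each other.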
|
||||
|
||||
### Phase 2: Classify
|
||||
|
||||
Sort every component into a bucket:
|
||||
|
||||
| Bucket | Criteria | Action |
|
||||
|--------|----------|--------|
|
||||
| **Always needed** | Referenced in CLAUDE.md, backs an active command, or matches current project type | Keep |
|
||||
| **Sometimes needed** | Domain-specific (e.g. language patterns), not referenced in CLAUDE.md | Consider on-demand activation |
|
||||
| **Rarely needed** | No command reference, overlapping content, or no obvious project match | Remove or lazy-load |

### Phase 3: Detect Issues

Identify the following problem patterns:

- **Bloated agent descriptions** — description >30 words in frontmatter loads into every Task tool invocation
- **Heavy agents** — files >200 lines inflate Task tool context on every spawn
- **Redundant components** — skills that duplicate agent logic, rules that duplicate CLAUDE.md
- **MCP over-subscription** — >10 servers, or servers wrapping CLI tools available for free
- **CLAUDE.md bloat** — verbose explanations, outdated sections, instructions that should be rules

### Phase 4: Report

Produce the context budget report:

```
Context Budget Report
═══════════════════════════════════════

Total estimated overhead: ~XX,XXX tokens
Context model: Claude Sonnet (200K window)
Effective available context: ~XXX,XXX tokens (XX%)

Component Breakdown:
┌─────────────────┬────────┬───────────┐
│ Component       │ Count  │ Tokens    │
├─────────────────┼────────┼───────────┤
│ Agents          │ N      │ ~X,XXX    │
│ Skills          │ N      │ ~X,XXX    │
│ Rules           │ N      │ ~X,XXX    │
│ MCP tools       │ N      │ ~XX,XXX   │
│ CLAUDE.md       │ N      │ ~X,XXX    │
└─────────────────┴────────┴───────────┘

⚠ Issues Found (N):
[ranked by token savings]

Top 3 Optimizations:
1. [action] → save ~X,XXX tokens
2. [action] → save ~X,XXX tokens
3. [action] → save ~X,XXX tokens

Potential savings: ~XX,XXX tokens (XX% of current overhead)
```

In verbose mode, additionally output per-file token counts, a line-by-line breakdown of the heaviest files, the specific redundant lines between overlapping components, and the MCP tool list with per-tool schema size estimates.

## Examples

**Basic audit**
```
User: /context-budget
Skill: Scans setup → 16 agents (12,400 tokens), 28 skills (6,200), 87 MCP tools (43,500), 2 CLAUDE.md (1,200)
Flags: 3 heavy agents, 14 MCP servers (3 CLI-replaceable)
Top saving: remove 3 MCP servers → -27,500 tokens (47% overhead reduction)
```

**Verbose mode**
```
User: /context-budget --verbose
Skill: Full report + per-file breakdown showing planner.md (213 lines, 1,840 tokens),
       MCP tool list with per-tool sizes, duplicated rule lines side by side
```

**Pre-expansion check**
```
User: I want to add 5 more MCP servers, do I have room?
Skill: Current overhead 33% → adding 5 servers (~50 tools) would add ~25,000 tokens → pushes to 45% overhead
Recommendation: remove 2 CLI-replaceable servers first to stay under 40%
```

## Best Practices

- **Token estimation**: use `words × 1.3` for prose, `chars / 4` for code-heavy files
- **MCP is the biggest lever**: each tool schema costs ~500 tokens; a 30-tool server costs more than all your skills combined
- **Agent descriptions are always loaded**: even if the agent is never invoked, its description field is present in every Task tool context
- **Verbose mode for debugging**: use when you need to pinpoint the exact files driving overhead, not for regular audits
- **Audit after changes**: run after adding any agent, skill, or MCP server to catch creep early
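The estimation heuristics above (words × 1.3 for prose, chars / 4 for code, ~500 tokens per MCP tool schema) can be sketched as a rough helper; the function names are illustrative, not part of the skill:

```python
def estimate_tokens(text: str, code_heavy: bool = False) -> int:
    """Rough token estimate: chars / 4 for code-heavy files, words x 1.3 for prose."""
    if code_heavy:
        return round(len(text) / 4)
    return round(len(text.split()) * 1.3)

def mcp_overhead(tool_count: int, tokens_per_tool: int = 500) -> int:
    """Schema overhead for MCP servers at ~500 tokens per tool."""
    return tool_count * tokens_per_tool

print(mcp_overhead(87))  # 43500 -- the 87-tool figure from the basic audit example
print(estimate_tokens("Flag servers that wrap simple CLI commands"))
```

These are deliberately crude heuristics for budgeting, not a real tokenizer; use them to rank components, not to report exact counts.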
@@ -114,7 +114,9 @@ PROMPT
 fi
 
 # Prevent observe.sh from recording this automated Haiku session as observations
-ECC_SKIP_OBSERVE=1 ECC_HOOK_PROFILE=minimal claude --model haiku --max-turns "$max_turns" --print < "$prompt_file" >> "$LOG_FILE" 2>&1 &
+ECC_SKIP_OBSERVE=1 ECC_HOOK_PROFILE=minimal claude --model haiku --max-turns "$max_turns" --print \
+  --allowedTools "Read,Write" \
+  < "$prompt_file" >> "$LOG_FILE" 2>&1 &
 claude_pid=$!
 
 (

@@ -97,8 +97,11 @@ fi
 # - automated sessions creating project-scoped homunculus metadata
 
 # Layer 1: entrypoint. Only interactive terminal sessions should continue.
+# sdk-ts: Agent SDK sessions can be human-interactive (e.g. via Happy).
+# Non-interactive SDK automation is still filtered by Layers 2-5 below
+# (ECC_HOOK_PROFILE=minimal, ECC_SKIP_OBSERVE=1, agent_id, path exclusions).
 case "${CLAUDE_CODE_ENTRYPOINT:-cli}" in
-  cli) ;;
+  cli|sdk-ts) ;;
   *) exit 0 ;;
 esac
 

@@ -85,7 +85,7 @@ _clv2_detect_project() {
   # fall back to path hash (machine-specific but still useful)
   local remote_url=""
   if command -v git &>/dev/null; then
-    if [ "$source_hint" = "git" ] || [ -d "${project_root}/.git" ]; then
+    if [ "$source_hint" = "git" ] || [ -e "${project_root}/.git" ]; then
       remote_url=$(git -C "$project_root" remote get-url origin 2>/dev/null || true)
     fi
   fi

396
skills/pytorch-patterns/SKILL.md
Normal file
@@ -0,0 +1,396 @@
---
name: pytorch-patterns
description: PyTorch deep learning patterns and best practices for building robust, efficient, and reproducible training pipelines, model architectures, and data loading.
origin: ECC
---

# PyTorch Development Patterns

Idiomatic PyTorch patterns and best practices for building robust, efficient, and reproducible deep learning applications.

## When to Activate

- Writing new PyTorch models or training scripts
- Reviewing deep learning code
- Debugging training loops or data pipelines
- Optimizing GPU memory usage or training speed
- Setting up reproducible experiments

## Core Principles

### 1. Device-Agnostic Code

Always write code that works on both CPU and GPU without hardcoding devices.

```python
# Good: Device-agnostic
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = MyModel().to(device)
data = data.to(device)

# Bad: Hardcoded device
model = MyModel().cuda()  # Crashes if no GPU
data = data.cuda()
```

### 2. Reproducibility First

Set all random seeds for reproducible results.

```python
# Good: Full reproducibility setup
def set_seed(seed: int = 42) -> None:
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    np.random.seed(seed)
    random.seed(seed)
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False

# Bad: No seed control
model = MyModel()  # Different weights every run
```

### 3. Explicit Shape Management

Always document and verify tensor shapes.

```python
# Good: Shape-annotated forward pass
def forward(self, x: torch.Tensor) -> torch.Tensor:
    # x: (batch_size, channels, height, width)
    x = self.conv1(x)          # -> (batch_size, 32, H, W)
    x = self.pool(x)           # -> (batch_size, 32, H//2, W//2)
    x = x.view(x.size(0), -1)  # -> (batch_size, 32*H//2*W//2)
    return self.fc(x)          # -> (batch_size, num_classes)

# Bad: No shape tracking
def forward(self, x):
    x = self.conv1(x)
    x = self.pool(x)
    x = x.view(x.size(0), -1)  # What size is this?
    return self.fc(x)          # Will this even work?
```

## Model Architecture Patterns

### Clean nn.Module Structure

```python
# Good: Well-organized module
class ImageClassifier(nn.Module):
    def __init__(self, num_classes: int, dropout: float = 0.5) -> None:
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=3, padding=1),
            nn.BatchNorm2d(64),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Dropout(dropout),
            nn.Linear(64 * 16 * 16, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        x = x.view(x.size(0), -1)
        return self.classifier(x)

# Bad: Everything in forward
class ImageClassifier(nn.Module):
    def __init__(self):
        super().__init__()

    def forward(self, x):
        x = F.conv2d(x, weight=self.make_weight())  # Creates weight each call!
        return x
```

### Proper Weight Initialization

```python
# Good: Explicit initialization
def _init_weights(self, module: nn.Module) -> None:
    if isinstance(module, nn.Linear):
        nn.init.kaiming_normal_(module.weight, mode="fan_out", nonlinearity="relu")
        if module.bias is not None:
            nn.init.zeros_(module.bias)
    elif isinstance(module, nn.Conv2d):
        nn.init.kaiming_normal_(module.weight, mode="fan_out", nonlinearity="relu")
    elif isinstance(module, nn.BatchNorm2d):
        nn.init.ones_(module.weight)
        nn.init.zeros_(module.bias)

model = MyModel()
model.apply(model._init_weights)
```

## Training Loop Patterns

### Standard Training Loop

```python
# Good: Complete training loop with best practices
def train_one_epoch(
    model: nn.Module,
    dataloader: DataLoader,
    optimizer: torch.optim.Optimizer,
    criterion: nn.Module,
    device: torch.device,
    scaler: torch.amp.GradScaler | None = None,
) -> float:
    model.train()  # Always set train mode
    total_loss = 0.0

    for batch_idx, (data, target) in enumerate(dataloader):
        data, target = data.to(device), target.to(device)

        optimizer.zero_grad(set_to_none=True)  # More efficient than zero_grad()

        # Mixed precision training
        with torch.amp.autocast("cuda", enabled=scaler is not None):
            output = model(data)
            loss = criterion(output, target)

        if scaler is not None:
            scaler.scale(loss).backward()
            scaler.unscale_(optimizer)
            torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
            scaler.step(optimizer)
            scaler.update()
        else:
            loss.backward()
            torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
            optimizer.step()

        total_loss += loss.item()

    return total_loss / len(dataloader)
```

### Validation Loop

```python
# Good: Proper evaluation
@torch.no_grad()  # Cleaner than wrapping the whole body in a torch.no_grad() block
def evaluate(
    model: nn.Module,
    dataloader: DataLoader,
    criterion: nn.Module,
    device: torch.device,
) -> tuple[float, float]:
    model.eval()  # Always set eval mode — disables dropout, uses running BN stats
    total_loss = 0.0
    correct = 0
    total = 0

    for data, target in dataloader:
        data, target = data.to(device), target.to(device)
        output = model(data)
        total_loss += criterion(output, target).item()
        correct += (output.argmax(1) == target).sum().item()
        total += target.size(0)

    return total_loss / len(dataloader), correct / total
```

## Data Pipeline Patterns

### Custom Dataset

```python
# Good: Clean Dataset with type hints
class ImageDataset(Dataset):
    def __init__(
        self,
        image_dir: str,
        labels: dict[str, int],
        transform: transforms.Compose | None = None,
    ) -> None:
        self.image_paths = list(Path(image_dir).glob("*.jpg"))
        self.labels = labels
        self.transform = transform

    def __len__(self) -> int:
        return len(self.image_paths)

    def __getitem__(self, idx: int) -> tuple[torch.Tensor, int]:
        img = Image.open(self.image_paths[idx]).convert("RGB")
        label = self.labels[self.image_paths[idx].stem]

        if self.transform:
            img = self.transform(img)

        return img, label
```

### Efficient DataLoader Configuration

```python
# Good: Optimized DataLoader
dataloader = DataLoader(
    dataset,
    batch_size=32,
    shuffle=True,             # Shuffle for training
    num_workers=4,            # Parallel data loading
    pin_memory=True,          # Faster CPU->GPU transfer
    persistent_workers=True,  # Keep workers alive between epochs
    drop_last=True,           # Consistent batch sizes for BatchNorm
)

# Bad: Slow defaults
dataloader = DataLoader(dataset, batch_size=32)  # num_workers=0, no pin_memory
```

### Custom Collate for Variable-Length Data

```python
# Good: Pad sequences in collate_fn
def collate_fn(batch: list[tuple[torch.Tensor, int]]) -> tuple[torch.Tensor, torch.Tensor]:
    sequences, labels = zip(*batch)
    # Pad to max length in batch
    padded = nn.utils.rnn.pad_sequence(sequences, batch_first=True, padding_value=0)
    return padded, torch.tensor(labels)

dataloader = DataLoader(dataset, batch_size=32, collate_fn=collate_fn)
```

## Checkpointing Patterns

### Save and Load Checkpoints

```python
# Good: Complete checkpoint with all training state
def save_checkpoint(
    model: nn.Module,
    optimizer: torch.optim.Optimizer,
    epoch: int,
    loss: float,
    path: str,
) -> None:
    torch.save({
        "epoch": epoch,
        "model_state_dict": model.state_dict(),
        "optimizer_state_dict": optimizer.state_dict(),
        "loss": loss,
    }, path)

def load_checkpoint(
    path: str,
    model: nn.Module,
    optimizer: torch.optim.Optimizer | None = None,
) -> dict:
    checkpoint = torch.load(path, map_location="cpu", weights_only=True)
    model.load_state_dict(checkpoint["model_state_dict"])
    if optimizer:
        optimizer.load_state_dict(checkpoint["optimizer_state_dict"])
    return checkpoint

# Bad: Only saving model weights (can't resume training)
torch.save(model.state_dict(), "model.pt")
```

## Performance Optimization

### Mixed Precision Training

```python
# Good: AMP with GradScaler
scaler = torch.amp.GradScaler("cuda")
for data, target in dataloader:
    with torch.amp.autocast("cuda"):
        output = model(data)
        loss = criterion(output, target)
    scaler.scale(loss).backward()
    scaler.step(optimizer)
    scaler.update()
    optimizer.zero_grad(set_to_none=True)
```

### Gradient Checkpointing for Large Models

```python
# Good: Trade compute for memory
from torch.utils.checkpoint import checkpoint

class LargeModel(nn.Module):
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Recompute activations during backward to save memory
        x = checkpoint(self.block1, x, use_reentrant=False)
        x = checkpoint(self.block2, x, use_reentrant=False)
        return self.head(x)
```

### torch.compile for Speed

```python
# Good: Compile the model for faster execution (PyTorch 2.0+)
model = MyModel().to(device)
model = torch.compile(model, mode="reduce-overhead")

# Modes: "default" (safe), "reduce-overhead" (faster), "max-autotune" (fastest)
```

## Quick Reference: PyTorch Idioms

| Idiom | Description |
|-------|-------------|
| `model.train()` / `model.eval()` | Always set mode before train/eval |
| `torch.no_grad()` | Disable gradients for inference |
| `optimizer.zero_grad(set_to_none=True)` | More efficient gradient clearing |
| `.to(device)` | Device-agnostic tensor/model placement |
| `torch.amp.autocast` | Mixed precision for 2x speed |
| `pin_memory=True` | Faster CPU→GPU data transfer |
| `torch.compile` | JIT compilation for speed (2.0+) |
| `weights_only=True` | Secure model loading |
| `torch.manual_seed` | Reproducible experiments |
| `gradient_checkpointing` | Trade compute for memory |

## Anti-Patterns to Avoid

```python
# Bad: Forgetting model.eval() during validation
model.train()
with torch.no_grad():
    output = model(val_data)  # Dropout still active! BatchNorm uses batch stats!

# Good: Always set eval mode
model.eval()
with torch.no_grad():
    output = model(val_data)

# Bad: In-place operations breaking autograd
x = F.relu(x, inplace=True)  # Can break gradient computation
x += residual                # In-place add can break the autograd graph

# Good: Out-of-place operations
x = F.relu(x)
x = x + residual

# Bad: Moving data to GPU inside the training loop repeatedly
for data, target in dataloader:
    model = model.cuda()  # Moves model EVERY iteration!

# Good: Move model once before the loop
model = model.to(device)
for data, target in dataloader:
    data, target = data.to(device), target.to(device)

# Bad: Using .item() before backward
loss = criterion(output, target).item()  # Detaches from graph!
loss.backward()  # Error: can't backprop through .item()

# Good: Call .item() only for logging
loss = criterion(output, target)
loss.backward()
print(f"Loss: {loss.item():.4f}")  # .item() after backward is fine

# Bad: Not using torch.save properly
torch.save(model, "model.pt")  # Saves entire model (fragile, not portable)

# Good: Save state_dict
torch.save(model.state_dict(), "model.pt")
```

__Remember__: PyTorch code should be device-agnostic, reproducible, and memory-conscious. When in doubt, profile with `torch.profiler` and check GPU memory with `torch.cuda.memory_summary()`.
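As a minimal sketch of the profiling advice above (assuming PyTorch 2.x; the tiny linear model is illustrative only):

```python
import torch
from torch.profiler import profile, ProfilerActivity

model = torch.nn.Linear(128, 64)
x = torch.randn(32, 128)

# Profile a single forward pass on CPU; on GPU, also pass ProfilerActivity.CUDA
# and inspect torch.cuda.memory_summary() afterwards
with profile(activities=[ProfilerActivity.CPU], record_shapes=True) as prof:
    model(x)

print(prof.key_averages().table(sort_by="cpu_time_total", row_limit=5))
```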
@@ -43,7 +43,6 @@ fn process_bad(data: &Vec<u8>) -> usize {
 }
 ```
 
-
 ### Use `Cow` for Flexible Ownership
 
 ```rust

@@ -52,6 +52,34 @@ function writeInstallComponentsManifest(testDir, components) {
   });
 }
+
+/**
+ * Run modified source via a temp file (avoids Windows node -e shebang issues).
+ * The temp file is written inside the repo so require() can resolve node_modules.
+ * @param {string} source - JavaScript source to execute
+ * @returns {{code: number, stdout: string, stderr: string}}
+ */
+function runSourceViaTempFile(source) {
+  const tmpFile = path.join(repoRoot, `.tmp-validator-${Date.now()}-${Math.random().toString(36).slice(2)}.js`);
+  try {
+    fs.writeFileSync(tmpFile, source, 'utf8');
+    const stdout = execFileSync('node', [tmpFile], {
+      encoding: 'utf8',
+      stdio: ['pipe', 'pipe', 'pipe'],
+      timeout: 10000,
+      cwd: repoRoot,
+    });
+    return { code: 0, stdout, stderr: '' };
+  } catch (err) {
+    return {
+      code: err.status || 1,
+      stdout: err.stdout || '',
+      stderr: err.stderr || '',
+    };
+  } finally {
+    try { fs.unlinkSync(tmpFile); } catch (_) { /* ignore cleanup errors */ }
+  }
+}
 
 /**
  * Run a validator script via a wrapper that overrides its directory constant.
  * This allows testing error cases without modifying real project files.
@@ -67,27 +95,14 @@ function runValidatorWithDir(validatorName, dirConstant, overridePath) {
   // Read the validator source, replace the directory constant, and run as a wrapper
   let source = fs.readFileSync(validatorPath, 'utf8');
 
-  // Remove the shebang line
+  // Remove the shebang line (Windows node cannot parse shebangs in eval/inline mode)
   source = source.replace(/^#!.*\n/, '');
 
   // Replace the directory constant with our override path
   const dirRegex = new RegExp(`const ${dirConstant} = .*?;`);
   source = source.replace(dirRegex, `const ${dirConstant} = ${JSON.stringify(overridePath)};`);
 
-  try {
-    const stdout = execFileSync('node', ['-e', source], {
-      encoding: 'utf8',
-      stdio: ['pipe', 'pipe', 'pipe'],
-      timeout: 10000,
-    });
-    return { code: 0, stdout, stderr: '' };
-  } catch (err) {
-    return {
-      code: err.status || 1,
-      stdout: err.stdout || '',
-      stderr: err.stderr || '',
-    };
-  }
+  return runSourceViaTempFile(source);
 }
 
 /**
@@ -103,20 +118,7 @@ function runValidatorWithDirs(validatorName, overrides) {
     const dirRegex = new RegExp(`const ${constant} = .*?;`);
     source = source.replace(dirRegex, `const ${constant} = ${JSON.stringify(overridePath)};`);
   }
-  try {
-    const stdout = execFileSync('node', ['-e', source], {
-      encoding: 'utf8',
-      stdio: ['pipe', 'pipe', 'pipe'],
-      timeout: 10000,
-    });
-    return { code: 0, stdout, stderr: '' };
-  } catch (err) {
-    return {
-      code: err.status || 1,
-      stdout: err.stdout || '',
-      stderr: err.stderr || '',
-    };
-  }
+  return runSourceViaTempFile(source);
 }
 
 /**
@@ -158,20 +160,7 @@ function runCatalogValidator(overrides = {}) {
     source = source.replace(dirRegex, `const ${constant} = ${JSON.stringify(overridePath)};`);
   }
 
-  try {
-    const stdout = execFileSync('node', ['-e', source], {
-      encoding: 'utf8',
-      stdio: ['pipe', 'pipe', 'pipe'],
-      timeout: 10000,
-    });
-    return { code: 0, stdout, stderr: '' };
-  } catch (err) {
-    return {
-      code: err.status || 1,
-      stdout: err.stdout || '',
-      stderr: err.stderr || '',
-    };
-  }
+  return runSourceViaTempFile(source);
 }
 
 function writeCatalogFixture(testDir, options = {}) {

145
tests/hooks/auto-tmux-dev.test.js
Normal file
@@ -0,0 +1,145 @@
/**
 * Tests for scripts/hooks/auto-tmux-dev.js
 *
 * Tests dev server command transformation for tmux wrapping.
 *
 * Run with: node tests/hooks/auto-tmux-dev.test.js
 */

const assert = require('assert');
const path = require('path');
const { spawnSync } = require('child_process');

const script = path.join(__dirname, '..', '..', 'scripts', 'hooks', 'auto-tmux-dev.js');

function test(name, fn) {
  try {
    fn();
    console.log(`  \u2713 ${name}`);
    return true;
  } catch (err) {
    console.log(`  \u2717 ${name}`);
    console.log(`    Error: ${err.message}`);
    return false;
  }
}

function runScript(input) {
  const result = spawnSync('node', [script], {
    encoding: 'utf8',
    input: typeof input === 'string' ? input : JSON.stringify(input),
    timeout: 10000,
  });
  return {
    code: result.status || 0,
    stdout: result.stdout || '',
    stderr: result.stderr || '',
  };
}

function runTests() {
  console.log('\n=== Testing auto-tmux-dev.js ===\n');

  let passed = 0;
  let failed = 0;

  // Check if tmux is available for conditional tests
  const tmuxAvailable = spawnSync('which', ['tmux'], { encoding: 'utf8' }).status === 0;

  console.log('Dev server detection:');

  if (test('transforms npm run dev command', () => {
    const result = runScript({ tool_input: { command: 'npm run dev' } });
    assert.strictEqual(result.code, 0);
    const output = JSON.parse(result.stdout);
    if (process.platform !== 'win32' && tmuxAvailable) {
      assert.ok(output.tool_input.command.includes('tmux'), 'Should contain tmux');
      assert.ok(output.tool_input.command.includes('npm run dev'), 'Should contain original command');
    }
  })) passed++; else failed++;

  if (test('transforms pnpm dev command', () => {
    const result = runScript({ tool_input: { command: 'pnpm dev' } });
    assert.strictEqual(result.code, 0);
    const output = JSON.parse(result.stdout);
    if (process.platform !== 'win32' && tmuxAvailable) {
      assert.ok(output.tool_input.command.includes('tmux'));
    }
  })) passed++; else failed++;

  if (test('transforms yarn dev command', () => {
    const result = runScript({ tool_input: { command: 'yarn dev' } });
    assert.strictEqual(result.code, 0);
    const output = JSON.parse(result.stdout);
    if (process.platform !== 'win32' && tmuxAvailable) {
      assert.ok(output.tool_input.command.includes('tmux'));
    }
  })) passed++; else failed++;

  if (test('transforms bun run dev command', () => {
    const result = runScript({ tool_input: { command: 'bun run dev' } });
    assert.strictEqual(result.code, 0);
    const output = JSON.parse(result.stdout);
    if (process.platform !== 'win32' && tmuxAvailable) {
      assert.ok(output.tool_input.command.includes('tmux'));
    }
  })) passed++; else failed++;

  console.log('\nNon-dev commands (pass-through):');

  if (test('does not transform npm install', () => {
    const input = { tool_input: { command: 'npm install' } };
    const result = runScript(input);
    assert.strictEqual(result.code, 0);
    const output = JSON.parse(result.stdout);
    assert.strictEqual(output.tool_input.command, 'npm install');
  })) passed++; else failed++;

  if (test('does not transform npm test', () => {
    const input = { tool_input: { command: 'npm test' } };
    const result = runScript(input);
    assert.strictEqual(result.code, 0);
    const output = JSON.parse(result.stdout);
    assert.strictEqual(output.tool_input.command, 'npm test');
  })) passed++; else failed++;

  if (test('does not transform npm run build', () => {
    const input = { tool_input: { command: 'npm run build' } };
    const result = runScript(input);
    assert.strictEqual(result.code, 0);
    const output = JSON.parse(result.stdout);
    assert.strictEqual(output.tool_input.command, 'npm run build');
  })) passed++; else failed++;

  if (test('does not transform npm run develop (partial match)', () => {
    const input = { tool_input: { command: 'npm run develop' } };
    const result = runScript(input);
    assert.strictEqual(result.code, 0);
    const output = JSON.parse(result.stdout);
    assert.strictEqual(output.tool_input.command, 'npm run develop');
  })) passed++; else failed++;

  console.log('\nEdge cases:');

  if (test('handles empty input gracefully', () => {
    const result = runScript('{}');
    assert.strictEqual(result.code, 0);
  })) passed++; else failed++;

  if (test('handles invalid JSON gracefully', () => {
    const result = runScript('not json');
    assert.strictEqual(result.code, 0);
    assert.strictEqual(result.stdout, 'not json');
  })) passed++; else failed++;

  if (test('passes through missing command field', () => {
    const input = { tool_input: {} };
    const result = runScript(input);
    assert.strictEqual(result.code, 0);
  })) passed++; else failed++;

  console.log(`\nResults: Passed: ${passed}, Failed: ${failed}`);
  process.exit(failed > 0 ? 1 : 0);
}

runTests();
108
tests/hooks/check-hook-enabled.test.js
Normal file
@@ -0,0 +1,108 @@
/**
 * Tests for scripts/hooks/check-hook-enabled.js
 *
 * Tests the CLI wrapper around isHookEnabled.
 *
 * Run with: node tests/hooks/check-hook-enabled.test.js
 */

const assert = require('assert');
const path = require('path');
const { spawnSync } = require('child_process');

const script = path.join(__dirname, '..', '..', 'scripts', 'hooks', 'check-hook-enabled.js');

function test(name, fn) {
  try {
    fn();
    console.log(`  \u2713 ${name}`);
    return true;
  } catch (err) {
    console.log(`  \u2717 ${name}`);
    console.log(`    Error: ${err.message}`);
    return false;
  }
}

function runScript(args = [], envOverrides = {}) {
  const env = { ...process.env, ...envOverrides };
  // Remove potentially interfering env vars unless explicitly set
  if (!envOverrides.ECC_HOOK_PROFILE) delete env.ECC_HOOK_PROFILE;
  if (!envOverrides.ECC_DISABLED_HOOKS) delete env.ECC_DISABLED_HOOKS;

  const result = spawnSync('node', [script, ...args], {
    encoding: 'utf8',
    timeout: 10000,
    env,
  });
  return {
    code: result.status || 0,
    stdout: result.stdout || '',
    stderr: result.stderr || '',
  };
}

function runTests() {
  console.log('\n=== Testing check-hook-enabled.js ===\n');

  let passed = 0;
  let failed = 0;

  console.log('No arguments:');

  if (test('returns yes when no hookId provided', () => {
    const result = runScript([]);
    assert.strictEqual(result.stdout, 'yes');
  })) passed++; else failed++;

  console.log('\nDefault profile (standard):');

  if (test('returns yes for hook with default profiles', () => {
    const result = runScript(['my-hook']);
    assert.strictEqual(result.stdout, 'yes');
  })) passed++; else failed++;

  if (test('returns yes for hook with standard,strict profiles', () => {
    const result = runScript(['my-hook', 'standard,strict']);
    assert.strictEqual(result.stdout, 'yes');
  })) passed++; else failed++;

  if (test('returns no for hook with only strict profile', () => {
    const result = runScript(['my-hook', 'strict']);
    assert.strictEqual(result.stdout, 'no');
  })) passed++; else failed++;

  if (test('returns no for hook with only minimal profile', () => {
    const result = runScript(['my-hook', 'minimal']);
    assert.strictEqual(result.stdout, 'no');
  })) passed++; else failed++;

  console.log('\nDisabled hooks:');

  if (test('returns no when hook is disabled via env', () => {
    const result = runScript(['my-hook'], { ECC_DISABLED_HOOKS: 'my-hook' });
    assert.strictEqual(result.stdout, 'no');
  })) passed++; else failed++;

  if (test('returns yes when different hook is disabled', () => {
    const result = runScript(['my-hook'], { ECC_DISABLED_HOOKS: 'other-hook' });
    assert.strictEqual(result.stdout, 'yes');
  })) passed++; else failed++;

  console.log('\nProfile overrides:');

  if (test('returns yes for strict profile with strict-only hook', () => {
    const result = runScript(['my-hook', 'strict'], { ECC_HOOK_PROFILE: 'strict' });
    assert.strictEqual(result.stdout, 'yes');
  })) passed++; else failed++;

  if (test('returns yes for minimal profile with minimal-only hook', () => {
    const result = runScript(['my-hook', 'minimal'], { ECC_HOOK_PROFILE: 'minimal' });
    assert.strictEqual(result.stdout, 'yes');
  })) passed++; else failed++;

  console.log(`\nResults: Passed: ${passed}, Failed: ${failed}`);
  process.exit(failed > 0 ? 1 : 0);
}

runTests();
131
tests/hooks/cost-tracker.test.js
Normal file
@@ -0,0 +1,131 @@
/**
 * Tests for cost-tracker.js hook
 *
 * Run with: node tests/hooks/cost-tracker.test.js
 */

const assert = require('assert');
const path = require('path');
const fs = require('fs');
const os = require('os');
const { spawnSync } = require('child_process');

const script = path.join(__dirname, '..', '..', 'scripts', 'hooks', 'cost-tracker.js');

function test(name, fn) {
  try {
    fn();
    console.log(`  ✓ ${name}`);
    return true;
  } catch (err) {
    console.log(`  ✗ ${name}`);
    console.log(`    Error: ${err.message}`);
    return false;
  }
}

function makeTempDir() {
  return fs.mkdtempSync(path.join(os.tmpdir(), 'cost-tracker-test-'));
}

function runScript(input, envOverrides = {}) {
  const inputStr = typeof input === 'string' ? input : JSON.stringify(input);
  const result = spawnSync('node', [script], {
    encoding: 'utf8',
    input: inputStr,
    timeout: 10000,
    env: { ...process.env, ...envOverrides },
  });
  return { code: result.status || 0, stdout: result.stdout || '', stderr: result.stderr || '' };
}

function runTests() {
  console.log('\n=== Testing cost-tracker.js ===\n');

  let passed = 0;
  let failed = 0;

  // 1. Passes through input on stdout
  (test('passes through input on stdout', () => {
    const input = {
      model: 'claude-sonnet-4-20250514',
      usage: { input_tokens: 100, output_tokens: 50 },
    };
    const inputStr = JSON.stringify(input);
    const result = runScript(input);
    assert.strictEqual(result.code, 0, `Expected exit code 0, got ${result.code}`);
    assert.strictEqual(result.stdout, inputStr, 'Expected stdout to match original input');
  }) ? passed++ : failed++);

  // 2. Creates metrics file when given valid usage data
  (test('creates metrics file when given valid usage data', () => {
    const tmpHome = makeTempDir();
    const input = {
      model: 'claude-sonnet-4-20250514',
      usage: { input_tokens: 1000, output_tokens: 500 },
    };
    const result = runScript(input, { HOME: tmpHome });
    assert.strictEqual(result.code, 0, `Expected exit code 0, got ${result.code}`);

    const metricsFile = path.join(tmpHome, '.claude', 'metrics', 'costs.jsonl');
    assert.ok(fs.existsSync(metricsFile), `Expected metrics file to exist at ${metricsFile}`);

    const content = fs.readFileSync(metricsFile, 'utf8').trim();
    const row = JSON.parse(content);
    assert.strictEqual(row.input_tokens, 1000, 'Expected input_tokens to be 1000');
    assert.strictEqual(row.output_tokens, 500, 'Expected output_tokens to be 500');
    assert.ok(row.timestamp, 'Expected timestamp to be present');
    assert.ok(typeof row.estimated_cost_usd === 'number', 'Expected estimated_cost_usd to be a number');
    assert.ok(row.estimated_cost_usd > 0, 'Expected estimated_cost_usd to be positive');

    fs.rmSync(tmpHome, { recursive: true, force: true });
  }) ? passed++ : failed++);

  // 3. Handles empty input gracefully
  (test('handles empty input gracefully', () => {
    const tmpHome = makeTempDir();
    const result = runScript('', { HOME: tmpHome });
    assert.strictEqual(result.code, 0, `Expected exit code 0, got ${result.code}`);
    // stdout should be empty since input was empty
    assert.strictEqual(result.stdout, '', 'Expected empty stdout for empty input');

    fs.rmSync(tmpHome, { recursive: true, force: true });
  }) ? passed++ : failed++);

  // 4. Handles invalid JSON gracefully
  (test('handles invalid JSON gracefully', () => {
    const tmpHome = makeTempDir();
    const invalidInput = 'not valid json {{{';
    const result = runScript(invalidInput, { HOME: tmpHome });
    assert.strictEqual(result.code, 0, `Expected exit code 0, got ${result.code}`);
    // Should still pass through the raw input on stdout
    assert.strictEqual(result.stdout, invalidInput, 'Expected stdout to contain original invalid input');

    fs.rmSync(tmpHome, { recursive: true, force: true });
  }) ? passed++ : failed++);

  // 5. Handles missing usage fields gracefully
  (test('handles missing usage fields gracefully', () => {
    const tmpHome = makeTempDir();
    const input = { model: 'claude-sonnet-4-20250514' };
    const inputStr = JSON.stringify(input);
    const result = runScript(input, { HOME: tmpHome });
    assert.strictEqual(result.code, 0, `Expected exit code 0, got ${result.code}`);
    assert.strictEqual(result.stdout, inputStr, 'Expected stdout to match original input');

    const metricsFile = path.join(tmpHome, '.claude', 'metrics', 'costs.jsonl');
    assert.ok(fs.existsSync(metricsFile), 'Expected metrics file to exist even with missing usage');

    const row = JSON.parse(fs.readFileSync(metricsFile, 'utf8').trim());
    assert.strictEqual(row.input_tokens, 0, 'Expected input_tokens to be 0 when missing');
    assert.strictEqual(row.output_tokens, 0, 'Expected output_tokens to be 0 when missing');
    assert.strictEqual(row.estimated_cost_usd, 0, 'Expected estimated_cost_usd to be 0 when no tokens');

    fs.rmSync(tmpHome, { recursive: true, force: true });
  }) ? passed++ : failed++);

  console.log(`\nResults: Passed: ${passed}, Failed: ${failed}`);
  process.exit(failed > 0 ? 1 : 0);
}

runTests();
tests/hooks/detect-project-worktree.test.js (new file, 239 lines)
@@ -0,0 +1,239 @@
/**
 * Tests for worktree project-ID mismatch fix
 *
 * Validates that detect-project.sh uses -e (not -d) for .git existence
 * checks, so that git worktrees (where .git is a file) are detected
 * correctly.
 *
 * Run with: node tests/hooks/detect-project-worktree.test.js
 */

const assert = require('assert');
const path = require('path');
const fs = require('fs');
const os = require('os');
const { execSync } = require('child_process');

let passed = 0;
let failed = 0;

function test(name, fn) {
  try {
    fn();
    console.log(`  \u2713 ${name}`);
    passed++;
  } catch (err) {
    console.log(`  \u2717 ${name}`);
    console.log(`    Error: ${err.message}`);
    failed++;
  }
}

function createTempDir() {
  return fs.mkdtempSync(path.join(os.tmpdir(), 'ecc-worktree-test-'));
}

function cleanupDir(dir) {
  try {
    fs.rmSync(dir, { recursive: true, force: true });
  } catch {
    // ignore cleanup errors
  }
}

const repoRoot = path.resolve(__dirname, '..', '..');
const detectProjectPath = path.join(
  repoRoot,
  'skills',
  'continuous-learning-v2',
  'scripts',
  'detect-project.sh'
);

console.log('\n=== Worktree Project-ID Mismatch Tests ===\n');

// ──────────────────────────────────────────────────────
// Group 1: Content checks on detect-project.sh
// ──────────────────────────────────────────────────────

console.log('--- Content checks on detect-project.sh ---');

test('uses -e (not -d) for .git existence check', () => {
  const content = fs.readFileSync(detectProjectPath, 'utf8');
  assert.ok(
    content.includes('[ -e "${project_root}/.git" ]'),
    'detect-project.sh should use -e for .git check'
  );
  assert.ok(
    !content.includes('[ -d "${project_root}/.git" ]'),
    'detect-project.sh should NOT use -d for .git check'
  );
});

test('has command -v git fallback check', () => {
  const content = fs.readFileSync(detectProjectPath, 'utf8');
  assert.ok(
    content.includes('command -v git'),
    'detect-project.sh should check for git availability with command -v'
  );
});

test('uses git -C for safe directory operations', () => {
  const content = fs.readFileSync(detectProjectPath, 'utf8');
  assert.ok(
    content.includes('git -C'),
    'detect-project.sh should use git -C for directory-scoped operations'
  );
});

// ──────────────────────────────────────────────────────
// Group 2: Behavior test — -e vs -d
// ──────────────────────────────────────────────────────

console.log('\n--- Behavior test: -e vs -d ---');

const behaviorDir = createTempDir();

test('[ -d ] returns true for .git directory', () => {
  const dir = path.join(behaviorDir, 'test-d-dir');
  fs.mkdirSync(dir, { recursive: true });
  fs.mkdirSync(path.join(dir, '.git'));
  const result = execSync(`bash -c '[ -d "${dir}/.git" ] && echo yes || echo no'`).toString().trim();
  assert.strictEqual(result, 'yes');
});

test('[ -d ] returns false for .git file', () => {
  const dir = path.join(behaviorDir, 'test-d-file');
  fs.mkdirSync(dir, { recursive: true });
  fs.writeFileSync(path.join(dir, '.git'), 'gitdir: /some/path\n');
  const result = execSync(`bash -c '[ -d "${dir}/.git" ] && echo yes || echo no'`).toString().trim();
  assert.strictEqual(result, 'no');
});

test('[ -e ] returns true for .git directory', () => {
  const dir = path.join(behaviorDir, 'test-e-dir');
  fs.mkdirSync(dir, { recursive: true });
  fs.mkdirSync(path.join(dir, '.git'));
  const result = execSync(`bash -c '[ -e "${dir}/.git" ] && echo yes || echo no'`).toString().trim();
  assert.strictEqual(result, 'yes');
});

test('[ -e ] returns true for .git file', () => {
  const dir = path.join(behaviorDir, 'test-e-file');
  fs.mkdirSync(dir, { recursive: true });
  fs.writeFileSync(path.join(dir, '.git'), 'gitdir: /some/path\n');
  const result = execSync(`bash -c '[ -e "${dir}/.git" ] && echo yes || echo no'`).toString().trim();
  assert.strictEqual(result, 'yes');
});

test('[ -e ] returns false when .git does not exist', () => {
  const dir = path.join(behaviorDir, 'test-e-none');
  fs.mkdirSync(dir, { recursive: true });
  const result = execSync(`bash -c '[ -e "${dir}/.git" ] && echo yes || echo no'`).toString().trim();
  assert.strictEqual(result, 'no');
});

cleanupDir(behaviorDir);

// ──────────────────────────────────────────────────────
// Group 3: E2E test — detect-project.sh with worktree .git file
// ──────────────────────────────────────────────────────

console.log('\n--- E2E: detect-project.sh with worktree .git file ---');

test('detect-project.sh sets PROJECT_NAME and non-global PROJECT_ID for worktree', () => {
  const testDir = createTempDir();

  try {
    // Create a "main" repo with git init so we have real git structures
    const mainRepo = path.join(testDir, 'main-repo');
    fs.mkdirSync(mainRepo, { recursive: true });
    execSync('git init', { cwd: mainRepo, stdio: 'pipe' });
    execSync('git commit --allow-empty -m "init"', {
      cwd: mainRepo,
      stdio: 'pipe',
      env: {
        ...process.env,
        GIT_AUTHOR_NAME: 'Test',
        GIT_AUTHOR_EMAIL: 'test@test.com',
        GIT_COMMITTER_NAME: 'Test',
        GIT_COMMITTER_EMAIL: 'test@test.com'
      }
    });

    // Create a worktree-like directory with .git as a file
    const worktreeDir = path.join(testDir, 'my-worktree');
    fs.mkdirSync(worktreeDir, { recursive: true });

    // Set up the worktree directory structure in the main repo
    const worktreesDir = path.join(mainRepo, '.git', 'worktrees', 'my-worktree');
    fs.mkdirSync(worktreesDir, { recursive: true });

    // Create the gitdir file and commondir in the worktree metadata
    const mainGitDir = path.join(mainRepo, '.git');
    fs.writeFileSync(
      path.join(worktreesDir, 'commondir'),
      '../..\n'
    );
    fs.writeFileSync(
      path.join(worktreesDir, 'HEAD'),
      fs.readFileSync(path.join(mainGitDir, 'HEAD'), 'utf8')
    );

    // Write .git file in the worktree directory (this is what git worktree creates)
    fs.writeFileSync(
      path.join(worktreeDir, '.git'),
      `gitdir: ${worktreesDir}\n`
    );

    // Source detect-project.sh from the worktree directory and capture results
    const script = `
      export CLAUDE_PROJECT_DIR="${worktreeDir}"
      export HOME="${testDir}"
      source "${detectProjectPath}"
      echo "PROJECT_NAME=\${PROJECT_NAME}"
      echo "PROJECT_ID=\${PROJECT_ID}"
    `;

    const result = execSync(`bash -c '${script.replace(/'/g, "'\\''")}'`, {
      cwd: worktreeDir,
      timeout: 10000,
      env: {
        ...process.env,
        HOME: testDir,
        CLAUDE_PROJECT_DIR: worktreeDir
      }
    }).toString();

    const lines = result.trim().split('\n');
    const vars = {};
    for (const line of lines) {
      const match = line.match(/^(PROJECT_NAME|PROJECT_ID)=(.*)$/);
      if (match) {
        vars[match[1]] = match[2];
      }
    }

    assert.ok(
      vars.PROJECT_NAME && vars.PROJECT_NAME.length > 0,
      `PROJECT_NAME should be set, got: "${vars.PROJECT_NAME || ''}"`
    );
    assert.ok(
      vars.PROJECT_ID && vars.PROJECT_ID !== 'global',
      `PROJECT_ID should not be "global", got: "${vars.PROJECT_ID || ''}"`
    );
  } finally {
    cleanupDir(testDir);
  }
});

// ──────────────────────────────────────────────────────
// Summary
// ──────────────────────────────────────────────────────

console.log('\n=== Test Results ===');
console.log(`Passed: ${passed}`);
console.log(`Failed: ${failed}`);
console.log(`Total: ${passed + failed}\n`);

process.exit(failed > 0 ? 1 : 0);
tests/hooks/doc-file-warning.test.js (new file, 152 lines)
@@ -0,0 +1,152 @@
#!/usr/bin/env node
'use strict';

const assert = require('assert');
const path = require('path');
const { spawnSync } = require('child_process');

const script = path.join(__dirname, '..', '..', 'scripts', 'hooks', 'doc-file-warning.js');

function test(name, fn) {
  try {
    fn();
    console.log(`  ✓ ${name}`);
    return true;
  } catch (err) {
    console.log(`  ✗ ${name}`);
    console.log(`    Error: ${err.message}`);
    return false;
  }
}

function runScript(input) {
  const result = spawnSync('node', [script], {
    encoding: 'utf8',
    input: JSON.stringify(input),
    timeout: 10000,
  });
  return { code: result.status || 0, stdout: result.stdout || '', stderr: result.stderr || '' };
}

function runTests() {
  console.log('\n=== Testing doc-file-warning.js ===\n');
  let passed = 0;
  let failed = 0;

  // 1. Allowed standard doc files - no warning in stderr
  const standardFiles = [
    'README.md',
    'CLAUDE.md',
    'AGENTS.md',
    'CONTRIBUTING.md',
    'CHANGELOG.md',
    'LICENSE.md',
    'SKILL.md',
    'MEMORY.md',
    'WORKLOG.md',
  ];
  for (const file of standardFiles) {
    (test(`allows standard doc file: ${file}`, () => {
      const { code, stderr } = runScript({ tool_input: { file_path: file } });
      assert.strictEqual(code, 0, `expected exit code 0, got ${code}`);
      assert.strictEqual(stderr, '', `expected no warning for ${file}, got: ${stderr}`);
    }) ? passed++ : failed++);
  }

  // 2. Allowed directory paths - no warning
  const allowedDirPaths = [
    'docs/foo.md',
    'docs/guide/setup.md',
    'skills/bar.md',
    'skills/testing/tdd.md',
    '.history/session.md',
    'memory/patterns.md',
    '.claude/commands/deploy.md',
    '.claude/plans/roadmap.md',
    '.claude/projects/myproject.md',
  ];
  for (const file of allowedDirPaths) {
    (test(`allows directory path: ${file}`, () => {
      const { code, stderr } = runScript({ tool_input: { file_path: file } });
      assert.strictEqual(code, 0, `expected exit code 0, got ${code}`);
      assert.strictEqual(stderr, '', `expected no warning for ${file}, got: ${stderr}`);
    }) ? passed++ : failed++);
  }

  // 3. Allowed .plan.md files - no warning
  (test('allows .plan.md files', () => {
    const { code, stderr } = runScript({ tool_input: { file_path: 'feature.plan.md' } });
    assert.strictEqual(code, 0);
    assert.strictEqual(stderr, '', `expected no warning for .plan.md, got: ${stderr}`);
  }) ? passed++ : failed++);

  (test('allows nested .plan.md files', () => {
    const { code, stderr } = runScript({ tool_input: { file_path: 'src/refactor.plan.md' } });
    assert.strictEqual(code, 0);
    assert.strictEqual(stderr, '', `expected no warning for nested .plan.md, got: ${stderr}`);
  }) ? passed++ : failed++);

  // 4. Non-md/txt files always pass - no warning
  const nonDocFiles = ['foo.js', 'app.py', 'styles.css', 'data.json', 'image.png'];
  for (const file of nonDocFiles) {
    (test(`allows non-doc file: ${file}`, () => {
      const { code, stderr } = runScript({ tool_input: { file_path: file } });
      assert.strictEqual(code, 0);
      assert.strictEqual(stderr, '', `expected no warning for ${file}, got: ${stderr}`);
    }) ? passed++ : failed++);
  }

  // 5. Non-standard doc files - warning in stderr
  const nonStandardFiles = ['random-notes.md', 'TODO.md', 'notes.txt', 'scratch.md', 'ideas.txt'];
  for (const file of nonStandardFiles) {
    (test(`warns on non-standard doc file: ${file}`, () => {
      const { code, stderr } = runScript({ tool_input: { file_path: file } });
      assert.strictEqual(code, 0, 'should still exit 0 (warn only)');
      assert.ok(stderr.includes('WARNING'), `expected warning in stderr for ${file}, got: ${stderr}`);
      assert.ok(stderr.includes(file), `expected file path in stderr for ${file}`);
    }) ? passed++ : failed++);
  }

  // 6. Invalid/empty input - passes through without error
  (test('handles empty object input without error', () => {
    const { code, stderr } = runScript({});
    assert.strictEqual(code, 0);
    assert.strictEqual(stderr, '', `expected no warning for empty input, got: ${stderr}`);
  }) ? passed++ : failed++);

  (test('handles missing file_path without error', () => {
    const { code, stderr } = runScript({ tool_input: {} });
    assert.strictEqual(code, 0);
    assert.strictEqual(stderr, '', `expected no warning for missing file_path, got: ${stderr}`);
  }) ? passed++ : failed++);

  (test('handles empty file_path without error', () => {
    const { code, stderr } = runScript({ tool_input: { file_path: '' } });
    assert.strictEqual(code, 0);
    assert.strictEqual(stderr, '', `expected no warning for empty file_path, got: ${stderr}`);
  }) ? passed++ : failed++);

  // 7. Stdout always contains the original input (pass-through)
  (test('passes through input to stdout for allowed file', () => {
    const input = { tool_input: { file_path: 'README.md' } };
    const { stdout } = runScript(input);
    assert.strictEqual(stdout, JSON.stringify(input));
  }) ? passed++ : failed++);

  (test('passes through input to stdout for warned file', () => {
    const input = { tool_input: { file_path: 'random-notes.md' } };
    const { stdout } = runScript(input);
    assert.strictEqual(stdout, JSON.stringify(input));
  }) ? passed++ : failed++);

  (test('passes through input to stdout for empty input', () => {
    const input = {};
    const { stdout } = runScript(input);
    assert.strictEqual(stdout, JSON.stringify(input));
  }) ? passed++ : failed++);

  console.log(`\nResults: Passed: ${passed}, Failed: ${failed}`);
  process.exit(failed > 0 ? 1 : 0);
}

runTests();
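The doc-file-warning tests all assert the same hook contract: never block (exit 0 even for bad input), echo stdin to stdout unchanged, and reserve stderr for warnings. A minimal sketch of that warn-but-pass-through pattern, with a hypothetical allow-list (the real hook's rules are broader, as the tests above show):

```javascript
// Hypothetical, simplified version of the pattern — not the actual ECC hook.
const ALLOWED = new Set(['README.md', 'CLAUDE.md', 'AGENTS.md']); // assumed subset

function checkDocFile(rawInput) {
  let warning = '';
  try {
    const data = JSON.parse(rawInput);
    const filePath = (data.tool_input && data.tool_input.file_path) || '';
    const base = filePath.split('/').pop();
    // Warn only on .md/.txt files outside the allow-list.
    if (/\.(md|txt)$/.test(base) && !ALLOWED.has(base)) {
      warning = `WARNING: non-standard doc file: ${filePath}`;
    }
  } catch {
    // Invalid JSON: still pass through, no warning, no failure.
  }
  // Always succeed and pass the input through untouched.
  return { stdout: rawInput, stderr: warning, code: 0 };
}
```

A real hook would read `rawInput` from stdin and write the three fields to `process.stdout`, `process.stderr`, and `process.exitCode`; keeping warnings on stderr is what lets the tool pipeline consume stdout as if the hook were not there.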
tests/hooks/hook-flags.test.js (new file, 397 lines)
@@ -0,0 +1,397 @@
/**
 * Tests for scripts/lib/hook-flags.js
 *
 * Run with: node tests/hooks/hook-flags.test.js
 */

const assert = require('assert');

// Import the module
const {
  VALID_PROFILES,
  normalizeId,
  getHookProfile,
  getDisabledHookIds,
  parseProfiles,
  isHookEnabled,
} = require('../../scripts/lib/hook-flags');

// Test helper
function test(name, fn) {
  try {
    fn();
    console.log(`  ✓ ${name}`);
    return true;
  } catch (err) {
    console.log(`  ✗ ${name}`);
    console.log(`    Error: ${err.message}`);
    return false;
  }
}

// Helper to save and restore env vars
function withEnv(vars, fn) {
  const saved = {};
  for (const key of Object.keys(vars)) {
    saved[key] = process.env[key];
    if (vars[key] === undefined) {
      delete process.env[key];
    } else {
      process.env[key] = vars[key];
    }
  }
  try {
    fn();
  } finally {
    for (const key of Object.keys(saved)) {
      if (saved[key] === undefined) {
        delete process.env[key];
      } else {
        process.env[key] = saved[key];
      }
    }
  }
}

// Test suite
function runTests() {
  console.log('\n=== Testing hook-flags.js ===\n');

  let passed = 0;
  let failed = 0;

  // VALID_PROFILES tests
  console.log('VALID_PROFILES:');

  if (test('is a Set', () => {
    assert.ok(VALID_PROFILES instanceof Set);
  })) passed++; else failed++;

  if (test('contains minimal, standard, strict', () => {
    assert.ok(VALID_PROFILES.has('minimal'));
    assert.ok(VALID_PROFILES.has('standard'));
    assert.ok(VALID_PROFILES.has('strict'));
  })) passed++; else failed++;

  if (test('contains exactly 3 profiles', () => {
    assert.strictEqual(VALID_PROFILES.size, 3);
  })) passed++; else failed++;

  // normalizeId tests
  console.log('\nnormalizeId:');

  if (test('returns empty string for null', () => {
    assert.strictEqual(normalizeId(null), '');
  })) passed++; else failed++;

  if (test('returns empty string for undefined', () => {
    assert.strictEqual(normalizeId(undefined), '');
  })) passed++; else failed++;

  if (test('returns empty string for empty string', () => {
    assert.strictEqual(normalizeId(''), '');
  })) passed++; else failed++;

  if (test('trims whitespace', () => {
    assert.strictEqual(normalizeId(' hello '), 'hello');
  })) passed++; else failed++;

  if (test('converts to lowercase', () => {
    assert.strictEqual(normalizeId('MyHook'), 'myhook');
  })) passed++; else failed++;

  if (test('handles mixed case with whitespace', () => {
    assert.strictEqual(normalizeId(' My-Hook-ID '), 'my-hook-id');
  })) passed++; else failed++;

  if (test('converts numbers to string', () => {
    assert.strictEqual(normalizeId(123), '123');
  })) passed++; else failed++;

  if (test('returns empty string for whitespace-only input', () => {
    assert.strictEqual(normalizeId(' '), '');
  })) passed++; else failed++;

  // getHookProfile tests
  console.log('\ngetHookProfile:');

  if (test('defaults to standard when env var not set', () => {
    withEnv({ ECC_HOOK_PROFILE: undefined }, () => {
      assert.strictEqual(getHookProfile(), 'standard');
    });
  })) passed++; else failed++;

  if (test('returns minimal when set to minimal', () => {
    withEnv({ ECC_HOOK_PROFILE: 'minimal' }, () => {
      assert.strictEqual(getHookProfile(), 'minimal');
    });
  })) passed++; else failed++;

  if (test('returns standard when set to standard', () => {
    withEnv({ ECC_HOOK_PROFILE: 'standard' }, () => {
      assert.strictEqual(getHookProfile(), 'standard');
    });
  })) passed++; else failed++;

  if (test('returns strict when set to strict', () => {
    withEnv({ ECC_HOOK_PROFILE: 'strict' }, () => {
      assert.strictEqual(getHookProfile(), 'strict');
    });
  })) passed++; else failed++;

  if (test('is case-insensitive', () => {
    withEnv({ ECC_HOOK_PROFILE: 'STRICT' }, () => {
      assert.strictEqual(getHookProfile(), 'strict');
    });
  })) passed++; else failed++;

  if (test('trims whitespace from env var', () => {
    withEnv({ ECC_HOOK_PROFILE: ' minimal ' }, () => {
      assert.strictEqual(getHookProfile(), 'minimal');
    });
  })) passed++; else failed++;

  if (test('defaults to standard for invalid value', () => {
    withEnv({ ECC_HOOK_PROFILE: 'invalid' }, () => {
      assert.strictEqual(getHookProfile(), 'standard');
    });
  })) passed++; else failed++;

  if (test('defaults to standard for empty string', () => {
    withEnv({ ECC_HOOK_PROFILE: '' }, () => {
      assert.strictEqual(getHookProfile(), 'standard');
    });
  })) passed++; else failed++;

  // getDisabledHookIds tests
  console.log('\ngetDisabledHookIds:');

  if (test('returns empty Set when env var not set', () => {
    withEnv({ ECC_DISABLED_HOOKS: undefined }, () => {
      const result = getDisabledHookIds();
      assert.ok(result instanceof Set);
      assert.strictEqual(result.size, 0);
    });
  })) passed++; else failed++;

  if (test('returns empty Set for empty string', () => {
    withEnv({ ECC_DISABLED_HOOKS: '' }, () => {
      assert.strictEqual(getDisabledHookIds().size, 0);
    });
  })) passed++; else failed++;

  if (test('returns empty Set for whitespace-only string', () => {
    withEnv({ ECC_DISABLED_HOOKS: ' ' }, () => {
      assert.strictEqual(getDisabledHookIds().size, 0);
    });
  })) passed++; else failed++;

  if (test('parses single hook id', () => {
    withEnv({ ECC_DISABLED_HOOKS: 'my-hook' }, () => {
      const result = getDisabledHookIds();
      assert.strictEqual(result.size, 1);
      assert.ok(result.has('my-hook'));
    });
  })) passed++; else failed++;

  if (test('parses multiple comma-separated hook ids', () => {
    withEnv({ ECC_DISABLED_HOOKS: 'hook-a,hook-b,hook-c' }, () => {
      const result = getDisabledHookIds();
      assert.strictEqual(result.size, 3);
      assert.ok(result.has('hook-a'));
      assert.ok(result.has('hook-b'));
      assert.ok(result.has('hook-c'));
    });
  })) passed++; else failed++;

  if (test('trims whitespace around hook ids', () => {
    withEnv({ ECC_DISABLED_HOOKS: ' hook-a , hook-b ' }, () => {
      const result = getDisabledHookIds();
      assert.strictEqual(result.size, 2);
      assert.ok(result.has('hook-a'));
      assert.ok(result.has('hook-b'));
    });
  })) passed++; else failed++;

  if (test('normalizes hook ids to lowercase', () => {
    withEnv({ ECC_DISABLED_HOOKS: 'MyHook,ANOTHER' }, () => {
      const result = getDisabledHookIds();
      assert.ok(result.has('myhook'));
      assert.ok(result.has('another'));
    });
  })) passed++; else failed++;

  if (test('filters out empty entries from trailing commas', () => {
    withEnv({ ECC_DISABLED_HOOKS: 'hook-a,,hook-b,' }, () => {
      const result = getDisabledHookIds();
      assert.strictEqual(result.size, 2);
      assert.ok(result.has('hook-a'));
      assert.ok(result.has('hook-b'));
    });
  })) passed++; else failed++;

  // parseProfiles tests
  console.log('\nparseProfiles:');

  if (test('returns fallback for null input', () => {
    const result = parseProfiles(null);
    assert.deepStrictEqual(result, ['standard', 'strict']);
  })) passed++; else failed++;

  if (test('returns fallback for undefined input', () => {
    const result = parseProfiles(undefined);
    assert.deepStrictEqual(result, ['standard', 'strict']);
  })) passed++; else failed++;

  if (test('uses custom fallback when provided', () => {
    const result = parseProfiles(null, ['minimal']);
    assert.deepStrictEqual(result, ['minimal']);
  })) passed++; else failed++;

  if (test('parses comma-separated string', () => {
    const result = parseProfiles('minimal,strict');
    assert.deepStrictEqual(result, ['minimal', 'strict']);
  })) passed++; else failed++;

  if (test('parses single string value', () => {
    const result = parseProfiles('strict');
    assert.deepStrictEqual(result, ['strict']);
  })) passed++; else failed++;

  if (test('parses array of profiles', () => {
    const result = parseProfiles(['minimal', 'standard']);
    assert.deepStrictEqual(result, ['minimal', 'standard']);
  })) passed++; else failed++;

  if (test('filters invalid profiles from string', () => {
    const result = parseProfiles('minimal,invalid,strict');
    assert.deepStrictEqual(result, ['minimal', 'strict']);
  })) passed++; else failed++;

  if (test('filters invalid profiles from array', () => {
    const result = parseProfiles(['minimal', 'bogus', 'strict']);
    assert.deepStrictEqual(result, ['minimal', 'strict']);
  })) passed++; else failed++;

  if (test('returns fallback when all string values are invalid', () => {
    const result = parseProfiles('invalid,bogus');
    assert.deepStrictEqual(result, ['standard', 'strict']);
  })) passed++; else failed++;

  if (test('returns fallback when all array values are invalid', () => {
    const result = parseProfiles(['invalid', 'bogus']);
    assert.deepStrictEqual(result, ['standard', 'strict']);
  })) passed++; else failed++;

  if (test('is case-insensitive for string input', () => {
    const result = parseProfiles('MINIMAL,STRICT');
    assert.deepStrictEqual(result, ['minimal', 'strict']);
  })) passed++; else failed++;

  if (test('is case-insensitive for array input', () => {
    const result = parseProfiles(['MINIMAL', 'STRICT']);
    assert.deepStrictEqual(result, ['minimal', 'strict']);
  })) passed++; else failed++;

  if (test('trims whitespace in string input', () => {
    const result = parseProfiles(' minimal , strict ');
    assert.deepStrictEqual(result, ['minimal', 'strict']);
  })) passed++; else failed++;

  if (test('handles null values in array', () => {
    const result = parseProfiles([null, 'strict']);
    assert.deepStrictEqual(result, ['strict']);
  })) passed++; else failed++;

  // isHookEnabled tests
  console.log('\nisHookEnabled:');

  if (test('returns true by default for a hook (standard profile)', () => {
    withEnv({ ECC_HOOK_PROFILE: undefined, ECC_DISABLED_HOOKS: undefined }, () => {
      assert.strictEqual(isHookEnabled('my-hook'), true);
    });
  })) passed++; else failed++;

  if (test('returns true for empty hookId', () => {
    withEnv({ ECC_HOOK_PROFILE: undefined, ECC_DISABLED_HOOKS: undefined }, () => {
      assert.strictEqual(isHookEnabled(''), true);
    });
  })) passed++; else failed++;

  if (test('returns true for null hookId', () => {
    withEnv({ ECC_HOOK_PROFILE: undefined, ECC_DISABLED_HOOKS: undefined }, () => {
      assert.strictEqual(isHookEnabled(null), true);
    });
  })) passed++; else failed++;

  if (test('returns false when hook is in disabled list', () => {
    withEnv({ ECC_HOOK_PROFILE: undefined, ECC_DISABLED_HOOKS: 'my-hook' }, () => {
      assert.strictEqual(isHookEnabled('my-hook'), false);
    });
  })) passed++; else failed++;

  if (test('disabled check is case-insensitive', () => {
    withEnv({ ECC_HOOK_PROFILE: undefined, ECC_DISABLED_HOOKS: 'MY-HOOK' }, () => {
      assert.strictEqual(isHookEnabled('my-hook'), false);
    });
  })) passed++; else failed++;

  if (test('returns true when hook is not in disabled list', () => {
    withEnv({ ECC_HOOK_PROFILE: undefined, ECC_DISABLED_HOOKS: 'other-hook' }, () => {
      assert.strictEqual(isHookEnabled('my-hook'), true);
    });
  })) passed++; else failed++;

  if (test('returns false when current profile is not in allowed profiles', () => {
    withEnv({ ECC_HOOK_PROFILE: 'minimal', ECC_DISABLED_HOOKS: undefined }, () => {
      assert.strictEqual(isHookEnabled('my-hook', { profiles: 'strict' }), false);
    });
  })) passed++; else failed++;

  if (test('returns true when current profile is in allowed profiles', () => {
    withEnv({ ECC_HOOK_PROFILE: 'strict', ECC_DISABLED_HOOKS: undefined }, () => {
      assert.strictEqual(isHookEnabled('my-hook', { profiles: 'standard,strict' }), true);
|
||||
});
|
||||
})) passed++; else failed++;
|
||||
|
||||
if (test('returns true when current profile matches single allowed profile', () => {
|
||||
withEnv({ ECC_HOOK_PROFILE: 'minimal', ECC_DISABLED_HOOKS: undefined }, () => {
|
||||
assert.strictEqual(isHookEnabled('my-hook', { profiles: 'minimal' }), true);
|
||||
});
|
||||
})) passed++; else failed++;
|
||||
|
||||
if (test('disabled hooks take precedence over profile match', () => {
|
||||
withEnv({ ECC_HOOK_PROFILE: 'strict', ECC_DISABLED_HOOKS: 'my-hook' }, () => {
|
||||
assert.strictEqual(isHookEnabled('my-hook', { profiles: 'strict' }), false);
|
||||
});
|
||||
})) passed++; else failed++;
|
||||
|
||||
if (test('uses default profiles (standard, strict) when none specified', () => {
|
||||
withEnv({ ECC_HOOK_PROFILE: 'minimal', ECC_DISABLED_HOOKS: undefined }, () => {
|
||||
assert.strictEqual(isHookEnabled('my-hook'), false);
|
||||
});
|
||||
})) passed++; else failed++;
|
||||
|
||||
if (test('allows standard profile by default', () => {
|
||||
withEnv({ ECC_HOOK_PROFILE: 'standard', ECC_DISABLED_HOOKS: undefined }, () => {
|
||||
assert.strictEqual(isHookEnabled('my-hook'), true);
|
||||
});
|
||||
})) passed++; else failed++;
|
||||
|
||||
if (test('allows strict profile by default', () => {
|
||||
withEnv({ ECC_HOOK_PROFILE: 'strict', ECC_DISABLED_HOOKS: undefined }, () => {
|
||||
assert.strictEqual(isHookEnabled('my-hook'), true);
|
||||
});
|
||||
})) passed++; else failed++;
|
||||
|
||||
if (test('accepts array profiles option', () => {
|
||||
withEnv({ ECC_HOOK_PROFILE: 'minimal', ECC_DISABLED_HOOKS: undefined }, () => {
|
||||
assert.strictEqual(isHookEnabled('my-hook', { profiles: ['minimal', 'standard'] }), true);
|
||||
});
|
||||
})) passed++; else failed++;
|
||||
|
||||
console.log(`\nResults: Passed: ${passed}, Failed: ${failed}`);
|
||||
process.exit(failed > 0 ? 1 : 0);
|
||||
}
|
||||
|
||||
runTests();
|
||||
@@ -56,42 +56,24 @@ console.log('--- observe.sh signal throttling ---');

 test('observe.sh contains SIGNAL_EVERY_N throttle variable', () => {
   const content = fs.readFileSync(observeShPath, 'utf8');
-  assert.ok(
-    content.includes('SIGNAL_EVERY_N'),
-    'observe.sh should define SIGNAL_EVERY_N for throttling'
-  );
+  assert.ok(content.includes('SIGNAL_EVERY_N'), 'observe.sh should define SIGNAL_EVERY_N for throttling');
 });

 test('observe.sh uses a counter file instead of signaling every call', () => {
   const content = fs.readFileSync(observeShPath, 'utf8');
-  assert.ok(
-    content.includes('.observer-signal-counter'),
-    'observe.sh should use a signal counter file'
-  );
+  assert.ok(content.includes('.observer-signal-counter'), 'observe.sh should use a signal counter file');
 });

 test('observe.sh only signals when counter reaches threshold', () => {
   const content = fs.readFileSync(observeShPath, 'utf8');
-  assert.ok(
-    content.includes('should_signal=0'),
-    'observe.sh should default should_signal to 0'
-  );
-  assert.ok(
-    content.includes('should_signal=1'),
-    'observe.sh should set should_signal=1 when threshold reached'
-  );
-  assert.ok(
-    content.includes('if [ "$should_signal" -eq 1 ]'),
-    'observe.sh should gate kill -USR1 behind should_signal check'
-  );
+  assert.ok(content.includes('should_signal=0'), 'observe.sh should default should_signal to 0');
+  assert.ok(content.includes('should_signal=1'), 'observe.sh should set should_signal=1 when threshold reached');
+  assert.ok(content.includes('if [ "$should_signal" -eq 1 ]'), 'observe.sh should gate kill -USR1 behind should_signal check');
 });

 test('observe.sh default throttle is 20 observations per signal', () => {
   const content = fs.readFileSync(observeShPath, 'utf8');
-  assert.ok(
-    content.includes('ECC_OBSERVER_SIGNAL_EVERY_N:-20'),
-    'Default signal frequency should be every 20 observations'
-  );
+  assert.ok(content.includes('ECC_OBSERVER_SIGNAL_EVERY_N:-20'), 'Default signal frequency should be every 20 observations');
 });

 // ──────────────────────────────────────────────────────
@@ -102,22 +84,13 @@ console.log('\n--- observer-loop.sh re-entrancy guard ---');

 test('observer-loop.sh defines ANALYZING guard variable', () => {
   const content = fs.readFileSync(observerLoopPath, 'utf8');
-  assert.ok(
-    content.includes('ANALYZING=0'),
-    'observer-loop.sh should initialize ANALYZING=0'
-  );
+  assert.ok(content.includes('ANALYZING=0'), 'observer-loop.sh should initialize ANALYZING=0');
 });

 test('on_usr1 checks ANALYZING before starting analysis', () => {
   const content = fs.readFileSync(observerLoopPath, 'utf8');
-  assert.ok(
-    content.includes('if [ "$ANALYZING" -eq 1 ]'),
-    'on_usr1 should check ANALYZING flag'
-  );
-  assert.ok(
-    content.includes('Analysis already in progress, skipping signal'),
-    'on_usr1 should log when skipping due to re-entrancy'
-  );
+  assert.ok(content.includes('if [ "$ANALYZING" -eq 1 ]'), 'on_usr1 should check ANALYZING flag');
+  assert.ok(content.includes('Analysis already in progress, skipping signal'), 'on_usr1 should log when skipping due to re-entrancy');
 });

 test('on_usr1 sets ANALYZING=1 before and ANALYZING=0 after analysis', () => {
@@ -139,30 +112,18 @@ console.log('\n--- observer-loop.sh cooldown throttle ---');

 test('observer-loop.sh defines ANALYSIS_COOLDOWN', () => {
   const content = fs.readFileSync(observerLoopPath, 'utf8');
-  assert.ok(
-    content.includes('ANALYSIS_COOLDOWN'),
-    'observer-loop.sh should define ANALYSIS_COOLDOWN'
-  );
+  assert.ok(content.includes('ANALYSIS_COOLDOWN'), 'observer-loop.sh should define ANALYSIS_COOLDOWN');
 });

 test('on_usr1 enforces cooldown between analyses', () => {
   const content = fs.readFileSync(observerLoopPath, 'utf8');
-  assert.ok(
-    content.includes('LAST_ANALYSIS_EPOCH'),
-    'Should track last analysis time'
-  );
-  assert.ok(
-    content.includes('Analysis cooldown active'),
-    'Should log when cooldown prevents analysis'
-  );
+  assert.ok(content.includes('LAST_ANALYSIS_EPOCH'), 'Should track last analysis time');
+  assert.ok(content.includes('Analysis cooldown active'), 'Should log when cooldown prevents analysis');
 });

 test('default cooldown is 60 seconds', () => {
   const content = fs.readFileSync(observerLoopPath, 'utf8');
-  assert.ok(
-    content.includes('ECC_OBSERVER_ANALYSIS_COOLDOWN:-60'),
-    'Default cooldown should be 60 seconds'
-  );
+  assert.ok(content.includes('ECC_OBSERVER_ANALYSIS_COOLDOWN:-60'), 'Default cooldown should be 60 seconds');
 });

 // ──────────────────────────────────────────────────────
@@ -173,30 +134,18 @@ console.log('\n--- observer-loop.sh tail-based sampling ---');

 test('analyze_observations uses tail to sample recent observations', () => {
   const content = fs.readFileSync(observerLoopPath, 'utf8');
-  assert.ok(
-    content.includes('tail -n "$MAX_ANALYSIS_LINES"'),
-    'Should use tail to limit observations sent to LLM'
-  );
+  assert.ok(content.includes('tail -n "$MAX_ANALYSIS_LINES"'), 'Should use tail to limit observations sent to LLM');
 });

 test('default max analysis lines is 500', () => {
   const content = fs.readFileSync(observerLoopPath, 'utf8');
-  assert.ok(
-    content.includes('ECC_OBSERVER_MAX_ANALYSIS_LINES:-500'),
-    'Default should sample last 500 lines'
-  );
+  assert.ok(content.includes('ECC_OBSERVER_MAX_ANALYSIS_LINES:-500'), 'Default should sample last 500 lines');
 });

 test('analysis temp file is created and cleaned up', () => {
   const content = fs.readFileSync(observerLoopPath, 'utf8');
-  assert.ok(
-    content.includes('ecc-observer-analysis'),
-    'Should create a temp analysis file'
-  );
-  assert.ok(
-    content.includes('rm -f "$prompt_file" "$analysis_file"'),
-    'Should clean up both prompt and analysis temp files'
-  );
+  assert.ok(content.includes('ecc-observer-analysis'), 'Should create a temp analysis file');
+  assert.ok(content.includes('rm -f "$prompt_file" "$analysis_file"'), 'Should clean up both prompt and analysis temp files');
 });

 test('prompt references analysis_file not full OBSERVATIONS_FILE', () => {
@@ -208,10 +157,7 @@ test('prompt references analysis_file not full OBSERVATIONS_FILE', () => {
   assert.ok(heredocStart > 0, 'Should find prompt heredoc start');
   assert.ok(heredocEnd > heredocStart, 'Should find prompt heredoc end');
   const promptSection = content.substring(heredocStart, heredocEnd);
-  assert.ok(
-    promptSection.includes('${analysis_file}'),
-    'Prompt should point Claude at the sampled analysis file, not the full observations file'
-  );
+  assert.ok(promptSection.includes('${analysis_file}'), 'Prompt should point Claude at the sampled analysis file, not the full observations file');
 });

 // ──────────────────────────────────────────────────────
@@ -287,22 +233,22 @@ test('observe.sh creates counter file and increments on each call', () => {
   fs.mkdirSync(hooksDir, { recursive: true });

   // Minimal detect-project.sh stub
-  fs.writeFileSync(path.join(scriptsDir, 'detect-project.sh'), [
-    '#!/bin/bash',
-    `PROJECT_ID="test-project"`,
-    `PROJECT_NAME="test-project"`,
-    `PROJECT_ROOT="${projectDir}"`,
-    `PROJECT_DIR="${projectDir}"`,
-    `CLV2_PYTHON_CMD="${process.platform === 'win32' ? 'python' : 'python3'}"`,
-    ''
-  ].join('\n'));
+  fs.writeFileSync(
+    path.join(scriptsDir, 'detect-project.sh'),
+    [
+      '#!/bin/bash',
+      `PROJECT_ID="test-project"`,
+      `PROJECT_NAME="test-project"`,
+      `PROJECT_ROOT="${projectDir}"`,
+      `PROJECT_DIR="${projectDir}"`,
+      `CLV2_PYTHON_CMD="${process.platform === 'win32' ? 'python' : 'python3'}"`,
+      ''
+    ].join('\n')
+  );

   // Copy observe.sh but patch SKILL_ROOT to our test dir
   let observeContent = fs.readFileSync(observeShPath, 'utf8');
-  observeContent = observeContent.replace(
-    'SKILL_ROOT="$(cd "$SCRIPT_DIR/.." && pwd)"',
-    `SKILL_ROOT="${skillRoot}"`
-  );
+  observeContent = observeContent.replace('SKILL_ROOT="$(cd "$SCRIPT_DIR/.." && pwd)"', `SKILL_ROOT="${skillRoot}"`);
   const testObserve = path.join(hooksDir, 'observe.sh');
   fs.writeFileSync(testObserve, observeContent, { mode: 0o755 });

@@ -333,10 +279,7 @@ test('observe.sh creates counter file and increments on each call', () => {
   if (fs.existsSync(counterFile)) {
     const val = fs.readFileSync(counterFile, 'utf8').trim();
     const counterVal = parseInt(val, 10);
-    assert.ok(
-      counterVal >= 1 && counterVal <= 2,
-      `Counter should be 1 or 2 after 2 calls, got ${counterVal}`
-    );
+    assert.ok(counterVal >= 1 && counterVal <= 2, `Counter should be 1 or 2 after 2 calls, got ${counterVal}`);
   } else {
     // If python3 is not available the hook exits early - that is acceptable
     const hasPython = spawnSync('python3', ['--version']).status === 0;
@@ -348,6 +291,44 @@ test('observe.sh creates counter file and increments on each call', () => {
   cleanupDir(testDir);
 });

+// ──────────────────────────────────────────────────────
+// Test group 7: Observer Haiku invocation flags
+// ──────────────────────────────────────────────────────
+
+console.log('\n--- Observer Haiku invocation flags ---');
+
+test('claude invocation includes --allowedTools flag', () => {
+  const content = fs.readFileSync(observerLoopPath, 'utf8');
+  assert.ok(content.includes('--allowedTools'), 'observer-loop.sh should include --allowedTools flag in claude invocation');
+});
+
+test('allowedTools includes Read permission', () => {
+  const content = fs.readFileSync(observerLoopPath, 'utf8');
+  const match = content.match(/--allowedTools\s+"([^"]+)"/);
+  assert.ok(match, 'Should find --allowedTools with quoted value');
+  assert.ok(match[1].includes('Read'), `allowedTools should include Read, got: ${match[1]}`);
+});
+
+test('allowedTools includes Write permission', () => {
+  const content = fs.readFileSync(observerLoopPath, 'utf8');
+  const match = content.match(/--allowedTools\s+"([^"]+)"/);
+  assert.ok(match, 'Should find --allowedTools with quoted value');
+  assert.ok(match[1].includes('Write'), `allowedTools should include Write, got: ${match[1]}`);
+});
+
+test('claude invocation still includes ECC_SKIP_OBSERVE and ECC_HOOK_PROFILE guards', () => {
+  const content = fs.readFileSync(observerLoopPath, 'utf8');
+  // Find the claude execution line(s)
+  const lines = content.split('\n');
+  const claudeLine = lines.find(l => l.includes('claude --model haiku'));
+  assert.ok(claudeLine, 'Should find claude --model haiku invocation line');
+  // The env vars are on the same line as the claude command
+  const claudeLineIndex = lines.indexOf(claudeLine);
+  const fullCommand = lines.slice(Math.max(0, claudeLineIndex - 1), claudeLineIndex + 3).join(' ');
+  assert.ok(fullCommand.includes('ECC_SKIP_OBSERVE=1'), 'claude invocation should include ECC_SKIP_OBSERVE=1 guard');
+  assert.ok(fullCommand.includes('ECC_HOOK_PROFILE=minimal'), 'claude invocation should include ECC_HOOK_PROFILE=minimal guard');
+});
+
 // ──────────────────────────────────────────────────────
 // Summary
 // ──────────────────────────────────────────────────────

tests/hooks/post-bash-hooks.test.js (Normal file, 207 lines)
@@ -0,0 +1,207 @@
/**
 * Tests for post-bash-build-complete.js and post-bash-pr-created.js
 *
 * Run with: node tests/hooks/post-bash-hooks.test.js
 */

const assert = require('assert');
const path = require('path');
const { spawnSync } = require('child_process');

const buildCompleteScript = path.join(__dirname, '..', '..', 'scripts', 'hooks', 'post-bash-build-complete.js');
const prCreatedScript = path.join(__dirname, '..', '..', 'scripts', 'hooks', 'post-bash-pr-created.js');

function test(name, fn) {
  try {
    fn();
    console.log(`  ✓ ${name}`);
    return true;
  } catch (err) {
    console.log(`  ✗ ${name}`);
    console.log(`    Error: ${err.message}`);
    return false;
  }
}

function runScript(scriptPath, input) {
  return spawnSync('node', [scriptPath], {
    encoding: 'utf8',
    input,
    stdio: ['pipe', 'pipe', 'pipe']
  });
}

let passed = 0;
let failed = 0;

// ── post-bash-build-complete.js ──────────────────────────────────

console.log('\nPost-Bash Build Complete Hook Tests');
console.log('====================================\n');

console.log('Build command detection:');

if (test('stderr contains "Build completed" for npm run build command', () => {
  const input = JSON.stringify({ tool_input: { command: 'npm run build' } });
  const result = runScript(buildCompleteScript, input);
  assert.strictEqual(result.status, 0, 'Should exit with code 0');
  assert.ok(result.stderr.includes('Build completed'), `stderr should contain "Build completed", got: ${result.stderr}`);
})) passed++; else failed++;

if (test('stderr contains "Build completed" for pnpm build command', () => {
  const input = JSON.stringify({ tool_input: { command: 'pnpm build' } });
  const result = runScript(buildCompleteScript, input);
  assert.strictEqual(result.status, 0, 'Should exit with code 0');
  assert.ok(result.stderr.includes('Build completed'), `stderr should contain "Build completed", got: ${result.stderr}`);
})) passed++; else failed++;

if (test('stderr contains "Build completed" for yarn build command', () => {
  const input = JSON.stringify({ tool_input: { command: 'yarn build' } });
  const result = runScript(buildCompleteScript, input);
  assert.strictEqual(result.status, 0, 'Should exit with code 0');
  assert.ok(result.stderr.includes('Build completed'), `stderr should contain "Build completed", got: ${result.stderr}`);
})) passed++; else failed++;

console.log('\nNon-build command detection:');

if (test('no stderr message for npm test command', () => {
  const input = JSON.stringify({ tool_input: { command: 'npm test' } });
  const result = runScript(buildCompleteScript, input);
  assert.strictEqual(result.status, 0, 'Should exit with code 0');
  assert.strictEqual(result.stderr, '', 'stderr should be empty for non-build command');
})) passed++; else failed++;

if (test('no stderr message for ls command', () => {
  const input = JSON.stringify({ tool_input: { command: 'ls -la' } });
  const result = runScript(buildCompleteScript, input);
  assert.strictEqual(result.status, 0, 'Should exit with code 0');
  assert.strictEqual(result.stderr, '', 'stderr should be empty for non-build command');
})) passed++; else failed++;

if (test('no stderr message for git status command', () => {
  const input = JSON.stringify({ tool_input: { command: 'git status' } });
  const result = runScript(buildCompleteScript, input);
  assert.strictEqual(result.status, 0, 'Should exit with code 0');
  assert.strictEqual(result.stderr, '', 'stderr should be empty for non-build command');
})) passed++; else failed++;

console.log('\nStdout pass-through:');

if (test('stdout passes through input for build command', () => {
  const input = JSON.stringify({ tool_input: { command: 'npm run build' } });
  const result = runScript(buildCompleteScript, input);
  assert.strictEqual(result.stdout, input, 'stdout should be the original input');
})) passed++; else failed++;

if (test('stdout passes through input for non-build command', () => {
  const input = JSON.stringify({ tool_input: { command: 'npm test' } });
  const result = runScript(buildCompleteScript, input);
  assert.strictEqual(result.stdout, input, 'stdout should be the original input');
})) passed++; else failed++;

if (test('stdout passes through input for invalid JSON', () => {
  const input = 'not valid json';
  const result = runScript(buildCompleteScript, input);
  assert.strictEqual(result.stdout, input, 'stdout should be the original input');
})) passed++; else failed++;

if (test('stdout passes through empty input', () => {
  const input = '';
  const result = runScript(buildCompleteScript, input);
  assert.strictEqual(result.stdout, input, 'stdout should be the original input');
})) passed++; else failed++;

// ── post-bash-pr-created.js ──────────────────────────────────────

console.log('\n\nPost-Bash PR Created Hook Tests');
console.log('================================\n');

console.log('PR creation detection:');

if (test('stderr contains PR URL when gh pr create output has PR URL', () => {
  const input = JSON.stringify({
    tool_input: { command: 'gh pr create --title "Fix bug" --body "desc"' },
    tool_output: { output: 'https://github.com/owner/repo/pull/42\n' }
  });
  const result = runScript(prCreatedScript, input);
  assert.strictEqual(result.status, 0, 'Should exit with code 0');
  assert.ok(result.stderr.includes('https://github.com/owner/repo/pull/42'), `stderr should contain PR URL, got: ${result.stderr}`);
  assert.ok(result.stderr.includes('[Hook] PR created:'), 'stderr should contain PR created message');
  assert.ok(result.stderr.includes('gh pr review 42'), 'stderr should contain review command');
})) passed++; else failed++;

if (test('stderr contains correct repo in review command', () => {
  const input = JSON.stringify({
    tool_input: { command: 'gh pr create' },
    tool_output: { output: 'Created PR\nhttps://github.com/my-org/my-repo/pull/123\nDone' }
  });
  const result = runScript(prCreatedScript, input);
  assert.strictEqual(result.status, 0, 'Should exit with code 0');
  assert.ok(result.stderr.includes('--repo my-org/my-repo'), `stderr should contain correct repo, got: ${result.stderr}`);
  assert.ok(result.stderr.includes('gh pr review 123'), 'stderr should contain correct PR number');
})) passed++; else failed++;

console.log('\nNon-PR command detection:');

if (test('no stderr about PR for non-gh command', () => {
  const input = JSON.stringify({ tool_input: { command: 'npm test' } });
  const result = runScript(prCreatedScript, input);
  assert.strictEqual(result.status, 0, 'Should exit with code 0');
  assert.strictEqual(result.stderr, '', 'stderr should be empty for non-PR command');
})) passed++; else failed++;

if (test('no stderr about PR for gh issue command', () => {
  const input = JSON.stringify({ tool_input: { command: 'gh issue list' } });
  const result = runScript(prCreatedScript, input);
  assert.strictEqual(result.status, 0, 'Should exit with code 0');
  assert.strictEqual(result.stderr, '', 'stderr should be empty for non-PR create command');
})) passed++; else failed++;

if (test('no stderr about PR for gh pr create without PR URL in output', () => {
  const input = JSON.stringify({
    tool_input: { command: 'gh pr create' },
    tool_output: { output: 'Error: could not create PR' }
  });
  const result = runScript(prCreatedScript, input);
  assert.strictEqual(result.status, 0, 'Should exit with code 0');
  assert.strictEqual(result.stderr, '', 'stderr should be empty when no PR URL in output');
})) passed++; else failed++;

if (test('no stderr about PR for gh pr list command', () => {
  const input = JSON.stringify({ tool_input: { command: 'gh pr list' } });
  const result = runScript(prCreatedScript, input);
  assert.strictEqual(result.status, 0, 'Should exit with code 0');
  assert.strictEqual(result.stderr, '', 'stderr should be empty for gh pr list');
})) passed++; else failed++;

console.log('\nStdout pass-through:');

if (test('stdout passes through input for PR create command', () => {
  const input = JSON.stringify({
    tool_input: { command: 'gh pr create' },
    tool_output: { output: 'https://github.com/owner/repo/pull/1' }
  });
  const result = runScript(prCreatedScript, input);
  assert.strictEqual(result.stdout, input, 'stdout should be the original input');
})) passed++; else failed++;

if (test('stdout passes through input for non-PR command', () => {
  const input = JSON.stringify({ tool_input: { command: 'echo hello' } });
  const result = runScript(prCreatedScript, input);
  assert.strictEqual(result.stdout, input, 'stdout should be the original input');
})) passed++; else failed++;

if (test('stdout passes through input for invalid JSON', () => {
  const input = 'not valid json';
  const result = runScript(prCreatedScript, input);
  assert.strictEqual(result.stdout, input, 'stdout should be the original input');
})) passed++; else failed++;

if (test('stdout passes through empty input', () => {
  const input = '';
  const result = runScript(prCreatedScript, input);
  assert.strictEqual(result.stdout, input, 'stdout should be the original input');
})) passed++; else failed++;

console.log(`\nResults: Passed: ${passed}, Failed: ${failed}`);
process.exit(failed > 0 ? 1 : 0);
tests/hooks/pre-bash-dev-server-block.test.js (Normal file, 121 lines)
@@ -0,0 +1,121 @@
/**
 * Tests for pre-bash-dev-server-block.js hook
 *
 * Run with: node tests/hooks/pre-bash-dev-server-block.test.js
 */

const assert = require('assert');
const path = require('path');
const { spawnSync } = require('child_process');

const script = path.join(__dirname, '..', '..', 'scripts', 'hooks', 'pre-bash-dev-server-block.js');

function test(name, fn) {
  try {
    fn();
    console.log(`  ✓ ${name}`);
    return true;
  } catch (err) {
    console.log(`  ✗ ${name}`);
    console.log(`    Error: ${err.message}`);
    return false;
  }
}

function runScript(command) {
  const input = { tool_input: { command } };
  const result = spawnSync('node', [script], {
    encoding: 'utf8',
    input: JSON.stringify(input),
    timeout: 10000,
  });
  return { code: result.status || 0, stdout: result.stdout || '', stderr: result.stderr || '' };
}

function runTests() {
  console.log('\n=== Testing pre-bash-dev-server-block.js ===\n');

  let passed = 0;
  let failed = 0;

  const isWindows = process.platform === 'win32';

  // --- Blocking tests (non-Windows only) ---

  if (!isWindows) {
    (test('blocks npm run dev (exit code 2, stderr contains BLOCKED)', () => {
      const result = runScript('npm run dev');
      assert.strictEqual(result.code, 2, `Expected exit code 2, got ${result.code}`);
      assert.ok(result.stderr.includes('BLOCKED'), `Expected stderr to contain BLOCKED, got: ${result.stderr}`);
    }) ? passed++ : failed++);

    (test('blocks pnpm dev (exit code 2)', () => {
      const result = runScript('pnpm dev');
      assert.strictEqual(result.code, 2, `Expected exit code 2, got ${result.code}`);
    }) ? passed++ : failed++);

    (test('blocks yarn dev (exit code 2)', () => {
      const result = runScript('yarn dev');
      assert.strictEqual(result.code, 2, `Expected exit code 2, got ${result.code}`);
    }) ? passed++ : failed++);

    (test('blocks bun run dev (exit code 2)', () => {
      const result = runScript('bun run dev');
      assert.strictEqual(result.code, 2, `Expected exit code 2, got ${result.code}`);
    }) ? passed++ : failed++);
  } else {
    console.log('  (skipping blocking tests on Windows)\n');
  }

  // --- Allow tests ---

  (test('allows tmux-wrapped npm run dev (exit code 0)', () => {
    const result = runScript('tmux new-session -d -s dev "npm run dev"');
    assert.strictEqual(result.code, 0, `Expected exit code 0, got ${result.code}`);
  }) ? passed++ : failed++);

  (test('allows npm install (exit code 0)', () => {
    const result = runScript('npm install');
    assert.strictEqual(result.code, 0, `Expected exit code 0, got ${result.code}`);
  }) ? passed++ : failed++);

  (test('allows npm test (exit code 0)', () => {
    const result = runScript('npm test');
    assert.strictEqual(result.code, 0, `Expected exit code 0, got ${result.code}`);
  }) ? passed++ : failed++);

  (test('allows npm run build (exit code 0)', () => {
    const result = runScript('npm run build');
    assert.strictEqual(result.code, 0, `Expected exit code 0, got ${result.code}`);
  }) ? passed++ : failed++);

  // --- Edge cases ---

  (test('empty/invalid input passes through (exit code 0)', () => {
    const result = spawnSync('node', [script], {
      encoding: 'utf8',
      input: '',
      timeout: 10000,
    });
    assert.strictEqual(result.status || 0, 0, `Expected exit code 0, got ${result.status}`);
  }) ? passed++ : failed++);

  (test('stdout contains original input on pass-through', () => {
    const input = { tool_input: { command: 'npm install' } };
    const inputStr = JSON.stringify(input);
    const result = spawnSync('node', [script], {
      encoding: 'utf8',
      input: inputStr,
      timeout: 10000,
    });
    assert.strictEqual(result.status || 0, 0);
    assert.strictEqual(result.stdout.trim(), inputStr, `Expected stdout to contain original input`);
  }) ? passed++ : failed++);

  // --- Summary ---

  console.log(`\nResults: Passed: ${passed}, Failed: ${failed}`);
  process.exit(failed > 0 ? 1 : 0);
}

runTests();
tests/hooks/pre-bash-reminders.test.js (Normal file, 104 lines)
@@ -0,0 +1,104 @@
/**
 * Tests for pre-bash-git-push-reminder.js and pre-bash-tmux-reminder.js hooks
 *
 * Run with: node tests/hooks/pre-bash-reminders.test.js
 */

const assert = require('assert');
const path = require('path');
const { spawnSync } = require('child_process');

const gitPushScript = path.join(__dirname, '..', '..', 'scripts', 'hooks', 'pre-bash-git-push-reminder.js');
const tmuxScript = path.join(__dirname, '..', '..', 'scripts', 'hooks', 'pre-bash-tmux-reminder.js');

function test(name, fn) {
  try {
    fn();
    console.log(`  ✓ ${name}`);
    return true;
  } catch (err) {
    console.log(`  ✗ ${name}`);
    console.log(`    Error: ${err.message}`);
    return false;
  }
}

function runScript(scriptPath, command, envOverrides = {}) {
  const input = { tool_input: { command } };
  const inputStr = JSON.stringify(input);
  const result = spawnSync('node', [scriptPath], {
    encoding: 'utf8',
    input: inputStr,
    timeout: 10000,
    env: { ...process.env, ...envOverrides },
  });
  return { code: result.status || 0, stdout: result.stdout || '', stderr: result.stderr || '', inputStr };
}

function runTests() {
  console.log('\n=== Testing pre-bash-git-push-reminder.js & pre-bash-tmux-reminder.js ===\n');

  let passed = 0;
  let failed = 0;

  // --- git-push-reminder tests ---

  console.log('  git-push-reminder:');

  (test('git push triggers stderr warning', () => {
    const result = runScript(gitPushScript, 'git push origin main');
    assert.strictEqual(result.code, 0, `Expected exit code 0, got ${result.code}`);
    assert.ok(result.stderr.includes('[Hook]'), `Expected stderr to contain [Hook], got: ${result.stderr}`);
    assert.ok(result.stderr.includes('Review changes before push'), `Expected stderr to mention review`);
  }) ? passed++ : failed++);

  (test('git status has no warning', () => {
    const result = runScript(gitPushScript, 'git status');
    assert.strictEqual(result.code, 0, `Expected exit code 0, got ${result.code}`);
    assert.strictEqual(result.stderr, '', `Expected no stderr, got: ${result.stderr}`);
  }) ? passed++ : failed++);

  (test('git push always passes through input on stdout', () => {
|
||||
const result = runScript(gitPushScript, 'git push');
|
||||
assert.strictEqual(result.stdout, result.inputStr, 'Expected stdout to match original input');
|
||||
}) ? passed++ : failed++);
|
||||
|
||||
// --- tmux-reminder tests (non-Windows only) ---
|
||||
|
||||
const isWindows = process.platform === 'win32';
|
||||
|
||||
if (!isWindows) {
|
||||
console.log('\n tmux-reminder:');
|
||||
|
||||
(test('npm install triggers tmux suggestion', () => {
|
||||
const result = runScript(tmuxScript, 'npm install', { TMUX: '' });
|
||||
assert.strictEqual(result.code, 0, `Expected exit code 0, got ${result.code}`);
|
||||
assert.ok(result.stderr.includes('[Hook]'), `Expected stderr to contain [Hook], got: ${result.stderr}`);
|
||||
assert.ok(result.stderr.includes('tmux'), `Expected stderr to mention tmux`);
|
||||
}) ? passed++ : failed++);
|
||||
|
||||
(test('npm test triggers tmux suggestion', () => {
|
||||
const result = runScript(tmuxScript, 'npm test', { TMUX: '' });
|
||||
assert.strictEqual(result.code, 0, `Expected exit code 0, got ${result.code}`);
|
||||
assert.ok(result.stderr.includes('tmux'), `Expected stderr to mention tmux`);
|
||||
}) ? passed++ : failed++);
|
||||
|
||||
(test('regular command like ls has no tmux suggestion', () => {
|
||||
const result = runScript(tmuxScript, 'ls -la', { TMUX: '' });
|
||||
assert.strictEqual(result.code, 0, `Expected exit code 0, got ${result.code}`);
|
||||
assert.strictEqual(result.stderr, '', `Expected no stderr for ls, got: ${result.stderr}`);
|
||||
}) ? passed++ : failed++);
|
||||
|
||||
(test('tmux reminder always passes through input on stdout', () => {
|
||||
const result = runScript(tmuxScript, 'npm install', { TMUX: '' });
|
||||
assert.strictEqual(result.stdout, result.inputStr, 'Expected stdout to match original input');
|
||||
}) ? passed++ : failed++);
|
||||
} else {
|
||||
console.log('\n (skipping tmux-reminder tests on Windows)\n');
|
||||
}
|
||||
|
||||
console.log(`\nResults: Passed: ${passed}, Failed: ${failed}`);
|
||||
process.exit(failed > 0 ? 1 : 0);
|
||||
}
|
||||
|
||||
runTests();
|
||||
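The contract these tests exercise is the same for both reminder hooks: read the tool invocation as JSON from stdin, optionally write an advisory to stderr, and always echo the original input on stdout with exit code 0 so the tool call is never blocked. A minimal sketch of that contract (a hypothetical simplification, not the actual scripts in scripts/hooks/) looks like this:

```javascript
// Hypothetical pass-through reminder hook, simplified from the contract the
// tests above verify. The pattern and message are parameters here; the real
// hooks hard-code their own triggers.
function runReminderHook(raw, pattern, message) {
  let warning = '';
  try {
    const payload = JSON.parse(raw);
    const command = (payload && payload.tool_input && payload.tool_input.command) || '';
    if (pattern.test(command)) {
      warning = `[Hook] ${message}\n`; // advisory only, never blocks
    }
  } catch (_) {
    // Invalid JSON: stay silent and pass the input through unchanged.
  }
  return { stdout: raw, stderr: warning, code: 0 };
}

// Wiring it to real stdio would look roughly like:
// let raw = '';
// process.stdin.on('data', (c) => { raw += c; });
// process.stdin.on('end', () => {
//   const r = runReminderHook(raw, /\bgit\s+push\b/, 'Review changes before push');
//   process.stderr.write(r.stderr);
//   process.stdout.write(r.stdout);
//   process.exit(r.code);
// });
```

Because the hook only ever appends to stderr and echoes stdin, it is safe to chain with other PreToolUse hooks.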
159 tests/hooks/quality-gate.test.js Normal file
@@ -0,0 +1,159 @@
/**
 * Tests for scripts/hooks/quality-gate.js
 *
 * Run with: node tests/hooks/quality-gate.test.js
 */

const assert = require('assert');
const path = require('path');
const os = require('os');
const fs = require('fs');

const qualityGate = require('../../scripts/hooks/quality-gate');

function test(name, fn) {
  try {
    fn();
    console.log(`  ✓ ${name}`);
    return true;
  } catch (err) {
    console.log(`  ✗ ${name}`);
    console.log(`    Error: ${err.message}`);
    return false;
  }
}

let passed = 0;
let failed = 0;

console.log('\nQuality Gate Hook Tests');
console.log('========================\n');

// --- run() returns original input for valid JSON ---

console.log('run() pass-through behavior:');

if (test('returns original input for valid JSON with file_path', () => {
  const input = JSON.stringify({ tool_input: { file_path: '/tmp/nonexistent-file.js' } });
  const result = qualityGate.run(input);
  assert.strictEqual(result, input, 'Should return original input unchanged');
})) passed++; else failed++;

if (test('returns original input for valid JSON without file_path', () => {
  const input = JSON.stringify({ tool_input: { command: 'ls' } });
  const result = qualityGate.run(input);
  assert.strictEqual(result, input, 'Should return original input unchanged');
})) passed++; else failed++;

if (test('returns original input for valid JSON with nested structure', () => {
  const input = JSON.stringify({ tool_input: { file_path: '/some/path.ts', content: 'hello' }, other: [1, 2, 3] });
  const result = qualityGate.run(input);
  assert.strictEqual(result, input, 'Should return original input unchanged');
})) passed++; else failed++;

// --- run() returns original input for invalid JSON ---

console.log('\nInvalid JSON handling:');

if (test('returns original input for invalid JSON (no crash)', () => {
  const input = 'this is not json at all {{{';
  const result = qualityGate.run(input);
  assert.strictEqual(result, input, 'Should return original input unchanged');
})) passed++; else failed++;

if (test('returns original input for partial JSON', () => {
  const input = '{"tool_input": {';
  const result = qualityGate.run(input);
  assert.strictEqual(result, input, 'Should return original input unchanged');
})) passed++; else failed++;

if (test('returns original input for JSON with trailing garbage', () => {
  const input = '{"tool_input": {}}extra';
  const result = qualityGate.run(input);
  assert.strictEqual(result, input, 'Should return original input unchanged');
})) passed++; else failed++;

// --- run() returns original input when file does not exist ---

console.log('\nNon-existent file handling:');

if (test('returns original input when file_path points to non-existent file', () => {
  const input = JSON.stringify({ tool_input: { file_path: '/tmp/does-not-exist-12345.js' } });
  const result = qualityGate.run(input);
  assert.strictEqual(result, input, 'Should return original input unchanged');
})) passed++; else failed++;

if (test('returns original input when file_path is a non-existent .py file', () => {
  const input = JSON.stringify({ tool_input: { file_path: '/tmp/does-not-exist-12345.py' } });
  const result = qualityGate.run(input);
  assert.strictEqual(result, input, 'Should return original input unchanged');
})) passed++; else failed++;

if (test('returns original input when file_path is a non-existent .go file', () => {
  const input = JSON.stringify({ tool_input: { file_path: '/tmp/does-not-exist-12345.go' } });
  const result = qualityGate.run(input);
  assert.strictEqual(result, input, 'Should return original input unchanged');
})) passed++; else failed++;

// --- run() returns original input for empty input ---

console.log('\nEmpty input handling:');

if (test('returns original input for empty string', () => {
  const input = '';
  const result = qualityGate.run(input);
  assert.strictEqual(result, input, 'Should return empty string unchanged');
})) passed++; else failed++;

if (test('returns original input for whitespace-only string', () => {
  const input = '   ';
  const result = qualityGate.run(input);
  assert.strictEqual(result, input, 'Should return whitespace string unchanged');
})) passed++; else failed++;

// --- run() handles missing tool_input gracefully ---

console.log('\nMissing tool_input handling:');

if (test('handles missing tool_input gracefully', () => {
  const input = JSON.stringify({ something_else: 'value' });
  const result = qualityGate.run(input);
  assert.strictEqual(result, input, 'Should return original input unchanged');
})) passed++; else failed++;

if (test('handles null tool_input gracefully', () => {
  const input = JSON.stringify({ tool_input: null });
  const result = qualityGate.run(input);
  assert.strictEqual(result, input, 'Should return original input unchanged');
})) passed++; else failed++;

if (test('handles tool_input with empty file_path', () => {
  const input = JSON.stringify({ tool_input: { file_path: '' } });
  const result = qualityGate.run(input);
  assert.strictEqual(result, input, 'Should return original input unchanged');
})) passed++; else failed++;

if (test('handles empty JSON object', () => {
  const input = JSON.stringify({});
  const result = qualityGate.run(input);
  assert.strictEqual(result, input, 'Should return original input unchanged');
})) passed++; else failed++;

// --- run() with a real file (but no formatter installed) ---

console.log('\nReal file without formatter:');

if (test('returns original input for existing file with no formatter configured', () => {
  const tmpFile = path.join(os.tmpdir(), `quality-gate-test-${Date.now()}.js`);
  fs.writeFileSync(tmpFile, 'const x = 1;\n');
  try {
    const input = JSON.stringify({ tool_input: { file_path: tmpFile } });
    const result = qualityGate.run(input);
    assert.strictEqual(result, input, 'Should return original input unchanged');
  } finally {
    fs.unlinkSync(tmpFile);
  }
})) passed++; else failed++;

console.log(`\nResults: Passed: ${passed}, Failed: ${failed}`);
process.exit(failed > 0 ? 1 : 0);
@@ -12,7 +12,7 @@ const { spawnSync } = require('child_process');

 const dashboard = require('../../scripts/lib/skill-evolution/dashboard');
 const versioning = require('../../scripts/lib/skill-evolution/versioning');
-const provenance = require('../../scripts/lib/skill-evolution/provenance');
+const _provenance = require('../../scripts/lib/skill-evolution/provenance');

 const HEALTH_SCRIPT = path.join(__dirname, '..', '..', 'scripts', 'skills-health.js');
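The `provenance` → `_provenance` rename most likely silences an unused-variable lint warning: a leading underscore is a common convention that ESLint can be told to exempt. This is an assumption — the repo's actual lint configuration is not shown in this diff — but a flat-config fragment honoring that convention would look like:

```javascript
// Hypothetical ESLint flat-config fragment (eslint.config.js). With
// varsIgnorePattern set, identifiers beginning with "_" such as
// _provenance are exempt from the no-unused-vars rule.
module.exports = [
  {
    rules: {
      'no-unused-vars': [
        'error',
        { varsIgnorePattern: '^_', argsIgnorePattern: '^_' },
      ],
    },
  },
];
```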