28 Commits

Author SHA1 Message Date
Affaan Mustafa
7ccfda9e25 fix: skip flaky observer shell test on Windows (#722) 2026-03-20 07:30:52 -07:00
Affaan Mustafa
2643e0c72f fix: update catalog counts for flutter-reviewer (28 agents, 116 skills) 2026-03-20 07:11:16 -07:00
Maciej
1975a576c5 feat(agents): add flutter-reviewer agent and skill (#716)
Library-agnostic Flutter/Dart code reviewer that adapts to the project's
chosen state management solution (BLoC, Riverpod, Provider, GetX, MobX,
Signals) and architecture pattern (Clean Architecture, MVVM, feature-first).

Co-authored-by: Maciej Starosielec <maciej@code-snap.com>
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-20 06:40:28 -07:00
Chris Yau
f563fe2a3b fix: codex sync merges AGENTS.md instead of replacing it (#715)
The sync script previously overwrote ~/.codex/AGENTS.md on every run,
destroying any user-authored content. This adds marker-based merging
(<!-- BEGIN ECC --> / <!-- END ECC -->) so only the ECC-managed section
is replaced on subsequent runs, preserving user content outside the
markers.

Merge logic:
- No file → create with markers
- Both markers present (ordered, CRLF-safe) → replace only the ECC section
- BEGIN without END (corrupted) → full replace (backup saved)
- No markers at all → append ECC block (preserves existing content)

Also fixes:
- Symlink preservation: uses cat > instead of mv to write through symlinks
- CRLF handling: strips \r in marker detection to handle Windows-edited files
- Marker ordering: validates BEGIN appears before END, not just that both exist

The legacy heading-match heuristic was intentionally removed per council
review: any unmarked file is either user-authored (append is safe) or
legacy ECC-generated (duplicates once, deduplicates on next run via
markers). A timestamped backup is always saved before any mutation.

Generated with [Claude Code](https://claude.ai/code)
via [Happy](https://happy.engineering)

Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Happy <yesreply@happy.engineering>
2026-03-20 06:38:32 -07:00
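The marker-based merge described in the commit above can be sketched roughly as follows. Names here (`mergeAgentsMd`, `eccSection`) are illustrative assumptions, not the sync script's actual API; only the marker strings and the four merge cases come from the commit message:

```javascript
// Illustrative sketch of the AGENTS.md marker-based merge (assumed names).
const BEGIN = '<!-- BEGIN ECC -->';
const END = '<!-- END ECC -->';

function mergeAgentsMd(existing, eccSection) {
  const block = `${BEGIN}\n${eccSection}\n${END}`;
  if (existing === null) return block;            // no file: create with markers
  // Strip \r so Windows-edited (CRLF) files still match the markers.
  const normalized = existing.replace(/\r/g, '');
  const begin = normalized.indexOf(BEGIN);
  const end = normalized.indexOf(END);
  if (begin !== -1 && end !== -1 && begin < end) {
    // Both markers present and ordered: replace only the ECC section.
    return normalized.slice(0, begin) + block + normalized.slice(end + END.length);
  }
  if (begin !== -1 && end === -1) return block;   // BEGIN without END: full replace
  // No markers: append the ECC block, preserving user content.
  return `${normalized.trimEnd()}\n\n${block}\n`;
}
```

Note the ordering check (`begin < end`): validating marker order, not just presence, is what prevents a corrupted file from being silently half-merged.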
Affaan Mustafa
e8495aa3fc feat: add MCP health-check hook (#711) 2026-03-20 05:56:21 -07:00
Affaan Mustafa
35071150b7 fix: sanitize SessionStart session summaries (#710) 2026-03-20 05:42:32 -07:00
Affaan Mustafa
40f18885b1 feat(rules): add C# language support (#704)
* feat(rules): add C# language support

* feat: add everything-claude-code ECC bundle (#705)

* feat: add everything-claude-code ECC bundle (.claude/ecc-tools.json)

* feat: add everything-claude-code ECC bundle (.claude/skills/everything-claude-code/SKILL.md)

* feat: add everything-claude-code ECC bundle (.agents/skills/everything-claude-code/SKILL.md)

* feat: add everything-claude-code ECC bundle (.agents/skills/everything-claude-code/agents/openai.yaml)

* feat: add everything-claude-code ECC bundle (.claude/identity.json)

* feat: add everything-claude-code ECC bundle (.codex/agents/explorer.toml)

* feat: add everything-claude-code ECC bundle (.codex/agents/reviewer.toml)

* feat: add everything-claude-code ECC bundle (.codex/agents/docs-researcher.toml)

* feat: add everything-claude-code ECC bundle (.claude/rules/everything-claude-code-guardrails.md)

* feat: add everything-claude-code ECC bundle (.claude/research/everything-claude-code-research-playbook.md)

* feat: add everything-claude-code ECC bundle (.claude/team/everything-claude-code-team-config.json)

* feat: add everything-claude-code ECC bundle (.claude/enterprise/controls.md)

* feat: add everything-claude-code ECC bundle (.claude/commands/database-migration.md)

* feat: add everything-claude-code ECC bundle (.claude/commands/feature-development.md)

* feat: add everything-claude-code ECC bundle (.claude/commands/add-language-rules.md)

---------

Co-authored-by: ecc-tools[bot] <257055122+ecc-tools[bot]@users.noreply.github.com>

* ci: retrigger

---------

Co-authored-by: ecc-tools[bot] <257055122+ecc-tools[bot]@users.noreply.github.com>
2026-03-20 05:20:23 -07:00
Affaan Mustafa
b77f49569b feat: add nuxt 4 patterns skill (#702) 2026-03-20 04:44:31 -07:00
Affaan Mustafa
bea68549c5 feat: agent description compression with lazy loading (#696)
* feat: add agent description compression with lazy loading (#491)

Agent descriptions consume ~26k tokens (121KB across 27 agents). This adds
a compression library with three modes:
- catalog: metadata only (~2-3k tokens) for agent selection
- summary: metadata + first paragraph (~4-5k tokens) for routing
- full: no compression, for when agent is invoked

Includes lazy-load function to fetch full agent body on demand.
21 tests covering parsing, compression, filtering, and real agents dir.

* fix: update JSDoc to include all stats fields in buildAgentCatalog

Add compressedBytes and mode to the documented return type, matching
the actual implementation.
2026-03-20 03:53:22 -07:00
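The three compression modes in the commit above can be sketched as below. Function and field names are assumptions for illustration, not the actual `agent-compress` API; the catalog/summary/full semantics are from the commit message:

```javascript
// Illustrative sketch of the three agent-description compression modes.
function compressAgent(agent, mode) {
  const meta = { name: agent.name, description: agent.description };
  if (mode === 'catalog') return meta;              // metadata only (~2-3k tokens)
  if (mode === 'summary') {
    // Metadata plus the first paragraph of the body, for routing decisions.
    const firstParagraph = agent.body.split(/\n\s*\n/)[0];
    return { ...meta, summary: firstParagraph };    // ~4-5k tokens
  }
  return { ...meta, body: agent.body };             // 'full': no compression
}
```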
Affaan Mustafa
b981c765ae fix: stabilize windows project metadata assertions 2026-03-20 03:39:37 -07:00
Chris Yau
b61f549444 fix(tests): resolve Windows CI test failures (#701)
* fix(tests): skip bash tests on Windows and fix USERPROFILE in resolve-ecc-root

- hooks.test.js: add SKIP_BASH guard for 8 bash-dependent tests
  (detect-project.sh, observe.sh) while keeping 207 Node.js tests running
- resolve-ecc-root.test.js: add USERPROFILE to env overrides in 2
  INLINE_RESOLVE tests so os.homedir() resolves correctly on Windows

Generated with [Claude Code](https://claude.ai/code)
via [Happy](https://happy.engineering)

Co-Authored-By: Claude <noreply@anthropic.com>
Co-Authored-By: Happy <yesreply@happy.engineering>

* fix(tests): handle BOM in shebang stripping and skip worktree tests on Windows

- validators.test.js: replace regex stripShebang with character-code
  approach that handles UTF-8 BOM before shebang line
- detect-project-worktree.test.js: skip entire file on Windows since
  tests invoke bash scripts directly

Generated with [Claude Code](https://claude.ai/code)
via [Happy](https://happy.engineering)

Co-Authored-By: Claude <noreply@anthropic.com>
Co-Authored-By: Happy <yesreply@happy.engineering>

---------

Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Happy <yesreply@happy.engineering>
2026-03-20 03:37:21 -07:00
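The character-code shebang stripping mentioned in the commit above can be sketched like this (a minimal version; the real `stripShebang` in validators.test.js may differ):

```javascript
// Sketch of BOM-aware shebang stripping using character codes instead of a regex.
function stripShebang(source) {
  let start = 0;
  if (source.charCodeAt(0) === 0xfeff) start = 1;   // skip a UTF-8 BOM if present
  if (
    source.charCodeAt(start) === 0x23 /* '#' */ &&
    source.charCodeAt(start + 1) === 0x21 /* '!' */
  ) {
    const newline = source.indexOf('\n', start);
    return newline === -1 ? '' : source.slice(newline + 1);
  }
  return source.slice(start);
}
```

A regex anchored at `^#!` fails here because the BOM occupies position 0, which is exactly the bug the commit describes.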
Affaan Mustafa
162236f463 fix: normalize bash metadata paths on windows 2026-03-20 03:29:45 -07:00
Affaan Mustafa
04ad4737de fix: stabilize windows hook tests 2026-03-20 03:23:37 -07:00
Affaan Mustafa
8ebb47bdd1 fix: normalize windows bash test harness 2026-03-20 03:15:05 -07:00
Affaan Mustafa
e70c43bcd4 fix: harden windows CI tests and markdown lint 2026-03-20 03:03:57 -07:00
Affaan Mustafa
cbccb7fdc0 fix: strip validator shebangs on Windows CRLF checkouts 2026-03-20 02:44:25 -07:00
Affaan Mustafa
a2df9397ff fix: resolve ESLint errors and add npx command support in hook tests
Remove unused loadInstallManifests import and prefix unused result
variable with underscore in selective-install tests. Add npx as an
approved command prefix in hook validation tests.
2026-03-20 02:11:12 -07:00
Affaan Mustafa
47f508ec21 Revert "Add Kiro IDE support (.kiro/) (#548)"
This reverts commit ce828c1c3c.
2026-03-20 01:58:19 -07:00
Himanshu Sharma
ce828c1c3c Add Kiro IDE support (.kiro/) (#548)
Co-authored-by: Sungmin Hong <hsungmin@amazon.com>
2026-03-20 01:50:35 -07:00
Ofek Gabay
c8f631b046 feat: add block-no-verify hook for Claude Code and Cursor (#649)
Adds npx block-no-verify@1.1.2 as a PreToolUse Bash hook in hooks/hooks.json
and a beforeShellExecution hook in .cursor/hooks.json to prevent AI agents
from bypassing git hooks via the hook-bypass flag.

This closes the last enforcement gap in the ECC security stack — the bypass
flag silently skips pre-commit, commit-msg, and pre-push hooks.

Closes #648

Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-20 01:50:31 -07:00
Affaan Mustafa
8511d84042 feat(skills): add rules-distill skill (rebased #561) (#678)
* feat(skills): add rules-distill — extract cross-cutting principles from skills into rules

Applies the skill-stocktake pattern to rules maintenance:
scan skills → extract shared principles → propose rule changes.

Key design decisions:
- Deterministic collection (scan scripts) + LLM judgment (cross-read & verdict)
- 6 verdict types: Append, Revise, New Section, New File, Already Covered, Too Specific
- Anti-abstraction safeguard: 2+ skills evidence, actionable behavior test, violation risk
- Rules full text passed to LLM (no grep pre-filter) for accurate matching
- Never modifies rules automatically — always requires user approval

* fix(skills): address review feedback for rules-distill

Fixes raised by CodeRabbit, Greptile, and cubic:

- Add Prerequisites section documenting skill-stocktake dependency
- Add fallback command when skill-stocktake is not installed
- Fix shell quoting: add IFS= and -r to while-read loops
- Replace hardcoded paths with env var placeholders ($CLAUDE_RULES_DIR, $SKILL_STOCKTAKE_DIR)
- Add json language identifier to code blocks
- Add "How It Works" parent heading for Phase 1/2/3
- Add "Example" section with end-to-end run output
- Add revision.reason/before/after fields to output schema for Revise verdict
- Document timestamp format (date -u +%Y-%m-%dT%H:%M:%SZ)
- Document candidate-id format (kebab-case from principle)
- Use concrete examples in results.json schema

* fix(skills): remove skill-stocktake dependency, add self-contained scripts

Address P1 review feedback:
- Add scan-skills.sh and scan-rules.sh directly in rules-distill/scripts/
  (no external dependency on skill-stocktake)
- Remove Prerequisites section (no longer needed)
- Add cross-batch merge step to prevent 2+ skills requirement
  from being silently broken across batch boundaries
- Fix nested triple-backtick fences (use quadruple backticks)
- Remove head -100 cap (silent truncation)
- Rename "When to Activate" → "When to Use" (ECC standard)
- Remove unnecessary env var placeholders (SKILL.md is a prompt, not a script)

* fix: update skill/command counts in README.md and AGENTS.md

rules-distill added 1 skill + 1 command:
- skills: 108 → 109
- commands: 57 → 58

Updates all count references to pass CI catalog validation.

* fix(skills): address Servitor review feedback for rules-distill

1. Rename SKILL_STOCKTAKE_* env vars to RULES_DISTILL_* for consistency
2. Remove unnecessary observation counting (use_7d/use_30d) from scan-skills.sh
3. Fix header comment: scan.sh → scan-skills.sh
4. Use jq for JSON construction in scan-rules.sh to properly escape
   headings containing special characters (", \)

* fix(skills): address CodeRabbit review — portability and scan scope

1. scan-rules.sh: use jq for error JSON output (proper escaping)
2. scan-rules.sh: replace GNU-only sort -z with portable sort (BSD compat)
3. scan-rules.sh: fix pipefail crash on files without H2 headings
4. scan-skills.sh: scan only SKILL.md files (skip learned/*.md and
   auxiliary docs that lack frontmatter)
5. scan-skills.sh: add portable get_mtime helper (GNU stat/date
   fallback to BSD stat/date)

* fix: sync catalog counts with filesystem (27 agents, 114 skills, 59 commands)

---------

Co-authored-by: Tatsuya Shimomoto <shimo4228@gmail.com>
2026-03-20 01:44:55 -07:00
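The portable mtime helper mentioned in the last review pass above presumably follows the usual GNU-then-BSD fallback; a minimal sketch (exact flags in scan-skills.sh may differ):

```shell
# Portable file-mtime helper: GNU stat (Linux) uses -c %Y,
# BSD/macOS stat uses -f %m. Try GNU first, fall back to BSD.
get_mtime() {
  stat -c %Y "$1" 2>/dev/null || stat -f %m "$1"
}
```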
dependabot[bot]
8a57894394 chore(deps-dev): bump flatted (#675)
Bumps the npm_and_yarn group with 1 update in the / directory: [flatted](https://github.com/WebReflection/flatted).


Updates `flatted` from 3.3.3 to 3.4.2
- [Commits](https://github.com/WebReflection/flatted/compare/v3.3.3...v3.4.2)

---
updated-dependencies:
- dependency-name: flatted
  dependency-version: 3.4.2
  dependency-type: indirect
  dependency-group: npm_and_yarn
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-03-20 01:42:19 -07:00
Affaan Mustafa
68484da2fc fix: auto-detect ECC root from plugin cache when CLAUDE_PLUGIN_ROOT is unset (#547) (#691)
When ECC is installed as a Claude Code plugin via the marketplace,
scripts live in the plugin cache (~/.claude/plugins/cache/...) but
commands fallback to ~/.claude/ which doesn't have the scripts.

Add resolve-ecc-root.js with a 3-step fallback chain:
  1. CLAUDE_PLUGIN_ROOT env var (existing)
  2. Standard install at ~/.claude/ (existing)
  3. NEW: auto-scan the plugin cache directory

Update sessions.md and skill-health.md commands to use the new
inline resolver. Includes 15 tests covering all fallback paths
including env var priority, standard install, cache discovery,
and the compact INLINE_RESOLVE used in command .md files.
2026-03-20 01:38:15 -07:00
Affaan Mustafa
0b0b66c02f feat: agent compression, inspection logic, governance hooks (#491, #485, #482) (#688)
Implements three roadmap features:

- Agent description compression (#491): New `agent-compress` module with
  catalog/summary/full compression modes and lazy-loading. Reduces ~26k
  token agent descriptions to ~2-3k catalog entries for context efficiency.

- Inspection logic (#485): New `inspection` module that detects recurring
  failure patterns in skill_runs. Groups by skill + normalized failure
  reason, generates structured reports with suggested remediation actions.
  Configurable threshold (default: 3 failures).

- Governance event capture hook (#482): PreToolUse/PostToolUse hook that
  detects secrets, policy violations, approval-required commands, and
  elevated privilege usage. Gated behind ECC_GOVERNANCE_CAPTURE=1 flag.
  Writes to governance_events table via JSON-line stderr output.

59 new tests (16 + 16 + 27), all passing.
2026-03-20 01:38:13 -07:00
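The flag-gated governance capture above might look roughly like this; the event shape and `table` field are assumptions, with only the env flag, the JSON-line stderr channel, and the `governance_events` target taken from the commit message:

```javascript
// Sketch of governance event capture, gated behind ECC_GOVERNANCE_CAPTURE=1.
function captureGovernanceEvent(event) {
  if (process.env.ECC_GOVERNANCE_CAPTURE !== '1') return false; // off by default
  // Emit a JSON line on stderr; a downstream consumer writes it
  // into the governance_events table.
  process.stderr.write(
    JSON.stringify({ table: 'governance_events', ...event }) + '\n'
  );
  return true;
}
```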
Affaan Mustafa
28de7cc420 fix: strip ANSI escape codes from session persistence hooks (#642) (#684)
Windows terminals emit control sequences (cursor movement, screen
clearing) that leaked into session.tmp files and were injected
verbatim into Claude's context on the next session start.

Add a comprehensive stripAnsi() to utils.js that handles CSI, OSC,
charset selection, and bare ESC sequences. Apply it in session-end.js
(when extracting user messages from the transcript) and in
session-start.js (safety net before injecting session content).
2026-03-20 01:38:11 -07:00
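A minimal sketch of ANSI stripping covering the sequence classes named above (CSI, OSC, charset selection, bare ESC); the real `stripAnsi()` in utils.js is likely more thorough:

```javascript
// Strip ANSI escape sequences: OSC first (terminated by BEL or ESC \),
// then CSI, then charset selection, then any remaining bare ESC pairs.
function stripAnsi(text) {
  return text
    .replace(/\u001b\][^\u0007\u001b]*(?:\u0007|\u001b\\)/g, '') // OSC
    .replace(/\u001b\[[0-9;?]*[ -/]*[@-~]/g, '')                 // CSI
    .replace(/\u001b[()][0-9A-B]/g, '')                          // charset select
    .replace(/\u001b./g, '');                                    // bare ESC + char
}
```

Order matters: OSC sequences can contain bytes that look like CSI parameters, so they are removed first.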
Affaan Mustafa
9a478ad676 feat(rules): add Rust language rules (rebased #660) (#686)
* feat(rules): add Rust coding style, hooks, and patterns rules

Add language-specific rules for Rust extending the common rule set:
- coding-style.md: rustfmt, clippy, ownership idioms, error handling,
  iterator patterns, module organization, visibility
- hooks.md: PostToolUse hooks for rustfmt, clippy, cargo check
- patterns.md: trait-based repository, newtype, enum state machines,
  builder, sealed traits, API response envelope

Rules reference existing rust-patterns skill for deep content.

Generated with [Claude Code](https://claude.ai/code)
via [Happy](https://happy.engineering)

Co-Authored-By: Claude <noreply@anthropic.com>
Co-Authored-By: Happy <yesreply@happy.engineering>

* feat(rules): add Rust testing and security rules

Add remaining Rust language-specific rules:
- testing.md: cargo test, rstest parameterized tests, mockall mocking
  with mock! macro, tokio async tests, cargo-llvm-cov coverage
- security.md: secrets via env vars, parameterized SQL with sqlx,
  parse-don't-validate input validation, unsafe code audit requirements,
  cargo-audit dependency scanning, proper HTTP error status codes

Generated with [Claude Code](https://claude.ai/code)
via [Happy](https://happy.engineering)

Co-Authored-By: Claude <noreply@anthropic.com>
Co-Authored-By: Happy <yesreply@happy.engineering>

* fix(rules): address review feedback on Rust rules

Fixes from Copilot, Greptile, Cubic, and CodeRabbit reviews:
- Add missing imports: use std::borrow::Cow, use anyhow::Context
- Use anyhow::Result<T> consistently (patterns.md, security.md)
- Change sqlx placeholder from ? to $1 (Postgres is most common)
- Remove Cargo.lock from hooks.md paths (auto-generated file)
- Fix tokio::test to show attribute form #[tokio::test]
- Fix mockall mock! name collision, wrap in #[cfg(test)] mod tests
- Fix --test target to match file layout (api_test, not integration)

Generated with [Claude Code](https://claude.ai/code)
via [Happy](https://happy.engineering)

Co-Authored-By: Claude <noreply@anthropic.com>
Co-Authored-By: Happy <yesreply@happy.engineering>

* fix: update catalog counts in README.md and AGENTS.md

Update documented counts to match actual repository state after rebase:
- Skills: 109 → 113 (new skills merged to main)
- Commands: 57 → 58 (new command merged to main)

Generated with [Claude Code](https://claude.ai/code)
via [Happy](https://happy.engineering)

Co-Authored-By: Claude <noreply@anthropic.com>
Co-Authored-By: Happy <yesreply@happy.engineering>

---------

Co-authored-by: Chris Yau <chris@diveanddev.com>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Happy <yesreply@happy.engineering>
2026-03-20 01:19:42 -07:00
Affaan Mustafa
52e949a85b fix: sync catalog counts with filesystem (27 agents, 113 skills, 58 commands) (#693) 2026-03-20 01:19:36 -07:00
Affaan Mustafa
07f6156d8a feat: implement --with/--without selective install flags (#679)
Add agent: and skill: component families to the install component
catalog, enabling fine-grained selective install via CLI flags:

  ecc install --profile developer --with lang:typescript --without capability:orchestration
  ecc install --with lang:python --with agent:security-reviewer

Changes:
- Add agent: family (9 entries) and skill: family (10 entries) to
  manifests/install-components.json for granular component addressing
- Update install-components.schema.json to accept agent: and skill:
  family prefixes
- Register agent and skill family prefixes in COMPONENT_FAMILY_PREFIXES
  (scripts/lib/install-manifests.js)
- Add 41 comprehensive tests covering CLI parsing, request normalization,
  component catalog validation, plan resolution, target filtering,
  error handling, and end-to-end install with --with/--without flags

Closes #470
2026-03-20 00:43:32 -07:00
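The --with/--without parsing above can be sketched as below. `parseSelectionFlags` and the request shape are assumed names for illustration; the family prefixes come from the commit message:

```javascript
// Sketch of --with/--without flag parsing into a component selection request.
const FAMILY_PREFIXES = ['lang:', 'capability:', 'agent:', 'skill:'];

function parseSelectionFlags(argv) {
  const request = { with: [], without: [] };
  for (let i = 0; i < argv.length; i++) {
    const flag = argv[i];
    if (flag !== '--with' && flag !== '--without') continue;
    const value = argv[++i]; // consume the flag's argument
    if (!value || !FAMILY_PREFIXES.some((p) => value.startsWith(p))) {
      throw new Error(`expected family-prefixed component after ${flag}`);
    }
    request[flag === '--with' ? 'with' : 'without'].push(value);
  }
  return request;
}
```

Validating the family prefix at parse time surfaces typos like `--with typescript` before plan resolution runs.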
66 changed files with 7758 additions and 248 deletions


@@ -0,0 +1,442 @@
---
name: everything-claude-code-conventions
description: Development conventions and patterns for everything-claude-code. JavaScript project with conventional commits.
---
# Everything Claude Code Conventions
> Generated from [affaan-m/everything-claude-code](https://github.com/affaan-m/everything-claude-code) on 2026-03-20
## Overview
This skill teaches Claude the development patterns and conventions used in everything-claude-code.
## Tech Stack
- **Primary Language**: JavaScript
- **Architecture**: hybrid module organization
- **Test Location**: separate
## When to Use This Skill
Activate this skill when:
- Making changes to this repository
- Adding new features following established patterns
- Writing tests that match project conventions
- Creating commits with proper message format
## Commit Conventions
Follow these commit message conventions based on 500 analyzed commits.
### Commit Style: Conventional Commits
### Prefixes Used
- `fix`
- `test`
- `feat`
- `docs`
### Message Guidelines
- Average message length: ~65 characters
- Keep first line concise and descriptive
- Use imperative mood ("Add feature" not "Added feature")
*Commit message example*
```text
feat(rules): add C# language support
```
*Commit message example*
```text
chore(deps-dev): bump flatted (#675)
```
*Commit message example*
```text
fix: auto-detect ECC root from plugin cache when CLAUDE_PLUGIN_ROOT is unset (#547) (#691)
```
*Commit message example*
```text
docs: add Antigravity setup and usage guide (#552)
```
*Commit message example*
```text
merge: PR #529 — feat(skills): add documentation-lookup, bun-runtime, nextjs-turbopack; feat(agents): add rust-reviewer
```
*Commit message example*
```text
Revert "Add Kiro IDE support (.kiro/) (#548)"
```
*Commit message example*
```text
Add Kiro IDE support (.kiro/) (#548)
```
*Commit message example*
```text
feat: add block-no-verify hook for Claude Code and Cursor (#649)
```
## Architecture
### Project Structure: Single Package
This project uses **hybrid** module organization.
### Configuration Files
- `.github/workflows/ci.yml`
- `.github/workflows/maintenance.yml`
- `.github/workflows/monthly-metrics.yml`
- `.github/workflows/release.yml`
- `.github/workflows/reusable-release.yml`
- `.github/workflows/reusable-test.yml`
- `.github/workflows/reusable-validate.yml`
- `.opencode/package.json`
- `.opencode/tsconfig.json`
- `.prettierrc`
- `eslint.config.js`
- `package.json`
### Guidelines
- This project uses a hybrid organization
- Follow existing patterns when adding new code
## Code Style
### Language: JavaScript
### Naming Conventions
| Element | Convention |
|---------|------------|
| Files | camelCase |
| Functions | camelCase |
| Classes | PascalCase |
| Constants | SCREAMING_SNAKE_CASE |
### Import Style: Relative Imports
### Export Style: Mixed Style
*Preferred import style*
```typescript
// Use relative imports
import { Button } from '../components/Button'
import { useAuth } from './hooks/useAuth'
```
## Testing
### Test Framework
No specific test framework detected — use the repository's existing test patterns.
### File Pattern: `*.test.js`
### Test Types
- **Unit tests**: Test individual functions and components in isolation
- **Integration tests**: Test interactions between multiple components/services
### Coverage
This project has coverage reporting configured. Aim for 80%+ coverage.
## Error Handling
### Error Handling Style: Try-Catch Blocks
*Standard error handling pattern*
```typescript
try {
const result = await riskyOperation()
return result
} catch (error) {
console.error('Operation failed:', error)
throw new Error('User-friendly message')
}
```
## Common Workflows
These workflows were detected from analyzing commit patterns.
### Database Migration
Database schema changes with migration files
**Frequency**: ~2 times per month
**Steps**:
1. Create migration file
2. Update schema definitions
3. Generate/update types
**Files typically involved**:
- `**/schema.*`
- `migrations/*`
**Example commit sequence**:
```
feat: implement --with/--without selective install flags (#679)
fix: sync catalog counts with filesystem (27 agents, 113 skills, 58 commands) (#693)
feat(rules): add Rust language rules (rebased #660) (#686)
```
### Feature Development
Standard feature implementation workflow
**Frequency**: ~22 times per month
**Steps**:
1. Add feature implementation
2. Add tests for feature
3. Update documentation
**Files typically involved**:
- `manifests/*`
- `schemas/*`
- `**/*.test.*`
- `**/api/**`
**Example commit sequence**:
```
feat(skills): add documentation-lookup, bun-runtime, nextjs-turbopack; feat(agents): add rust-reviewer
docs(skills): align documentation-lookup with CONTRIBUTING template; add cross-harness (Codex/Cursor) skill copies
fix: address PR review — skill template (When to use, How it works, Examples), bun.lock, next build note, rust-reviewer CI note, doc-lookup privacy/uncertainty
```
### Add Language Rules
Adds a new programming language to the rules system, including coding style, hooks, patterns, security, and testing guidelines.
**Frequency**: ~2 times per month
**Steps**:
1. Create a new directory under rules/{language}/
2. Add coding-style.md, hooks.md, patterns.md, security.md, and testing.md files with language-specific content
3. Optionally reference or link to related skills
**Files typically involved**:
- `rules/*/coding-style.md`
- `rules/*/hooks.md`
- `rules/*/patterns.md`
- `rules/*/security.md`
- `rules/*/testing.md`
**Example commit sequence**:
```
Create a new directory under rules/{language}/
Add coding-style.md, hooks.md, patterns.md, security.md, and testing.md files with language-specific content
Optionally reference or link to related skills
```
### Add New Skill
Adds a new skill to the system, documenting its workflow, triggers, and usage, often with supporting scripts.
**Frequency**: ~4 times per month
**Steps**:
1. Create a new directory under skills/{skill-name}/
2. Add SKILL.md with documentation (When to Use, How It Works, Examples, etc.)
3. Optionally add scripts or supporting files under skills/{skill-name}/scripts/
4. Address review feedback and iterate on documentation
**Files typically involved**:
- `skills/*/SKILL.md`
- `skills/*/scripts/*.sh`
- `skills/*/scripts/*.js`
**Example commit sequence**:
```
Create a new directory under skills/{skill-name}/
Add SKILL.md with documentation (When to Use, How It Works, Examples, etc.)
Optionally add scripts or supporting files under skills/{skill-name}/scripts/
Address review feedback and iterate on documentation
```
### Add New Agent
Adds a new agent to the system for code review, build resolution, or other automated tasks.
**Frequency**: ~2 times per month
**Steps**:
1. Create a new agent markdown file under agents/{agent-name}.md
2. Register the agent in AGENTS.md
3. Optionally update README.md and docs/COMMAND-AGENT-MAP.md
**Files typically involved**:
- `agents/*.md`
- `AGENTS.md`
- `README.md`
- `docs/COMMAND-AGENT-MAP.md`
**Example commit sequence**:
```
Create a new agent markdown file under agents/{agent-name}.md
Register the agent in AGENTS.md
Optionally update README.md and docs/COMMAND-AGENT-MAP.md
```
### Add New Command
Adds a new command to the system, often paired with a backing skill.
**Frequency**: ~1 time per month
**Steps**:
1. Create a new markdown file under commands/{command-name}.md
2. Optionally add or update a backing skill under skills/{skill-name}/SKILL.md
**Files typically involved**:
- `commands/*.md`
- `skills/*/SKILL.md`
**Example commit sequence**:
```
Create a new markdown file under commands/{command-name}.md
Optionally add or update a backing skill under skills/{skill-name}/SKILL.md
```
### Sync Catalog Counts
Synchronizes the documented counts of agents, skills, and commands in AGENTS.md and README.md with the actual repository state.
**Frequency**: ~3 times per month
**Steps**:
1. Update agent, skill, and command counts in AGENTS.md
2. Update the same counts in README.md (quick-start, comparison table, etc.)
3. Optionally update other documentation files
**Files typically involved**:
- `AGENTS.md`
- `README.md`
**Example commit sequence**:
```
Update agent, skill, and command counts in AGENTS.md
Update the same counts in README.md (quick-start, comparison table, etc.)
Optionally update other documentation files
```
### Add Cross Harness Skill Copies
Adds skill copies for different agent harnesses (e.g., Codex, Cursor, Antigravity) to ensure compatibility across platforms.
**Frequency**: ~2 times per month
**Steps**:
1. Copy or adapt SKILL.md to .agents/skills/{skill}/SKILL.md and/or .cursor/skills/{skill}/SKILL.md
2. Optionally add harness-specific openai.yaml or config files
3. Address review feedback to align with CONTRIBUTING template
**Files typically involved**:
- `.agents/skills/*/SKILL.md`
- `.cursor/skills/*/SKILL.md`
- `.agents/skills/*/agents/openai.yaml`
**Example commit sequence**:
```
Copy or adapt SKILL.md to .agents/skills/{skill}/SKILL.md and/or .cursor/skills/{skill}/SKILL.md
Optionally add harness-specific openai.yaml or config files
Address review feedback to align with CONTRIBUTING template
```
### Add Or Update Hook
Adds or updates git or bash hooks to enforce workflow, quality, or security policies.
**Frequency**: ~1 time per month
**Steps**:
1. Add or update hook scripts in hooks/ or scripts/hooks/
2. Register the hook in hooks/hooks.json or similar config
3. Optionally add or update tests in tests/hooks/
**Files typically involved**:
- `hooks/*.hook`
- `hooks/hooks.json`
- `scripts/hooks/*.js`
- `tests/hooks/*.test.js`
- `.cursor/hooks.json`
**Example commit sequence**:
```
Add or update hook scripts in hooks/ or scripts/hooks/
Register the hook in hooks/hooks.json or similar config
Optionally add or update tests in tests/hooks/
```
### Address Review Feedback
Addresses code review feedback by updating documentation, scripts, or configuration for clarity, correctness, or convention alignment.
**Frequency**: ~4 times per month
**Steps**:
1. Edit SKILL.md, agent, or command files to address reviewer comments
2. Update examples, headings, or configuration as requested
3. Iterate until all review feedback is resolved
**Files typically involved**:
- `skills/*/SKILL.md`
- `agents/*.md`
- `commands/*.md`
- `.agents/skills/*/SKILL.md`
- `.cursor/skills/*/SKILL.md`
**Example commit sequence**:
```
Edit SKILL.md, agent, or command files to address reviewer comments
Update examples, headings, or configuration as requested
Iterate until all review feedback is resolved
```
## Best Practices
Based on analysis of the codebase, follow these practices:
### Do
- Use conventional commit format (feat:, fix:, etc.)
- Follow *.test.js naming pattern
- Use camelCase for file names
- Prefer mixed exports
### Don't
- Don't write vague commit messages
- Don't skip tests for new features
- Don't deviate from established patterns without discussion
---
*This skill was auto-generated by [ECC Tools](https://ecc.tools). Review and customize as needed for your team.*


@@ -0,0 +1,6 @@
interface:
  display_name: "Everything Claude Code"
  short_description: "Repo-specific patterns and workflows for everything-claude-code"
  default_prompt: "Use the everything-claude-code repo skill to follow existing architecture, testing, and workflow conventions."
policy:
  allow_implicit_invocation: true


@@ -0,0 +1,39 @@
---
name: add-language-rules
description: Workflow command scaffold for add-language-rules in everything-claude-code.
allowed_tools: ["Bash", "Read", "Write", "Grep", "Glob"]
---
# /add-language-rules
Use this workflow when working on **add-language-rules** in `everything-claude-code`.
## Goal
Adds a new programming language to the rules system, including coding style, hooks, patterns, security, and testing guidelines.
## Common Files
- `rules/*/coding-style.md`
- `rules/*/hooks.md`
- `rules/*/patterns.md`
- `rules/*/security.md`
- `rules/*/testing.md`
## Suggested Sequence
1. Understand the current state and failure mode before editing.
2. Make the smallest coherent change that satisfies the workflow goal.
3. Run the most relevant verification for touched files.
4. Summarize what changed and what still needs review.
## Typical Commit Signals
- Create a new directory under rules/{language}/
- Add coding-style.md, hooks.md, patterns.md, security.md, and testing.md files with language-specific content
- Optionally reference or link to related skills
## Notes
- Treat this as a scaffold, not a hard-coded script.
- Update the command if the workflow evolves materially.


@@ -0,0 +1,36 @@
---
name: database-migration
description: Workflow command scaffold for database-migration in everything-claude-code.
allowed_tools: ["Bash", "Read", "Write", "Grep", "Glob"]
---
# /database-migration
Use this workflow when working on **database-migration** in `everything-claude-code`.
## Goal
Database schema changes with migration files
## Common Files
- `**/schema.*`
- `migrations/*`
## Suggested Sequence
1. Understand the current state and failure mode before editing.
2. Make the smallest coherent change that satisfies the workflow goal.
3. Run the most relevant verification for touched files.
4. Summarize what changed and what still needs review.
## Typical Commit Signals
- Create migration file
- Update schema definitions
- Generate/update types
## Notes
- Treat this as a scaffold, not a hard-coded script.
- Update the command if the workflow evolves materially.


@@ -0,0 +1,38 @@
---
name: feature-development
description: Workflow command scaffold for feature-development in everything-claude-code.
allowed_tools: ["Bash", "Read", "Write", "Grep", "Glob"]
---
# /feature-development
Use this workflow when working on **feature-development** in `everything-claude-code`.
## Goal
Standard feature implementation workflow
## Common Files
- `manifests/*`
- `schemas/*`
- `**/*.test.*`
- `**/api/**`
## Suggested Sequence
1. Understand the current state and failure mode before editing.
2. Make the smallest coherent change that satisfies the workflow goal.
3. Run the most relevant verification for touched files.
4. Summarize what changed and what still needs review.
## Typical Commit Signals
- Add feature implementation
- Add tests for feature
- Update documentation
## Notes
- Treat this as a scaffold, not a hard-coded script.
- Update the command if the workflow evolves materially.

.claude/ecc-tools.json Normal file

@@ -0,0 +1,334 @@
{
"version": "1.3",
"schemaVersion": "1.0",
"generatedBy": "ecc-tools",
"generatedAt": "2026-03-20T12:07:36.496Z",
"repo": "https://github.com/affaan-m/everything-claude-code",
"profiles": {
"requested": "full",
"recommended": "full",
"effective": "full",
"requestedAlias": "full",
"recommendedAlias": "full",
"effectiveAlias": "full"
},
"requestedProfile": "full",
"profile": "full",
"recommendedProfile": "full",
"effectiveProfile": "full",
"tier": "enterprise",
"requestedComponents": [
"repo-baseline",
"workflow-automation",
"security-audits",
"research-tooling",
"team-rollout",
"governance-controls"
],
"selectedComponents": [
"repo-baseline",
"workflow-automation",
"security-audits",
"research-tooling",
"team-rollout",
"governance-controls"
],
"requestedAddComponents": [],
"requestedRemoveComponents": [],
"blockedRemovalComponents": [],
"tierFilteredComponents": [],
"requestedRootPackages": [
"runtime-core",
"workflow-pack",
"agentshield-pack",
"research-pack",
"team-config-sync",
"enterprise-controls"
],
"selectedRootPackages": [
"runtime-core",
"workflow-pack",
"agentshield-pack",
"research-pack",
"team-config-sync",
"enterprise-controls"
],
"requestedPackages": [
"runtime-core",
"workflow-pack",
"agentshield-pack",
"research-pack",
"team-config-sync",
"enterprise-controls"
],
"requestedAddPackages": [],
"requestedRemovePackages": [],
"selectedPackages": [
"runtime-core",
"workflow-pack",
"agentshield-pack",
"research-pack",
"team-config-sync",
"enterprise-controls"
],
"packages": [
"runtime-core",
"workflow-pack",
"agentshield-pack",
"research-pack",
"team-config-sync",
"enterprise-controls"
],
"blockedRemovalPackages": [],
"tierFilteredRootPackages": [],
"tierFilteredPackages": [],
"conflictingPackages": [],
"dependencyGraph": {
"runtime-core": [],
"workflow-pack": [
"runtime-core"
],
"agentshield-pack": [
"workflow-pack"
],
"research-pack": [
"workflow-pack"
],
"team-config-sync": [
"runtime-core"
],
"enterprise-controls": [
"team-config-sync"
]
},
"resolutionOrder": [
"runtime-core",
"workflow-pack",
"agentshield-pack",
"research-pack",
"team-config-sync",
"enterprise-controls"
],
"requestedModules": [
"runtime-core",
"workflow-pack",
"agentshield-pack",
"research-pack",
"team-config-sync",
"enterprise-controls"
],
"selectedModules": [
"runtime-core",
"workflow-pack",
"agentshield-pack",
"research-pack",
"team-config-sync",
"enterprise-controls"
],
"modules": [
"runtime-core",
"workflow-pack",
"agentshield-pack",
"research-pack",
"team-config-sync",
"enterprise-controls"
],
"managedFiles": [
".claude/skills/everything-claude-code/SKILL.md",
".agents/skills/everything-claude-code/SKILL.md",
".agents/skills/everything-claude-code/agents/openai.yaml",
".claude/identity.json",
".codex/config.toml",
".codex/AGENTS.md",
".codex/agents/explorer.toml",
".codex/agents/reviewer.toml",
".codex/agents/docs-researcher.toml",
".claude/homunculus/instincts/inherited/everything-claude-code-instincts.yaml",
".claude/rules/everything-claude-code-guardrails.md",
".claude/research/everything-claude-code-research-playbook.md",
".claude/team/everything-claude-code-team-config.json",
".claude/enterprise/controls.md",
".claude/commands/database-migration.md",
".claude/commands/feature-development.md",
".claude/commands/add-language-rules.md"
],
"packageFiles": {
"runtime-core": [
".claude/skills/everything-claude-code/SKILL.md",
".agents/skills/everything-claude-code/SKILL.md",
".agents/skills/everything-claude-code/agents/openai.yaml",
".claude/identity.json",
".codex/config.toml",
".codex/AGENTS.md",
".codex/agents/explorer.toml",
".codex/agents/reviewer.toml",
".codex/agents/docs-researcher.toml",
".claude/homunculus/instincts/inherited/everything-claude-code-instincts.yaml"
],
"agentshield-pack": [
".claude/rules/everything-claude-code-guardrails.md"
],
"research-pack": [
".claude/research/everything-claude-code-research-playbook.md"
],
"team-config-sync": [
".claude/team/everything-claude-code-team-config.json"
],
"enterprise-controls": [
".claude/enterprise/controls.md"
],
"workflow-pack": [
".claude/commands/database-migration.md",
".claude/commands/feature-development.md",
".claude/commands/add-language-rules.md"
]
},
"moduleFiles": {
"runtime-core": [
".claude/skills/everything-claude-code/SKILL.md",
".agents/skills/everything-claude-code/SKILL.md",
".agents/skills/everything-claude-code/agents/openai.yaml",
".claude/identity.json",
".codex/config.toml",
".codex/AGENTS.md",
".codex/agents/explorer.toml",
".codex/agents/reviewer.toml",
".codex/agents/docs-researcher.toml",
".claude/homunculus/instincts/inherited/everything-claude-code-instincts.yaml"
],
"agentshield-pack": [
".claude/rules/everything-claude-code-guardrails.md"
],
"research-pack": [
".claude/research/everything-claude-code-research-playbook.md"
],
"team-config-sync": [
".claude/team/everything-claude-code-team-config.json"
],
"enterprise-controls": [
".claude/enterprise/controls.md"
],
"workflow-pack": [
".claude/commands/database-migration.md",
".claude/commands/feature-development.md",
".claude/commands/add-language-rules.md"
]
},
"files": [
{
"moduleId": "runtime-core",
"path": ".claude/skills/everything-claude-code/SKILL.md",
"description": "Repository-specific Claude Code skill generated from git history."
},
{
"moduleId": "runtime-core",
"path": ".agents/skills/everything-claude-code/SKILL.md",
"description": "Codex-facing copy of the generated repository skill."
},
{
"moduleId": "runtime-core",
"path": ".agents/skills/everything-claude-code/agents/openai.yaml",
"description": "Codex skill metadata so the repo skill appears cleanly in the skill interface."
},
{
"moduleId": "runtime-core",
"path": ".claude/identity.json",
"description": "Suggested identity.json baseline derived from repository conventions."
},
{
"moduleId": "runtime-core",
"path": ".codex/config.toml",
"description": "Repo-local Codex MCP and multi-agent baseline aligned with ECC defaults."
},
{
"moduleId": "runtime-core",
"path": ".codex/AGENTS.md",
"description": "Codex usage guide that points at the generated repo skill and workflow bundle."
},
{
"moduleId": "runtime-core",
"path": ".codex/agents/explorer.toml",
"description": "Read-only explorer role config for Codex multi-agent work."
},
{
"moduleId": "runtime-core",
"path": ".codex/agents/reviewer.toml",
"description": "Read-only reviewer role config focused on correctness and security."
},
{
"moduleId": "runtime-core",
"path": ".codex/agents/docs-researcher.toml",
"description": "Read-only docs researcher role config for API verification."
},
{
"moduleId": "runtime-core",
"path": ".claude/homunculus/instincts/inherited/everything-claude-code-instincts.yaml",
"description": "Continuous-learning instincts derived from repository patterns."
},
{
"moduleId": "agentshield-pack",
"path": ".claude/rules/everything-claude-code-guardrails.md",
"description": "Repository guardrails distilled from analysis for security and workflow review."
},
{
"moduleId": "research-pack",
"path": ".claude/research/everything-claude-code-research-playbook.md",
"description": "Research workflow playbook for source attribution and long-context tasks."
},
{
"moduleId": "team-config-sync",
"path": ".claude/team/everything-claude-code-team-config.json",
"description": "Team config scaffold that points collaborators at the shared ECC bundle."
},
{
"moduleId": "enterprise-controls",
"path": ".claude/enterprise/controls.md",
"description": "Enterprise governance scaffold for approvals, audit posture, and escalation."
},
{
"moduleId": "workflow-pack",
"path": ".claude/commands/database-migration.md",
"description": "Workflow command scaffold for database-migration."
},
{
"moduleId": "workflow-pack",
"path": ".claude/commands/feature-development.md",
"description": "Workflow command scaffold for feature-development."
},
{
"moduleId": "workflow-pack",
"path": ".claude/commands/add-language-rules.md",
"description": "Workflow command scaffold for add-language-rules."
}
],
"workflows": [
{
"command": "database-migration",
"path": ".claude/commands/database-migration.md"
},
{
"command": "feature-development",
"path": ".claude/commands/feature-development.md"
},
{
"command": "add-language-rules",
"path": ".claude/commands/add-language-rules.md"
}
],
"adapters": {
"claudeCode": {
"skillPath": ".claude/skills/everything-claude-code/SKILL.md",
"identityPath": ".claude/identity.json",
"commandPaths": [
".claude/commands/database-migration.md",
".claude/commands/feature-development.md",
".claude/commands/add-language-rules.md"
]
},
"codex": {
"configPath": ".codex/config.toml",
"agentsGuidePath": ".codex/AGENTS.md",
"skillPath": ".agents/skills/everything-claude-code/SKILL.md"
}
}
}
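The `resolutionOrder` field above is a topological ordering of `dependencyGraph`: each package appears after everything it depends on. A depth-first sketch of how such an order can be derived follows; this is illustrative, not the actual ecc-tools implementation.

```javascript
// Derive an install order in which every package follows its dependencies.
// Depth-first postorder over the dependency graph shown above.
const dependencyGraph = {
  'runtime-core': [],
  'workflow-pack': ['runtime-core'],
  'agentshield-pack': ['workflow-pack'],
  'research-pack': ['workflow-pack'],
  'team-config-sync': ['runtime-core'],
  'enterprise-controls': ['team-config-sync'],
};

function resolutionOrder(graph) {
  const order = [];
  const seen = new Set();
  function visit(pkg) {
    if (seen.has(pkg)) return;
    seen.add(pkg);
    for (const dep of graph[pkg] || []) visit(dep); // dependencies first
    order.push(pkg);
  }
  Object.keys(graph).forEach(visit);
  return order;
}
```

For this graph the traversal places `runtime-core` first and `enterprise-controls` after `team-config-sync`, matching the manifest's `resolutionOrder`.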


@@ -0,0 +1,15 @@
# Enterprise Controls
This is a starter governance file for enterprise ECC deployments.
## Baseline
- Repository: https://github.com/affaan-m/everything-claude-code
- Recommended profile: full
- Keep install manifests, audit allowlists, and Codex baselines under review.
## Approval Expectations
- Security-sensitive workflow changes require explicit reviewer acknowledgement.
- Audit suppressions must include a reason and the narrowest viable matcher.
- Generated skills should be reviewed before broad rollout to teams.
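The suppression expectations above can be enforced mechanically. The entry shape (`rule`/`matcher`/`reason`) below is an assumption for illustration; adapt it to the real audit manifest format.

```javascript
// Sketch of a pre-merge check for audit suppressions: every entry needs
// a substantive reason and a matcher narrower than a global wildcard.
// The entry fields are hypothetical.
function validateSuppression(entry) {
  const errors = [];
  if (!entry.reason || entry.reason.trim().length < 10) {
    errors.push('suppression needs a substantive reason');
  }
  if (!entry.matcher || entry.matcher === '*' || entry.matcher === '**') {
    errors.push('matcher must be narrower than a global wildcard');
  }
  return errors;
}

const ok = validateSuppression({
  rule: 'no-shell-exec',
  matcher: 'scripts/hooks/session-start.js',
  reason: 'Hook must spawn git; reviewed in PR, re-audit quarterly.',
});
const bad = validateSuppression({ rule: 'no-shell-exec', matcher: '*', reason: '' });
```

Running a check like this in CI keeps suppressions narrow and auditable without relying on reviewer memory.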

.claude/identity.json Normal file

@@ -0,0 +1,14 @@
{
"version": "2.0",
"technicalLevel": "technical",
"preferredStyle": {
"verbosity": "minimal",
"codeComments": true,
"explanations": true
},
"domains": [
"javascript"
],
"suggestedBy": "ecc-tools-repo-analysis",
"createdAt": "2026-03-20T12:07:57.119Z"
}
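A consumer of this file might sanity-check it before use. The required fields below mirror the JSON above; the checker itself is a hypothetical sketch, not part of ecc-tools.

```javascript
// Sketch: validate an identity.json payload. Field expectations are
// taken from the example file; the helper is illustrative only.
function checkIdentity(identity) {
  const problems = [];
  if (identity.version !== '2.0') problems.push('unexpected version');
  if (typeof identity.technicalLevel !== 'string' || !identity.technicalLevel) {
    problems.push('technicalLevel must be a non-empty string');
  }
  if (!Array.isArray(identity.domains) || identity.domains.length === 0) {
    problems.push('domains must be a non-empty array');
  }
  return problems;
}

const identity = {
  version: '2.0',
  technicalLevel: 'technical',
  preferredStyle: { verbosity: 'minimal', codeComments: true, explanations: true },
  domains: ['javascript'],
};
```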


@@ -0,0 +1,21 @@
# Everything Claude Code Research Playbook
Use this when the task is documentation-heavy, source-sensitive, or requires broad repository context.
## Defaults
- Prefer primary documentation and direct source links.
- Include concrete dates when facts may change over time.
- Keep a short evidence trail for each recommendation or conclusion.
## Suggested Flow
1. Inspect local code and docs first.
2. Browse only for unstable or external facts.
3. Summarize findings with file paths, commands, or links.
## Repo Signals
- Primary language: JavaScript
- Framework: Not detected
- Workflows detected: 10


@@ -0,0 +1,34 @@
# Everything Claude Code Guardrails
Generated by ECC Tools from repository history. Review before treating it as a hard policy file.
## Commit Workflow
- Prefer `conventional` commit messaging with prefixes such as fix, test, feat, docs.
- Keep new changes aligned with the existing pull-request and review flow already present in the repo.
## Architecture
- Preserve the current `hybrid` module organization.
- Respect the current test layout: `separate`.
## Code Style
- Use `camelCase` file naming.
- Prefer `relative` imports and `mixed` exports.
## ECC Defaults
- Current recommended install profile: `full`.
- Validate risky config changes in PRs and keep the install manifest in source control.
## Detected Workflows
- database-migration: Database schema changes with migration files
- feature-development: Standard feature implementation workflow
- add-language-rules: Adds a new programming language to the rules system, including coding style, hooks, patterns, security, and testing guidelines.
## Review Reminder
- Regenerate this bundle when repository conventions materially change.
- Keep suppressions narrow and auditable.


@@ -1,97 +1,442 @@
# Everything Claude Code
Use this skill when working inside the `everything-claude-code` repository and you need repo-specific guidance instead of generic coding advice.
Optional companion instincts live at `.claude/homunculus/instincts/inherited/everything-claude-code-instincts.yaml` for teams using `continuous-learning-v2`.
## When to Use
Activate this skill when the task touches one or more of these areas:
- cross-platform parity across Claude Code, Cursor, Codex, and OpenCode
- hook scripts, hook docs, or hook tests
- skills, commands, agents, or rules that must stay synchronized across surfaces
- release work such as version bumps, changelog updates, or plugin metadata updates
- continuous-learning or instinct workflows inside this repository
## How It Works
### 1. Follow the repo's development contract
- Use conventional commits such as `feat:`, `fix:`, `docs:`, `test:`, `chore:`.
- Keep commit subjects concise and close to the repo norm of about 70 characters.
- Prefer camelCase for JavaScript and TypeScript module filenames.
- Use kebab-case for skill directories and command filenames.
- Keep test files on the existing `*.test.js` pattern.
### 2. Treat the root repo as the source of truth
Start from the root implementation, then mirror changes where they are intentionally shipped.
Typical mirror targets:
- `.cursor/`
- `.codex/`
- `.opencode/`
- `.agents/`
Do not assume every `.claude/` artifact needs a cross-platform copy. Only mirror files that are part of the shipped multi-platform surface.
### 3. Update hooks with tests and docs together
When changing hook behavior:
1. update `hooks/hooks.json` or the relevant script in `scripts/hooks/`
2. update matching tests in `tests/hooks/` or `tests/integration/`
3. update `hooks/README.md` if behavior or configuration changed
4. verify parity for `.cursor/hooks/` and `.opencode/plugins/` when applicable
### 4. Keep release metadata in sync
When preparing a release, verify the same version is reflected anywhere it is surfaced:
- `package.json`
- `.claude-plugin/plugin.json`
- `.claude-plugin/marketplace.json`
- `.opencode/package.json`
- release notes or changelog entries when the release process expects them
### 5. Be explicit about continuous-learning changes
If the task touches `skills/continuous-learning-v2/` or imported instincts:
- prefer accurate, low-noise instincts over auto-generated bulk output
- keep instinct files importable by `instinct-cli.py`
- remove duplicated or contradictory instincts instead of layering more guidance on top
## Examples
### Naming examples
```text
skills/continuous-learning-v2/SKILL.md
commands/update-docs.md
scripts/hooks/session-start.js
tests/hooks/hooks.test.js
```
### Commit examples
```text
fix: harden session summary extraction on Stop hook
docs: align Codex config examples with current schema
test: cover Windows formatter fallback behavior
```
### Skill update checklist
```text
1. Update the root skill or command.
2. Mirror it only where that surface is shipped.
3. Run targeted tests first, then the broader suite if behavior changed.
4. Review docs and release notes for user-visible changes.
```
### Release checklist
```text
1. Bump package and plugin versions.
2. Run npm test.
3. Verify platform-specific manifests.
4. Publish the release notes with a human-readable summary.
```

---
name: everything-claude-code-conventions
description: Development conventions and patterns for everything-claude-code. JavaScript project with conventional commits.
---
# Everything Claude Code Conventions
> Generated from [affaan-m/everything-claude-code](https://github.com/affaan-m/everything-claude-code) on 2026-03-20
## Overview
This skill teaches Claude the development patterns and conventions used in everything-claude-code.
## Tech Stack
- **Primary Language**: JavaScript
- **Architecture**: hybrid module organization
- **Test Location**: separate
## When to Use This Skill
Activate this skill when:
- Making changes to this repository
- Adding new features following established patterns
- Writing tests that match project conventions
- Creating commits with proper message format
## Commit Conventions
Follow these commit message conventions based on 500 analyzed commits.
### Commit Style: Conventional Commits
### Prefixes Used
- `fix`
- `test`
- `feat`
- `docs`
### Message Guidelines
- Average message length: ~65 characters
- Keep first line concise and descriptive
- Use imperative mood ("Add feature" not "Added feature")
*Commit message example*
```text
feat(rules): add C# language support
```
*Commit message example*
```text
chore(deps-dev): bump flatted (#675)
```
*Commit message example*
```text
fix: auto-detect ECC root from plugin cache when CLAUDE_PLUGIN_ROOT is unset (#547) (#691)
```
*Commit message example*
```text
docs: add Antigravity setup and usage guide (#552)
```
*Commit message example*
```text
merge: PR #529 — feat(skills): add documentation-lookup, bun-runtime, nextjs-turbopack; feat(agents): add rust-reviewer
```
*Commit message example*
```text
Revert "Add Kiro IDE support (.kiro/) (#548)"
```
*Commit message example*
```text
Add Kiro IDE support (.kiro/) (#548)
```
*Commit message example*
```text
feat: add block-no-verify hook for Claude Code and Cursor (#649)
```
## Architecture
### Project Structure: Single Package
This project uses **hybrid** module organization.
### Configuration Files
- `.github/workflows/ci.yml`
- `.github/workflows/maintenance.yml`
- `.github/workflows/monthly-metrics.yml`
- `.github/workflows/release.yml`
- `.github/workflows/reusable-release.yml`
- `.github/workflows/reusable-test.yml`
- `.github/workflows/reusable-validate.yml`
- `.opencode/package.json`
- `.opencode/tsconfig.json`
- `.prettierrc`
- `eslint.config.js`
- `package.json`
### Guidelines
- This project uses a hybrid organization
- Follow existing patterns when adding new code
## Code Style
### Language: JavaScript
### Naming Conventions
| Element | Convention |
|---------|------------|
| Files | camelCase |
| Functions | camelCase |
| Classes | PascalCase |
| Constants | SCREAMING_SNAKE_CASE |
### Import Style: Relative Imports
### Export Style: Mixed Style
*Preferred import style*
```typescript
// Use relative imports
import { Button } from '../components/Button'
import { useAuth } from './hooks/useAuth'
```
## Testing
### Test Framework
No specific test framework detected — use the repository's existing test patterns.
### File Pattern: `*.test.js`
### Test Types
- **Unit tests**: Test individual functions and components in isolation
- **Integration tests**: Test interactions between multiple components/services
### Coverage
This project has coverage reporting configured. Aim for 80%+ coverage.
## Error Handling
### Error Handling Style: Try-Catch Blocks
*Standard error handling pattern*
```typescript
try {
  const result = await riskyOperation()
  return result
} catch (error) {
  console.error('Operation failed:', error)
  throw new Error('User-friendly message')
}
```
## Common Workflows
These workflows were detected from analyzing commit patterns.
### Database Migration
Database schema changes with migration files
**Frequency**: ~2 times per month
**Steps**:
1. Create migration file
2. Update schema definitions
3. Generate/update types
**Files typically involved**:
- `**/schema.*`
- `migrations/*`
**Example commit sequence**:
```
feat: implement --with/--without selective install flags (#679)
fix: sync catalog counts with filesystem (27 agents, 113 skills, 58 commands) (#693)
feat(rules): add Rust language rules (rebased #660) (#686)
```
### Feature Development
Standard feature implementation workflow
**Frequency**: ~22 times per month
**Steps**:
1. Add feature implementation
2. Add tests for feature
3. Update documentation
**Files typically involved**:
- `manifests/*`
- `schemas/*`
- `**/*.test.*`
- `**/api/**`
**Example commit sequence**:
```
feat(skills): add documentation-lookup, bun-runtime, nextjs-turbopack; feat(agents): add rust-reviewer
docs(skills): align documentation-lookup with CONTRIBUTING template; add cross-harness (Codex/Cursor) skill copies
fix: address PR review — skill template (When to use, How it works, Examples), bun.lock, next build note, rust-reviewer CI note, doc-lookup privacy/uncertainty
```
### Add Language Rules
Adds a new programming language to the rules system, including coding style, hooks, patterns, security, and testing guidelines.
**Frequency**: ~2 times per month
**Steps**:
1. Create a new directory under rules/{language}/
2. Add coding-style.md, hooks.md, patterns.md, security.md, and testing.md files with language-specific content
3. Optionally reference or link to related skills
**Files typically involved**:
- `rules/*/coding-style.md`
- `rules/*/hooks.md`
- `rules/*/patterns.md`
- `rules/*/security.md`
- `rules/*/testing.md`
**Example commit sequence**:
```
Create a new directory under rules/{language}/
Add coding-style.md, hooks.md, patterns.md, security.md, and testing.md files with language-specific content
Optionally reference or link to related skills
```
### Add New Skill
Adds a new skill to the system, documenting its workflow, triggers, and usage, often with supporting scripts.
**Frequency**: ~4 times per month
**Steps**:
1. Create a new directory under skills/{skill-name}/
2. Add SKILL.md with documentation (When to Use, How It Works, Examples, etc.)
3. Optionally add scripts or supporting files under skills/{skill-name}/scripts/
4. Address review feedback and iterate on documentation
**Files typically involved**:
- `skills/*/SKILL.md`
- `skills/*/scripts/*.sh`
- `skills/*/scripts/*.js`
**Example commit sequence**:
```
Create a new directory under skills/{skill-name}/
Add SKILL.md with documentation (When to Use, How It Works, Examples, etc.)
Optionally add scripts or supporting files under skills/{skill-name}/scripts/
Address review feedback and iterate on documentation
```
### Add New Agent
Adds a new agent to the system for code review, build resolution, or other automated tasks.
**Frequency**: ~2 times per month
**Steps**:
1. Create a new agent markdown file under agents/{agent-name}.md
2. Register the agent in AGENTS.md
3. Optionally update README.md and docs/COMMAND-AGENT-MAP.md
**Files typically involved**:
- `agents/*.md`
- `AGENTS.md`
- `README.md`
- `docs/COMMAND-AGENT-MAP.md`
**Example commit sequence**:
```
Create a new agent markdown file under agents/{agent-name}.md
Register the agent in AGENTS.md
Optionally update README.md and docs/COMMAND-AGENT-MAP.md
```
### Add New Command
Adds a new command to the system, often paired with a backing skill.
**Frequency**: ~1 time per month
**Steps**:
1. Create a new markdown file under commands/{command-name}.md
2. Optionally add or update a backing skill under skills/{skill-name}/SKILL.md
**Files typically involved**:
- `commands/*.md`
- `skills/*/SKILL.md`
**Example commit sequence**:
```
Create a new markdown file under commands/{command-name}.md
Optionally add or update a backing skill under skills/{skill-name}/SKILL.md
```
### Sync Catalog Counts
Synchronizes the documented counts of agents, skills, and commands in AGENTS.md and README.md with the actual repository state.
**Frequency**: ~3 times per month
**Steps**:
1. Update agent, skill, and command counts in AGENTS.md
2. Update the same counts in README.md (quick-start, comparison table, etc.)
3. Optionally update other documentation files
**Files typically involved**:
- `AGENTS.md`
- `README.md`
**Example commit sequence**:
```
Update agent, skill, and command counts in AGENTS.md
Update the same counts in README.md (quick-start, comparison table, etc.)
Optionally update other documentation files
```
### Add Cross Harness Skill Copies
Adds skill copies for different agent harnesses (e.g., Codex, Cursor, Antigravity) to ensure compatibility across platforms.
**Frequency**: ~2 times per month
**Steps**:
1. Copy or adapt SKILL.md to .agents/skills/{skill}/SKILL.md and/or .cursor/skills/{skill}/SKILL.md
2. Optionally add harness-specific openai.yaml or config files
3. Address review feedback to align with CONTRIBUTING template
**Files typically involved**:
- `.agents/skills/*/SKILL.md`
- `.cursor/skills/*/SKILL.md`
- `.agents/skills/*/agents/openai.yaml`
**Example commit sequence**:
```
Copy or adapt SKILL.md to .agents/skills/{skill}/SKILL.md and/or .cursor/skills/{skill}/SKILL.md
Optionally add harness-specific openai.yaml or config files
Address review feedback to align with CONTRIBUTING template
```
### Add Or Update Hook
Adds or updates git or bash hooks to enforce workflow, quality, or security policies.
**Frequency**: ~1 time per month
**Steps**:
1. Add or update hook scripts in hooks/ or scripts/hooks/
2. Register the hook in hooks/hooks.json or similar config
3. Optionally add or update tests in tests/hooks/
**Files typically involved**:
- `hooks/*.hook`
- `hooks/hooks.json`
- `scripts/hooks/*.js`
- `tests/hooks/*.test.js`
- `.cursor/hooks.json`
**Example commit sequence**:
```
Add or update hook scripts in hooks/ or scripts/hooks/
Register the hook in hooks/hooks.json or similar config
Optionally add or update tests in tests/hooks/
```
### Address Review Feedback
Addresses code review feedback by updating documentation, scripts, or configuration for clarity, correctness, or convention alignment.
**Frequency**: ~4 times per month
**Steps**:
1. Edit SKILL.md, agent, or command files to address reviewer comments
2. Update examples, headings, or configuration as requested
3. Iterate until all review feedback is resolved
**Files typically involved**:
- `skills/*/SKILL.md`
- `agents/*.md`
- `commands/*.md`
- `.agents/skills/*/SKILL.md`
- `.cursor/skills/*/SKILL.md`
**Example commit sequence**:
```
Edit SKILL.md, agent, or command files to address reviewer comments
Update examples, headings, or configuration as requested
Iterate until all review feedback is resolved
```
## Best Practices
Based on analysis of the codebase, follow these practices:
### Do
- Use conventional commit format (feat:, fix:, etc.)
- Follow *.test.js naming pattern
- Use camelCase for file names
- Prefer mixed exports
### Don't
- Don't write vague commit messages
- Don't skip tests for new features
- Don't deviate from established patterns without discussion
---
*This skill was auto-generated by [ECC Tools](https://ecc.tools). Review and customize as needed for your team.*


@@ -0,0 +1,15 @@
{
"version": "1.0",
"generatedBy": "ecc-tools",
"profile": "full",
"sharedSkills": [
".claude/skills/everything-claude-code/SKILL.md",
".agents/skills/everything-claude-code/SKILL.md"
],
"commandFiles": [
".claude/commands/database-migration.md",
".claude/commands/feature-development.md",
".claude/commands/add-language-rules.md"
],
"updatedAt": "2026-03-20T12:07:36.496Z"
}


@@ -6,4 +6,4 @@ developer_instructions = """
Verify APIs, framework behavior, and release-note claims against primary documentation before changes land.
Cite the exact docs or file paths that support each claim.
Do not invent undocumented behavior.
"""


@@ -6,4 +6,4 @@ developer_instructions = """
Stay in exploration mode.
Trace the real execution path, cite files and symbols, and avoid proposing fixes unless the parent agent asks for them.
Prefer targeted search and file reads over broad scans.
"""


@@ -6,4 +6,4 @@ developer_instructions = """
Review like an owner.
Prioritize correctness, security, behavioral regressions, and missing tests.
Lead with concrete findings and avoid style-only feedback unless it hides a real bug.
"""


@@ -15,6 +15,11 @@
}
],
"beforeShellExecution": [
{
"command": "npx block-no-verify@1.1.2",
"event": "beforeShellExecution",
"description": "Block git hook-bypass flag to protect pre-commit, commit-msg, and pre-push hooks from being skipped"
},
{
"command": "node .cursor/hooks/before-shell-execution.js",
"event": "beforeShellExecution",


@@ -1,6 +1,6 @@
# Everything Claude Code (ECC) — Agent Instructions
This is a **production-ready AI coding plugin** providing 27 specialized agents, 109 skills, 57 commands, and automated hook workflows for software development.
This is a **production-ready AI coding plugin** providing 28 specialized agents, 116 skills, 59 commands, and automated hook workflows for software development.
**Version:** 1.9.0
@@ -141,9 +141,9 @@ Troubleshoot failures: check test isolation → verify mocks → fix implementat
## Project Structure
```
agents/ — 27 specialized subagents
agents/ — 28 specialized subagents
skills/ — 109 workflow skills and domain knowledge
skills/ — 115 workflow skills and domain knowledge
commands/ — 57 slash commands
commands/ — 59 slash commands
hooks/ — Trigger-based automations
rules/ — Always-follow guidelines (common + per-language)
scripts/ — Cross-platform Node.js utilities


@@ -203,7 +203,7 @@ For manual install instructions see the README in the `rules/` folder.
/plugin list everything-claude-code@everything-claude-code
```
**That's it!** You now have access to 27 agents, 109 skills, and 57 commands.
**That's it!** You now have access to 28 agents, 116 skills, and 59 commands.
---
@@ -264,7 +264,7 @@ everything-claude-code/
| |-- plugin.json # Plugin metadata and component paths
| |-- marketplace.json # Marketplace catalog for /plugin marketplace add
|
|-- agents/ # 27 specialized subagents for delegation
|-- agents/ # 28 specialized subagents for delegation
| |-- planner.md # Feature implementation planning
| |-- architect.md # System design decisions
| |-- tdd-guide.md # Test-driven development
@@ -1069,9 +1069,9 @@ The configuration is automatically detected from `.opencode/opencode.json`.
| Feature | Claude Code | OpenCode | Status |
|---------|-------------|----------|--------|
| Agents | ✅ 27 agents | ✅ 12 agents | **Claude Code leads** |
| Agents | ✅ 28 agents | ✅ 12 agents | **Claude Code leads** |
| Commands | ✅ 57 commands | ✅ 31 commands | **Claude Code leads** |
| Commands | ✅ 59 commands | ✅ 31 commands | **Claude Code leads** |
| Skills | ✅ 109 skills | ✅ 37 skills | **Claude Code leads** |
| Skills | ✅ 116 skills | ✅ 37 skills | **Claude Code leads** |
| Hooks | ✅ 8 event types | ✅ 11 events | **OpenCode has more!** |
| Rules | ✅ 29 rules | ✅ 13 instructions | **Claude Code leads** |
| MCP Servers | ✅ 14 servers | ✅ Full | **Full parity** |

agents/flutter-reviewer.md Normal file

@@ -0,0 +1,243 @@
---
name: flutter-reviewer
description: Flutter and Dart code reviewer. Reviews Flutter code for widget best practices, state management patterns, Dart idioms, performance pitfalls, accessibility, and clean architecture violations. Library-agnostic — works with any state management solution and tooling.
tools: ["Read", "Grep", "Glob", "Bash"]
model: sonnet
---
You are a senior Flutter and Dart code reviewer ensuring idiomatic, performant, and maintainable code.
## Your Role
- Review Flutter/Dart code for idiomatic patterns and framework best practices
- Detect state management anti-patterns and widget rebuild issues regardless of which solution is used
- Enforce the project's chosen architecture boundaries
- Identify performance, accessibility, and security issues
- You DO NOT refactor or rewrite code — you report findings only
## Workflow
### Step 1: Gather Context
Run `git diff --staged` and `git diff` to see changes. If no diff, check `git log --oneline -5`. Identify changed Dart files.
### Step 2: Understand Project Structure
Check for:
- `pubspec.yaml` — dependencies and project type
- `analysis_options.yaml` — lint rules
- `CLAUDE.md` — project-specific conventions
- Whether this is a monorepo (melos) or single-package project
- **Identify the state management approach** (BLoC, Riverpod, Provider, GetX, MobX, Signals, or built-in). Adapt review to the chosen solution's conventions.
- **Identify the routing and DI approach** to avoid flagging idiomatic usage as violations
### Step 2b: Security Review
Check before continuing — if any CRITICAL security issue is found, stop and hand off to `security-reviewer`:
- Hardcoded API keys, tokens, or secrets in Dart source
- Sensitive data in plaintext storage instead of platform-secure storage
- Missing input validation on user input and deep link URLs
- Cleartext HTTP traffic; sensitive data logged via `print()`/`debugPrint()`
- Exported Android components and iOS URL schemes without proper guards
### Step 3: Read and Review
Read changed files fully. Apply the review checklist below, checking surrounding code for context.
### Step 4: Report Findings
Use the output format below. Only report issues with >80% confidence.
**Noise control:**
- Consolidate similar issues (e.g. "5 widgets missing `const` constructors" not 5 separate findings)
- Skip stylistic preferences unless they violate project conventions or cause functional issues
- Only flag unchanged code for CRITICAL security issues
- Prioritize bugs, security, data loss, and correctness over style
## Review Checklist
### Architecture (CRITICAL)
Adapt to the project's chosen architecture (Clean Architecture, MVVM, feature-first, etc.):
- **Business logic in widgets** — Complex logic belongs in a state management component, not in `build()` or callbacks
- **Data models leaking across layers** — If the project separates DTOs and domain entities, they must be mapped at boundaries; if models are shared, review for consistency
- **Cross-layer imports** — Imports must respect the project's layer boundaries; inner layers must not depend on outer layers
- **Framework leaking into pure-Dart layers** — If the project has a domain/model layer intended to be framework-free, it must not import Flutter or platform code
- **Circular dependencies** — Package A depends on B and B depends on A
- **Private `src/` imports across packages** — Importing `package:other/src/internal.dart` breaks Dart package encapsulation
- **Direct instantiation in business logic** — State managers should receive dependencies via injection, not construct them internally
- **Missing abstractions at layer boundaries** — Concrete classes imported across layers instead of depending on interfaces
### State Management (CRITICAL)
**Universal (all solutions):**
- **Boolean flag soup** — `isLoading`/`isError`/`hasData` as separate fields allows impossible states; use sealed types, union variants, or the solution's built-in async state type
- **Non-exhaustive state handling** — All state variants must be handled exhaustively; unhandled variants silently break
- **Single responsibility violated** — Avoid "god" managers handling unrelated concerns
- **Direct API/DB calls from widgets** — Data access should go through a service/repository layer
- **Subscribing in `build()`** — Never call `.listen()` inside build methods; use declarative builders
- **Stream/subscription leaks** — All manual subscriptions must be cancelled in `dispose()`/`close()`
- **Missing error/loading states** — Every async operation must model loading, success, and error distinctly
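The "boolean flag soup" and exhaustiveness points above can be sketched in pure Dart 3; the type and field names here are illustrative, not from any particular project:

```dart
// Instead of separate isLoading/isError/hasData flags, model states as a
// sealed hierarchy so impossible combinations cannot be represented.
sealed class CartState {}

class CartLoading extends CartState {}

class CartLoaded extends CartState {
  CartLoaded(this.items);
  final List<String> items;
}

class CartError extends CartState {
  CartError(this.message);
  final String message;
}

String describe(CartState state) => switch (state) {
      // The compiler enforces exhaustiveness: adding a new subclass makes
      // every switch that misses it a compile-time error.
      CartLoading() => 'loading',
      CartLoaded(:final items) => '${items.length} items',
      CartError(:final message) => 'error: $message',
    };
```

The same shape is what BLoC state classes, Riverpod `AsyncValue`, and hand-rolled union types all converge on.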
**Immutable-state solutions (BLoC, Riverpod, Redux):**
- **Mutable state** — State must be immutable; create new instances via `copyWith`, never mutate in-place
- **Missing value equality** — State classes must implement `==`/`hashCode` so the framework detects changes
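A minimal hand-written sketch of immutable state with `copyWith` and value equality (real projects usually generate this with `freezed` or `equatable`):

```dart
class CounterState {
  const CounterState({required this.count, required this.isSaving});
  final int count;
  final bool isSaving;

  // New instance per change; never mutate fields in place.
  CounterState copyWith({int? count, bool? isSaving}) => CounterState(
        count: count ?? this.count,
        isSaving: isSaving ?? this.isSaving,
      );

  // Value equality lets the framework detect "state actually changed".
  @override
  bool operator ==(Object other) =>
      other is CounterState &&
      other.count == count &&
      other.isSaving == isSaving;

  @override
  int get hashCode => Object.hash(count, isSaving);
}
```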
**Reactive-mutation solutions (MobX, GetX, Signals):**
- **Mutations outside reactivity API** — State must only change through `@action`, `.value`, `.obs`, etc.; direct mutation bypasses tracking
- **Missing computed state** — Derivable values should use the solution's computed mechanism, not be stored redundantly
**Cross-component dependencies:**
- In **Riverpod**, `ref.watch` between providers is expected — flag only circular or tangled chains
- In **BLoC**, blocs should not directly depend on other blocs — prefer shared repositories
- In other solutions, follow documented conventions for inter-component communication
### Widget Composition (HIGH)
- **Oversized `build()`** — Exceeding ~80 lines; extract subtrees to separate widget classes
- **`_build*()` helper methods** — Private methods returning widgets prevent framework optimizations; extract to classes
- **Missing `const` constructors** — Widgets with all-final fields must declare `const` to prevent unnecessary rebuilds
- **Object allocation in parameters** — Inline `TextStyle(...)` without `const` causes rebuilds
- **`StatefulWidget` overuse** — Prefer `StatelessWidget` when no mutable local state is needed
- **Missing `key` in list items** — `ListView.builder` items without stable `ValueKey` cause state bugs
- **Hardcoded colors/text styles** — Use `Theme.of(context).colorScheme`/`textTheme`; hardcoded styles break dark mode
- **Hardcoded spacing** — Prefer design tokens or named constants over magic numbers
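A sketch of the preferred composition pattern, with an illustrative widget name: an extracted widget *class* (not a `_build*()` helper) gives the framework an element boundary it can skip on rebuild, and `const` plus theme lookups address several bullets above at once.

```dart
import 'package:flutter/material.dart';

class PriceLabel extends StatelessWidget {
  const PriceLabel({super.key, required this.price});
  final String price;

  @override
  Widget build(BuildContext context) => Text(
        price,
        // Pull styles from the theme rather than hardcoding them,
        // so dark mode and theming keep working.
        style: Theme.of(context).textTheme.titleMedium,
      );
}
```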
### Performance (HIGH)
- **Unnecessary rebuilds** — State consumers wrapping too much tree; scope narrow and use selectors
- **Expensive work in `build()`** — Sorting, filtering, regex, or I/O in build; compute in the state layer
- **`MediaQuery.of(context)` overuse** — Use specific accessors (`MediaQuery.sizeOf(context)`)
- **Concrete list constructors for large data** — Use `ListView.builder`/`GridView.builder` for lazy construction
- **Missing image optimization** — No caching, no `cacheWidth`/`cacheHeight`, full-res thumbnails
- **`Opacity` in animations** — Use `AnimatedOpacity` or `FadeTransition`
- **Missing `const` propagation** — `const` widgets stop rebuild propagation; use wherever possible
- **`IntrinsicHeight`/`IntrinsicWidth` overuse** — Cause extra layout passes; avoid in scrollable lists
- **`RepaintBoundary` missing** — Complex independently-repainting subtrees should be wrapped
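The lazy-construction and key points can be sketched together (names illustrative):

```dart
import 'package:flutter/material.dart';

class ItemList extends StatelessWidget {
  const ItemList({super.key, required this.items});
  final List<String> items;

  @override
  Widget build(BuildContext context) => ListView.builder(
        // builder constructors create items lazily as they scroll into
        // view, unlike ListView(children: [...]) which builds everything.
        itemCount: items.length,
        itemBuilder: (context, index) => ListTile(
          // A stable key keeps per-item state attached to the right item
          // when the list reorders or items are inserted/removed.
          key: ValueKey(items[index]),
          title: Text(items[index]),
        ),
      );
}
```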
### Dart Idioms (MEDIUM)
- **Missing type annotations / implicit `dynamic`** — Enable `strict-casts`, `strict-inference`, `strict-raw-types` to catch these
- **`!` bang overuse** — Prefer `?.`, `??`, `if-case` patterns (`case var v?`), or `ArgumentError.checkNotNull` for argument validation
- **Broad exception catching** — `catch (e)` without `on` clause; specify exception types
- **Catching `Error` subtypes** — `Error` indicates bugs, not recoverable conditions
- **`var` where `final` works** — Prefer `final` for locals, `const` for compile-time constants
- **Relative imports** — Use `package:` imports for consistency
- **Missing Dart 3 patterns** — Prefer switch expressions and `if-case` over verbose `is` checks
- **`print()` in production** — Use `dart:developer` `log()` or the project's logging package
- **`late` overuse** — Prefer nullable types or constructor initialization
- **Ignoring `Future` return values** — Use `await` or mark with `unawaited()`
- **Unused `async`** — Functions marked `async` that never `await` add unnecessary overhead
- **Mutable collections exposed** — Public APIs should return unmodifiable views
- **String concatenation in loops** — Use `StringBuffer` for iterative building
- **Mutable fields in `const` classes** — Fields in `const` constructor classes must be final
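Two of the idioms above in a runnable pure-Dart sketch (function names are illustrative):

```dart
import 'dart:async' show unawaited;
import 'dart:developer' show log;

String joinLines(Iterable<String> lines) {
  // StringBuffer avoids the O(n^2) cost of `result += line` in a loop.
  final buffer = StringBuffer();
  for (final line in lines) {
    buffer.writeln(line);
  }
  return buffer.toString();
}

Future<void> logEvent(String name) async => log('event: $name');

void fireAndForget() {
  // Make intentional fire-and-forget explicit instead of silently
  // dropping the Future (caught by the unawaited_futures lint).
  unawaited(logEvent('opened'));
}
```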
### Resource Lifecycle (HIGH)
- **Missing `dispose()`** — Every resource from `initState()` (controllers, subscriptions, timers) must be disposed
- **`BuildContext` used after `await`** — Check `context.mounted` (Flutter 3.7+) before navigation/dialogs after async gaps
- **`setState` after `dispose`** — Async callbacks must check `mounted` before calling `setState`
- **`BuildContext` stored in long-lived objects** — Never store context in singletons or static fields
- **Unclosed `StreamController`** / **`Timer` not cancelled** — Must be cleaned up in `dispose()`
- **Duplicated lifecycle logic** — Identical init/dispose blocks should be extracted to reusable patterns
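The lifecycle rules above combine into one canonical shape — acquire in `initState`, guard callbacks with `mounted`, release in `dispose` (widget name illustrative):

```dart
import 'dart:async';
import 'package:flutter/material.dart';

class TickerLabel extends StatefulWidget {
  const TickerLabel({super.key});

  @override
  State<TickerLabel> createState() => _TickerLabelState();
}

class _TickerLabelState extends State<TickerLabel> {
  Timer? _timer;
  int _ticks = 0;

  @override
  void initState() {
    super.initState();
    _timer = Timer.periodic(const Duration(seconds: 1), (_) {
      // Guard against callbacks firing after this State is disposed.
      if (!mounted) return;
      setState(() => _ticks++);
    });
  }

  @override
  void dispose() {
    // Every resource acquired in initState is released here.
    _timer?.cancel();
    super.dispose();
  }

  @override
  Widget build(BuildContext context) => Text('$_ticks');
}
```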
### Error Handling (HIGH)
- **Missing global error capture** — Both `FlutterError.onError` and `PlatformDispatcher.instance.onError` must be set
- **No error reporting service** — Crashlytics/Sentry or equivalent should be integrated with non-fatal reporting
- **Missing state management error observer** — Wire errors to reporting (BlocObserver, ProviderObserver, etc.)
- **Red screen in production** — `ErrorWidget.builder` not customized for release mode
- **Raw exceptions reaching UI** — Map to user-friendly, localized messages before presentation layer
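A minimal sketch of the two global error hooks wired to a reporting call; `reportError` stands in for whatever crash-reporting SDK the project uses:

```dart
import 'dart:ui';
import 'package:flutter/material.dart';

// Placeholder for Crashlytics/Sentry/etc.
Future<void> reportError(Object error, StackTrace stack) async {}

void main() {
  // Framework-layer errors (build, layout, paint).
  FlutterError.onError = (FlutterErrorDetails details) {
    FlutterError.presentError(details);
    reportError(details.exception, details.stack ?? StackTrace.current);
  };
  // Errors outside the Flutter framework (async, platform dispatch).
  PlatformDispatcher.instance.onError = (error, stack) {
    reportError(error, stack);
    return true; // handled; do not crash
  };
  runApp(const Placeholder());
}
```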
### Testing (HIGH)
- **Missing unit tests** — State manager changes must have corresponding tests
- **Missing widget tests** — New/changed widgets should have widget tests
- **Missing golden tests** — Design-critical components should have pixel-perfect regression tests
- **Untested state transitions** — All paths (loading→success, loading→error, retry, empty) must be tested
- **Test isolation violated** — External dependencies must be mocked; no shared mutable state between tests
- **Flaky async tests** — Use `pumpAndSettle` or explicit `pump(Duration)`, not timing assumptions
### Accessibility (MEDIUM)
- **Missing semantic labels** — Images without `semanticLabel`, icons without `tooltip`
- **Small tap targets** — Interactive elements below 48x48 logical pixels (Material minimum; 44x44 pt on iOS)
- **Color-only indicators** — Color alone conveying meaning without icon/text alternative
- **Missing `ExcludeSemantics`/`MergeSemantics`** — Decorative elements and related widget groups need proper semantics
- **Text scaling ignored** — Hardcoded sizes that don't respect system accessibility settings
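A sketch combining the semantic-label and color-only bullets (widget name illustrative): pair the color with an icon and text, and label the icon for screen readers.

```dart
import 'package:flutter/material.dart';

class SyncStatus extends StatelessWidget {
  const SyncStatus({super.key, required this.failed});
  final bool failed;

  @override
  Widget build(BuildContext context) => Row(
        mainAxisSize: MainAxisSize.min,
        children: [
          // Icon + text, so meaning is not conveyed by color alone,
          // and assistive tech gets a label via semanticLabel.
          Icon(
            failed ? Icons.error : Icons.check_circle,
            color: failed ? Colors.red : Colors.green,
            semanticLabel: failed ? 'Sync failed' : 'Synced',
          ),
          Text(failed ? 'Sync failed' : 'Synced'),
        ],
      );
}
```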
### Platform, Responsive & Navigation (MEDIUM)
- **Missing `SafeArea`** — Content obscured by notches/status bars
- **Broken back navigation** — Android back button or iOS swipe-to-go-back not working as expected
- **Missing platform permissions** — Required permissions not declared in `AndroidManifest.xml` or `Info.plist`
- **No responsive layout** — Fixed layouts that break on tablets/desktops/landscape
- **Text overflow** — Unbounded text without `Flexible`/`Expanded`/`FittedBox`
- **Mixed navigation patterns** — `Navigator.push` mixed with declarative router; pick one
- **Hardcoded route paths** — Use constants, enums, or generated routes
- **Missing deep link validation** — URLs not sanitized before navigation
- **Missing auth guards** — Protected routes accessible without redirect
### Internationalization (MEDIUM)
- **Hardcoded user-facing strings** — All visible text must use a localization system
- **String concatenation for localized text** — Use parameterized messages
- **Locale-unaware formatting** — Dates, numbers, currencies must use locale-aware formatters
### Dependencies & Build (LOW)
- **No strict static analysis** — Project should have strict `analysis_options.yaml`
- **Stale/unused dependencies** — Run `flutter pub outdated`; remove unused packages
- **Dependency overrides in production** — Only with comment linking to tracking issue
- **Unjustified lint suppressions** — `// ignore:` without explanatory comment
- **Hardcoded path deps in monorepo** — Use workspace resolution, not `path: ../../`
### Security (CRITICAL)
- **Hardcoded secrets** — API keys, tokens, or credentials in Dart source
- **Insecure storage** — Sensitive data in plaintext instead of Keychain/EncryptedSharedPreferences
- **Cleartext traffic** — HTTP without HTTPS; missing network security config
- **Sensitive logging** — Tokens, PII, or credentials in `print()`/`debugPrint()`
- **Missing input validation** — User input passed to APIs/navigation without sanitization
- **Unsafe deep links** — Handlers that act without validation
If any CRITICAL security issue is present, stop and escalate to `security-reviewer`.
## Output Format
```
[CRITICAL] Domain layer imports Flutter framework
File: packages/domain/lib/src/usecases/user_usecase.dart:3
Issue: `import 'package:flutter/material.dart'` — domain must be pure Dart.
Fix: Move widget-dependent logic to presentation layer.
[HIGH] State consumer wraps entire screen
File: lib/features/cart/presentation/cart_page.dart:42
Issue: Consumer rebuilds entire page on every state change.
Fix: Narrow scope to the subtree that depends on changed state, or use a selector.
```
## Summary Format
End every review with:
```
## Review Summary
| Severity | Count | Status |
|----------|-------|--------|
| CRITICAL | 0 | pass |
| HIGH | 1 | block |
| MEDIUM | 2 | info |
| LOW | 0 | note |
Verdict: BLOCK — HIGH issues must be fixed before merge.
```
## Approval Criteria
- **Approve**: No CRITICAL or HIGH issues
- **Block**: Any CRITICAL or HIGH issues — must fix before merge
Refer to the `flutter-dart-code-review` skill for the comprehensive review checklist.

commands/rules-distill.md (new file)

@@ -0,0 +1,11 @@
---
description: "Scan skills to extract cross-cutting principles and distill them into rules"
---
# /rules-distill — Distill Principles from Skills into Rules
Scan installed skills, extract cross-cutting principles, and distill them into rules.
## Process
Follow the full workflow defined in the `rules-distill` skill.


@@ -29,8 +29,8 @@ Use `/sessions info` when you need operator-surface context for a swarm: branch,
**Script:**
```bash
node -e "
-const sm = require((process.env.CLAUDE_PLUGIN_ROOT||require('path').join(require('os').homedir(),'.claude'))+'/scripts/lib/session-manager');
-const aa = require((process.env.CLAUDE_PLUGIN_ROOT||require('path').join(require('os').homedir(),'.claude'))+'/scripts/lib/session-aliases');
+const sm = require((()=>{var e=process.env.CLAUDE_PLUGIN_ROOT;if(e&&e.trim())return e.trim();var p=require('path'),f=require('fs'),h=require('os').homedir(),d=p.join(h,'.claude'),q=p.join('scripts','lib','utils.js');if(f.existsSync(p.join(d,q)))return d;try{var b=p.join(d,'plugins','cache','everything-claude-code');for(var o of f.readdirSync(b))for(var v of f.readdirSync(p.join(b,o))){var c=p.join(b,o,v);if(f.existsSync(p.join(c,q)))return c}}catch(x){}return d})()+'/scripts/lib/session-manager');
+const aa = require((()=>{var e=process.env.CLAUDE_PLUGIN_ROOT;if(e&&e.trim())return e.trim();var p=require('path'),f=require('fs'),h=require('os').homedir(),d=p.join(h,'.claude'),q=p.join('scripts','lib','utils.js');if(f.existsSync(p.join(d,q)))return d;try{var b=p.join(d,'plugins','cache','everything-claude-code');for(var o of f.readdirSync(b))for(var v of f.readdirSync(p.join(b,o))){var c=p.join(b,o,v);if(f.existsSync(p.join(c,q)))return c}}catch(x){}return d})()+'/scripts/lib/session-aliases');
const path = require('path');
const result = sm.getAllSessions({ limit: 20 });
@@ -70,8 +70,8 @@ Load and display a session's content (by ID or alias).
**Script:**
```bash
node -e "
-const sm = require((process.env.CLAUDE_PLUGIN_ROOT||require('path').join(require('os').homedir(),'.claude'))+'/scripts/lib/session-manager');
-const aa = require((process.env.CLAUDE_PLUGIN_ROOT||require('path').join(require('os').homedir(),'.claude'))+'/scripts/lib/session-aliases');
+const sm = require((()=>{var e=process.env.CLAUDE_PLUGIN_ROOT;if(e&&e.trim())return e.trim();var p=require('path'),f=require('fs'),h=require('os').homedir(),d=p.join(h,'.claude'),q=p.join('scripts','lib','utils.js');if(f.existsSync(p.join(d,q)))return d;try{var b=p.join(d,'plugins','cache','everything-claude-code');for(var o of f.readdirSync(b))for(var v of f.readdirSync(p.join(b,o))){var c=p.join(b,o,v);if(f.existsSync(p.join(c,q)))return c}}catch(x){}return d})()+'/scripts/lib/session-manager');
+const aa = require((()=>{var e=process.env.CLAUDE_PLUGIN_ROOT;if(e&&e.trim())return e.trim();var p=require('path'),f=require('fs'),h=require('os').homedir(),d=p.join(h,'.claude'),q=p.join('scripts','lib','utils.js');if(f.existsSync(p.join(d,q)))return d;try{var b=p.join(d,'plugins','cache','everything-claude-code');for(var o of f.readdirSync(b))for(var v of f.readdirSync(p.join(b,o))){var c=p.join(b,o,v);if(f.existsSync(p.join(c,q)))return c}}catch(x){}return d})()+'/scripts/lib/session-aliases');
const id = process.argv[1];
// First try to resolve as alias
@@ -143,8 +143,8 @@ Create a memorable alias for a session.
**Script:**
```bash
node -e "
-const sm = require((process.env.CLAUDE_PLUGIN_ROOT||require('path').join(require('os').homedir(),'.claude'))+'/scripts/lib/session-manager');
-const aa = require((process.env.CLAUDE_PLUGIN_ROOT||require('path').join(require('os').homedir(),'.claude'))+'/scripts/lib/session-aliases');
+const sm = require((()=>{var e=process.env.CLAUDE_PLUGIN_ROOT;if(e&&e.trim())return e.trim();var p=require('path'),f=require('fs'),h=require('os').homedir(),d=p.join(h,'.claude'),q=p.join('scripts','lib','utils.js');if(f.existsSync(p.join(d,q)))return d;try{var b=p.join(d,'plugins','cache','everything-claude-code');for(var o of f.readdirSync(b))for(var v of f.readdirSync(p.join(b,o))){var c=p.join(b,o,v);if(f.existsSync(p.join(c,q)))return c}}catch(x){}return d})()+'/scripts/lib/session-manager');
+const aa = require((()=>{var e=process.env.CLAUDE_PLUGIN_ROOT;if(e&&e.trim())return e.trim();var p=require('path'),f=require('fs'),h=require('os').homedir(),d=p.join(h,'.claude'),q=p.join('scripts','lib','utils.js');if(f.existsSync(p.join(d,q)))return d;try{var b=p.join(d,'plugins','cache','everything-claude-code');for(var o of f.readdirSync(b))for(var v of f.readdirSync(p.join(b,o))){var c=p.join(b,o,v);if(f.existsSync(p.join(c,q)))return c}}catch(x){}return d})()+'/scripts/lib/session-aliases');
const sessionId = process.argv[1];
const aliasName = process.argv[2];
@@ -183,7 +183,7 @@ Delete an existing alias.
**Script:**
```bash
node -e "
-const aa = require((process.env.CLAUDE_PLUGIN_ROOT||require('path').join(require('os').homedir(),'.claude'))+'/scripts/lib/session-aliases');
+const aa = require((()=>{var e=process.env.CLAUDE_PLUGIN_ROOT;if(e&&e.trim())return e.trim();var p=require('path'),f=require('fs'),h=require('os').homedir(),d=p.join(h,'.claude'),q=p.join('scripts','lib','utils.js');if(f.existsSync(p.join(d,q)))return d;try{var b=p.join(d,'plugins','cache','everything-claude-code');for(var o of f.readdirSync(b))for(var v of f.readdirSync(p.join(b,o))){var c=p.join(b,o,v);if(f.existsSync(p.join(c,q)))return c}}catch(x){}return d})()+'/scripts/lib/session-aliases');
const aliasName = process.argv[1];
if (!aliasName) {
@@ -212,8 +212,8 @@ Show detailed information about a session.
**Script:**
```bash
node -e "
-const sm = require((process.env.CLAUDE_PLUGIN_ROOT||require('path').join(require('os').homedir(),'.claude'))+'/scripts/lib/session-manager');
-const aa = require((process.env.CLAUDE_PLUGIN_ROOT||require('path').join(require('os').homedir(),'.claude'))+'/scripts/lib/session-aliases');
+const sm = require((()=>{var e=process.env.CLAUDE_PLUGIN_ROOT;if(e&&e.trim())return e.trim();var p=require('path'),f=require('fs'),h=require('os').homedir(),d=p.join(h,'.claude'),q=p.join('scripts','lib','utils.js');if(f.existsSync(p.join(d,q)))return d;try{var b=p.join(d,'plugins','cache','everything-claude-code');for(var o of f.readdirSync(b))for(var v of f.readdirSync(p.join(b,o))){var c=p.join(b,o,v);if(f.existsSync(p.join(c,q)))return c}}catch(x){}return d})()+'/scripts/lib/session-manager');
+const aa = require((()=>{var e=process.env.CLAUDE_PLUGIN_ROOT;if(e&&e.trim())return e.trim();var p=require('path'),f=require('fs'),h=require('os').homedir(),d=p.join(h,'.claude'),q=p.join('scripts','lib','utils.js');if(f.existsSync(p.join(d,q)))return d;try{var b=p.join(d,'plugins','cache','everything-claude-code');for(var o of f.readdirSync(b))for(var v of f.readdirSync(p.join(b,o))){var c=p.join(b,o,v);if(f.existsSync(p.join(c,q)))return c}}catch(x){}return d})()+'/scripts/lib/session-aliases');
const id = process.argv[1];
const resolved = aa.resolveAlias(id);
@@ -262,7 +262,7 @@ Show all session aliases.
**Script:**
```bash
node -e "
-const aa = require((process.env.CLAUDE_PLUGIN_ROOT||require('path').join(require('os').homedir(),'.claude'))+'/scripts/lib/session-aliases');
+const aa = require((()=>{var e=process.env.CLAUDE_PLUGIN_ROOT;if(e&&e.trim())return e.trim();var p=require('path'),f=require('fs'),h=require('os').homedir(),d=p.join(h,'.claude'),q=p.join('scripts','lib','utils.js');if(f.existsSync(p.join(d,q)))return d;try{var b=p.join(d,'plugins','cache','everything-claude-code');for(var o of f.readdirSync(b))for(var v of f.readdirSync(p.join(b,o))){var c=p.join(b,o,v);if(f.existsSync(p.join(c,q)))return c}}catch(x){}return d})()+'/scripts/lib/session-aliases');
const aliases = aa.listAliases();
console.log('Session Aliases (' + aliases.length + '):');


@@ -13,19 +13,22 @@ Shows a comprehensive health dashboard for all skills in the portfolio with succ
Run the skill health CLI in dashboard mode:
```bash
-node "${CLAUDE_PLUGIN_ROOT}/scripts/skills-health.js" --dashboard
+ECC_ROOT="${CLAUDE_PLUGIN_ROOT:-$(node -e "var p=require('path'),f=require('fs'),h=require('os').homedir(),d=p.join(h,'.claude'),q=p.join('scripts','lib','utils.js');if(!f.existsSync(p.join(d,q))){try{var b=p.join(d,'plugins','cache','everything-claude-code');for(var o of f.readdirSync(b))for(var v of f.readdirSync(p.join(b,o))){var c=p.join(b,o,v);if(f.existsSync(p.join(c,q))){d=c;break}}}catch(x){}}console.log(d)")}"
+node "$ECC_ROOT/scripts/skills-health.js" --dashboard
```
For a specific panel only:
```bash
-node "${CLAUDE_PLUGIN_ROOT}/scripts/skills-health.js" --dashboard --panel failures
+ECC_ROOT="${CLAUDE_PLUGIN_ROOT:-$(node -e "var p=require('path'),f=require('fs'),h=require('os').homedir(),d=p.join(h,'.claude'),q=p.join('scripts','lib','utils.js');if(!f.existsSync(p.join(d,q))){try{var b=p.join(d,'plugins','cache','everything-claude-code');for(var o of f.readdirSync(b))for(var v of f.readdirSync(p.join(b,o))){var c=p.join(b,o,v);if(f.existsSync(p.join(c,q))){d=c;break}}}catch(x){}}console.log(d)")}"
+node "$ECC_ROOT/scripts/skills-health.js" --dashboard --panel failures
```
For machine-readable output:
```bash
-node "${CLAUDE_PLUGIN_ROOT}/scripts/skills-health.js" --dashboard --json
+ECC_ROOT="${CLAUDE_PLUGIN_ROOT:-$(node -e "var p=require('path'),f=require('fs'),h=require('os').homedir(),d=p.join(h,'.claude'),q=p.join('scripts','lib','utils.js');if(!f.existsSync(p.join(d,q))){try{var b=p.join(d,'plugins','cache','everything-claude-code');for(var o of f.readdirSync(b))for(var v of f.readdirSync(p.join(b,o))){var c=p.join(b,o,v);if(f.existsSync(p.join(c,q))){d=c;break}}}catch(x){}}console.log(d)")}"
+node "$ECC_ROOT/scripts/skills-health.js" --dashboard --json
```
## Usage


@@ -71,7 +71,7 @@
## 归属
-本行为准则改编自 \[贡献者公约]\[homepage] 2.0 版本,可访问
+本行为准则改编自 [贡献者公约][homepage] 2.0 版本,可访问
<https://www.contributor-covenant.org/version/2/0/code_of_conduct.html> 获取。
社区影响指南的灵感来源于 [Mozilla 的行为准则执行阶梯](https://github.com/mozilla/diversity)。


@@ -315,6 +315,6 @@ result = "".join(str(item) for item in items)
| 海象运算符 (`:=`) | 3.8+ |
| 仅限位置参数 | 3.8+ |
| Match 语句 | 3.10+ |
-| 类型联合 (\`x | None\`) | 3.10+ |
+| 类型联合 (`x \| None`) | 3.10+ |
确保你的项目 `pyproject.toml` 或 `setup.py` 指定了正确的最低 Python 版本。


@@ -2,6 +2,16 @@
"$schema": "https://json.schemastore.org/claude-code-settings.json", "$schema": "https://json.schemastore.org/claude-code-settings.json",
"hooks": { "hooks": {
"PreToolUse": [ "PreToolUse": [
{
"matcher": "Bash",
"hooks": [
{
"type": "command",
"command": "npx block-no-verify@1.1.2"
}
],
"description": "Block git hook-bypass flag to protect pre-commit, commit-msg, and pre-push hooks from being skipped"
},
{ {
"matcher": "Bash", "matcher": "Bash",
"hooks": [ "hooks": [
@@ -74,6 +84,27 @@
} }
], ],
"description": "Optional InsAIts AI security monitor for Bash/Edit/Write flows. Enable with ECC_ENABLE_INSAITS=1. Requires: pip install insa-its" "description": "Optional InsAIts AI security monitor for Bash/Edit/Write flows. Enable with ECC_ENABLE_INSAITS=1. Requires: pip install insa-its"
},
{
"matcher": "Bash|Write|Edit|MultiEdit",
"hooks": [
{
"type": "command",
"command": "node \"${CLAUDE_PLUGIN_ROOT}/scripts/hooks/run-with-flags.js\" \"pre:governance-capture\" \"scripts/hooks/governance-capture.js\" \"standard,strict\"",
"timeout": 10
}
],
"description": "Capture governance events (secrets, policy violations, approval requests). Enable with ECC_GOVERNANCE_CAPTURE=1"
},
{
"matcher": "*",
"hooks": [
{
"type": "command",
"command": "node \"${CLAUDE_PLUGIN_ROOT}/scripts/hooks/run-with-flags.js\" \"pre:mcp-health-check\" \"scripts/hooks/mcp-health-check.js\" \"standard,strict\""
}
],
"description": "Check MCP server health before MCP tool execution and block unhealthy MCP calls"
} }
], ],
"PreCompact": [ "PreCompact": [
@@ -165,6 +196,17 @@
], ],
"description": "Warn about console.log statements after edits" "description": "Warn about console.log statements after edits"
}, },
{
"matcher": "Bash|Write|Edit|MultiEdit",
"hooks": [
{
"type": "command",
"command": "node \"${CLAUDE_PLUGIN_ROOT}/scripts/hooks/run-with-flags.js\" \"post:governance-capture\" \"scripts/hooks/governance-capture.js\" \"standard,strict\"",
"timeout": 10
}
],
"description": "Capture governance events from tool outputs. Enable with ECC_GOVERNANCE_CAPTURE=1"
},
{ {
"matcher": "*", "matcher": "*",
"hooks": [ "hooks": [
@@ -178,6 +220,18 @@
"description": "Capture tool use results for continuous learning" "description": "Capture tool use results for continuous learning"
} }
], ],
"PostToolUseFailure": [
{
"matcher": "*",
"hooks": [
{
"type": "command",
"command": "node \"${CLAUDE_PLUGIN_ROOT}/scripts/hooks/run-with-flags.js\" \"post:mcp-health-check\" \"scripts/hooks/mcp-health-check.js\" \"standard,strict\""
}
],
"description": "Track failed MCP tool calls, mark unhealthy servers, and attempt reconnect"
}
],
"Stop": [ "Stop": [
{ {
"matcher": "*", "matcher": "*",


@@ -250,6 +250,158 @@
"modules": [ "modules": [
"document-processing" "document-processing"
] ]
},
{
"id": "agent:architect",
"family": "agent",
"description": "System design and architecture agent.",
"modules": [
"agents-core"
]
},
{
"id": "agent:code-reviewer",
"family": "agent",
"description": "Code review agent for quality and security checks.",
"modules": [
"agents-core"
]
},
{
"id": "agent:security-reviewer",
"family": "agent",
"description": "Security vulnerability analysis agent.",
"modules": [
"agents-core"
]
},
{
"id": "agent:tdd-guide",
"family": "agent",
"description": "Test-driven development guidance agent.",
"modules": [
"agents-core"
]
},
{
"id": "agent:planner",
"family": "agent",
"description": "Feature implementation planning agent.",
"modules": [
"agents-core"
]
},
{
"id": "agent:build-error-resolver",
"family": "agent",
"description": "Build error resolution agent.",
"modules": [
"agents-core"
]
},
{
"id": "agent:e2e-runner",
"family": "agent",
"description": "Playwright E2E testing agent.",
"modules": [
"agents-core"
]
},
{
"id": "agent:refactor-cleaner",
"family": "agent",
"description": "Dead code cleanup and refactoring agent.",
"modules": [
"agents-core"
]
},
{
"id": "agent:doc-updater",
"family": "agent",
"description": "Documentation update agent.",
"modules": [
"agents-core"
]
},
{
"id": "skill:tdd-workflow",
"family": "skill",
"description": "Test-driven development workflow skill.",
"modules": [
"workflow-quality"
]
},
{
"id": "skill:continuous-learning",
"family": "skill",
"description": "Session pattern extraction and continuous learning skill.",
"modules": [
"workflow-quality"
]
},
{
"id": "skill:eval-harness",
"family": "skill",
"description": "Evaluation harness for AI regression testing.",
"modules": [
"workflow-quality"
]
},
{
"id": "skill:verification-loop",
"family": "skill",
"description": "Verification loop for code quality assurance.",
"modules": [
"workflow-quality"
]
},
{
"id": "skill:strategic-compact",
"family": "skill",
"description": "Strategic context compaction for long sessions.",
"modules": [
"workflow-quality"
]
},
{
"id": "skill:coding-standards",
"family": "skill",
"description": "Language-agnostic coding standards and best practices.",
"modules": [
"framework-language"
]
},
{
"id": "skill:frontend-patterns",
"family": "skill",
"description": "React and frontend engineering patterns.",
"modules": [
"framework-language"
]
},
{
"id": "skill:backend-patterns",
"family": "skill",
"description": "API design, database, and backend engineering patterns.",
"modules": [
"framework-language"
]
},
{
"id": "skill:security-review",
"family": "skill",
"description": "Security review checklist and vulnerability analysis.",
"modules": [
"security"
]
},
{
"id": "skill:deep-research",
"family": "skill",
"description": "Deep research and investigation workflows.",
"modules": [
"research-apis"
]
}
]
}

package-lock.json (generated)
@@ -1133,9 +1133,9 @@
}
},
"node_modules/flatted": {
-"version": "3.3.3",
-"resolved": "https://registry.npmjs.org/flatted/-/flatted-3.3.3.tgz",
-"integrity": "sha512-GX+ysw4PBCz0PzosHDepZGANEuFCMLrnRTiEy9McGjmkCQYwRq4A/X786G/fjM/+OjsWSU1ZrY5qyARZmO/uwg==",
+"version": "3.4.2",
+"resolved": "https://registry.npmjs.org/flatted/-/flatted-3.4.2.tgz",
+"integrity": "sha512-PjDse7RzhcPkIJwy5t7KPWQSZ9cAbzQXcafsetQoD7sOJRQlGikNbx7yZp2OotDnJyrDcbyRq3Ttb18iYOqkxA==",
"dev": true,
"license": "ISC"
},


@@ -0,0 +1,72 @@
---
paths:
- "**/*.cs"
- "**/*.csx"
---
# C# Coding Style
> This file extends [common/coding-style.md](../common/coding-style.md) with C#-specific content.
## Standards
- Follow current .NET conventions and enable nullable reference types
- Prefer explicit access modifiers on public and internal APIs
- Keep files aligned with the primary type they define
## Types and Models
- Prefer `record` or `record struct` for immutable value-like models
- Use `class` for entities or types with identity and lifecycle
- Use `interface` for service boundaries and abstractions
- Avoid `dynamic` in application code; prefer generics or explicit models
```csharp
public sealed record UserDto(Guid Id, string Email);
public interface IUserRepository
{
Task<UserDto?> FindByIdAsync(Guid id, CancellationToken cancellationToken);
}
```
## Immutability
- Prefer `init` setters, constructor parameters, and immutable collections for shared state
- Do not mutate input models in-place when producing updated state
```csharp
public sealed record UserProfile(string Name, string Email);
public static UserProfile Rename(UserProfile profile, string name) =>
profile with { Name = name };
```
## Async and Error Handling
- Prefer `async`/`await` over blocking calls like `.Result` or `.Wait()`
- Pass `CancellationToken` through public async APIs
- Throw specific exceptions and log with structured properties
```csharp
public async Task<Order> LoadOrderAsync(
Guid orderId,
CancellationToken cancellationToken)
{
try
{
return await repository.FindAsync(orderId, cancellationToken)
?? throw new InvalidOperationException($"Order {orderId} was not found.");
}
catch (Exception ex)
{
logger.LogError(ex, "Failed to load order {OrderId}", orderId);
throw;
}
}
```
## Formatting
- Use `dotnet format` for formatting and analyzer fixes
- Keep `using` directives organized and remove unused imports
- Prefer expression-bodied members only when they stay readable

rules/csharp/hooks.md (new file, 25 lines)

@@ -0,0 +1,25 @@
---
paths:
- "**/*.cs"
- "**/*.csx"
- "**/*.csproj"
- "**/*.sln"
- "**/Directory.Build.props"
- "**/Directory.Build.targets"
---
# C# Hooks
> This file extends [common/hooks.md](../common/hooks.md) with C#-specific content.
## PostToolUse Hooks
Configure in `~/.claude/settings.json`:
- **dotnet format**: Auto-format edited C# files and apply analyzer fixes
- **dotnet build**: Verify the solution or project still compiles after edits
- **dotnet test --no-build**: Re-run the nearest relevant test project after behavior changes
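
A minimal `settings.json` sketch for the **dotnet format** hook above (illustrative, not canonical: the matcher string, the `jq`/`xargs` plumbing, and the exact `dotnet format` invocation are assumptions to adapt to your solution layout):

```json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Edit|Write",
        "hooks": [
          {
            "type": "command",
            "command": "jq -r '.tool_input.file_path // empty' | grep -E '\\.csx?$' | xargs -r dotnet format --include"
          }
        ]
      }
    ]
  }
}
```

The command reads the tool payload from stdin, extracts the edited path, and only formats when it is a C# file; the `dotnet build` and `dotnet test --no-build` hooks follow the same shape with different commands.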
## Stop Hooks
- Run a final `dotnet build` before ending a session with broad C# changes
- Warn on modified `appsettings*.json` files so secrets do not get committed

rules/csharp/patterns.md (new file, 50 lines)

@@ -0,0 +1,50 @@
---
paths:
- "**/*.cs"
- "**/*.csx"
---
# C# Patterns
> This file extends [common/patterns.md](../common/patterns.md) with C#-specific content.
## API Response Pattern
```csharp
public sealed record ApiResponse<T>(
bool Success,
T? Data = default,
string? Error = null,
object? Meta = null);
```
## Repository Pattern
```csharp
public interface IRepository<T>
{
Task<IReadOnlyList<T>> FindAllAsync(CancellationToken cancellationToken);
Task<T?> FindByIdAsync(Guid id, CancellationToken cancellationToken);
Task<T> CreateAsync(T entity, CancellationToken cancellationToken);
Task<T> UpdateAsync(T entity, CancellationToken cancellationToken);
Task DeleteAsync(Guid id, CancellationToken cancellationToken);
}
```
## Options Pattern
Use strongly typed options for config instead of reading raw strings throughout the codebase.
```csharp
public sealed class PaymentsOptions
{
public const string SectionName = "Payments";
public required string BaseUrl { get; init; }
public required string ApiKeySecretName { get; init; }
}
```
## Dependency Injection
- Depend on interfaces at service boundaries
- Keep constructors focused; if a service needs too many dependencies, split responsibilities
- Register lifetimes intentionally: singleton for stateless/shared services, scoped for request data, transient for lightweight pure workers

rules/csharp/security.md (new file, 58 lines)

@@ -0,0 +1,58 @@
---
paths:
- "**/*.cs"
- "**/*.csx"
- "**/*.csproj"
- "**/appsettings*.json"
---
# C# Security
> This file extends [common/security.md](../common/security.md) with C#-specific content.
## Secret Management
- Never hardcode API keys, tokens, or connection strings in source code
- Use environment variables, user secrets for local development, and a secret manager in production
- Keep `appsettings.*.json` free of real credentials
```csharp
// BAD
const string ApiKey = "sk-live-123";
// GOOD
var apiKey = builder.Configuration["OpenAI:ApiKey"]
?? throw new InvalidOperationException("OpenAI:ApiKey is not configured.");
```
## SQL Injection Prevention
- Always use parameterized queries with ADO.NET, Dapper, or EF Core
- Never concatenate user input into SQL strings
- Validate sort fields and filter operators before using dynamic query composition
```csharp
const string sql = "SELECT * FROM Orders WHERE CustomerId = @customerId";
await connection.QueryAsync<Order>(sql, new { customerId });
```
## Input Validation
- Validate DTOs at the application boundary
- Use data annotations, FluentValidation, or explicit guard clauses
- Reject invalid model state before running business logic
## Authentication and Authorization
- Prefer framework auth handlers instead of custom token parsing
- Enforce authorization policies at endpoint or handler boundaries
- Never log raw tokens, passwords, or PII
## Error Handling
- Return safe client-facing messages
- Log detailed exceptions with structured context server-side
- Do not expose stack traces, SQL text, or filesystem paths in API responses
## References
See skill: `security-review` for broader application security review checklists.

rules/csharp/testing.md (new file, 46 lines)

@@ -0,0 +1,46 @@
---
paths:
- "**/*.cs"
- "**/*.csx"
- "**/*.csproj"
---
# C# Testing
> This file extends [common/testing.md](../common/testing.md) with C#-specific content.
## Test Framework
- Prefer **xUnit** for unit and integration tests
- Use **FluentAssertions** for readable assertions
- Use **Moq** or **NSubstitute** for mocking dependencies
- Use **Testcontainers** when integration tests need real infrastructure
## Test Organization
- Mirror `src/` structure under `tests/`
- Separate unit, integration, and end-to-end coverage clearly
- Name tests by behavior, not implementation details
```csharp
public sealed class OrderServiceTests
{
[Fact]
public async Task FindByIdAsync_ReturnsOrder_WhenOrderExists()
{
// Arrange
// Act
// Assert
}
}
```
## ASP.NET Core Integration Tests
- Use `WebApplicationFactory<TEntryPoint>` for API integration coverage
- Test auth, validation, and serialization through HTTP, not by bypassing middleware
## Coverage
- Target 80%+ line coverage
- Focus coverage on domain logic, validation, auth, and failure paths
- Run `dotnet test` in CI with coverage collection enabled where available

rules/rust/coding-style.md (new file, 151 lines)

@@ -0,0 +1,151 @@
---
paths:
- "**/*.rs"
---
# Rust Coding Style
> This file extends [common/coding-style.md](../common/coding-style.md) with Rust-specific content.
## Formatting
- **rustfmt** for enforcement — always run `cargo fmt` before committing
- **clippy** for lints — `cargo clippy -- -D warnings` (treat warnings as errors)
- 4-space indent (rustfmt default)
- Max line width: 100 characters (rustfmt default)
## Immutability
Rust variables are immutable by default — embrace this:
- Use `let` by default; only use `let mut` when mutation is required
- Prefer returning new values over mutating in place
- Use `Cow<'_, T>` when a function may or may not need to allocate
```rust
use std::borrow::Cow;
// GOOD — immutable by default, new value returned
fn normalize(input: &str) -> Cow<'_, str> {
if input.contains(' ') {
Cow::Owned(input.replace(' ', "_"))
} else {
Cow::Borrowed(input)
}
}
// BAD — unnecessary mutation
fn normalize_bad(input: &mut String) {
*input = input.replace(' ', "_");
}
```
## Naming
Follow standard Rust conventions:
- `snake_case` for functions, methods, variables, modules, crates
- `PascalCase` (UpperCamelCase) for types, traits, enums, type parameters
- `SCREAMING_SNAKE_CASE` for constants and statics
- Lifetimes: short lowercase (`'a`, `'de`) — descriptive names for complex cases (`'input`)
## Ownership and Borrowing
- Borrow (`&T`) by default; take ownership only when you need to store or consume
- Never clone to satisfy the borrow checker without understanding the root cause
- Accept `&str` over `String`, `&[T]` over `Vec<T>` in function parameters
- Use `impl Into<String>` for constructors that need to own a `String`
```rust
// GOOD — borrows when ownership isn't needed
fn word_count(text: &str) -> usize {
text.split_whitespace().count()
}
// GOOD — takes ownership in constructor via Into
fn new(name: impl Into<String>) -> Self {
Self { name: name.into() }
}
// BAD — takes String when &str suffices
fn word_count_bad(text: String) -> usize {
text.split_whitespace().count()
}
```
## Error Handling
- Use `Result<T, E>` and `?` for propagation — never `unwrap()` in production code
- **Libraries**: define typed errors with `thiserror`
- **Applications**: use `anyhow` for flexible error context
- Add context with `.with_context(|| format!("failed to ..."))?`
- Reserve `unwrap()` / `expect()` for tests and truly unreachable states
```rust
// GOOD — library error with thiserror
#[derive(Debug, thiserror::Error)]
pub enum ConfigError {
#[error("failed to read config: {0}")]
Io(#[from] std::io::Error),
#[error("invalid config format: {0}")]
Parse(String),
}
// GOOD — application error with anyhow
use anyhow::Context;
fn load_config(path: &str) -> anyhow::Result<Config> {
let content = std::fs::read_to_string(path)
.with_context(|| format!("failed to read {path}"))?;
toml::from_str(&content)
.with_context(|| format!("failed to parse {path}"))
}
```
## Iterators Over Loops
Prefer iterator chains for transformations; use loops for complex control flow:
```rust
// GOOD — declarative and composable
let active_emails: Vec<&str> = users.iter()
.filter(|u| u.is_active)
.map(|u| u.email.as_str())
.collect();
// GOOD — loop for complex logic with early returns
for user in &users {
if let Some(verified) = verify_email(&user.email)? {
send_welcome(&verified)?;
}
}
```
## Module Organization
Organize by domain, not by type:
```text
src/
├── main.rs
├── lib.rs
├── auth/ # Domain module
│ ├── mod.rs
│ ├── token.rs
│ └── middleware.rs
├── orders/ # Domain module
│ ├── mod.rs
│ ├── model.rs
│ └── service.rs
└── db/ # Infrastructure
├── mod.rs
└── pool.rs
```
## Visibility
- Default to private; use `pub(crate)` for internal sharing
- Only mark `pub` what is part of the crate's public API
- Re-export public API from `lib.rs`
## References
See skill: `rust-patterns` for comprehensive Rust idioms and patterns.

rules/rust/hooks.md (new file, 16 lines)

@@ -0,0 +1,16 @@
---
paths:
- "**/*.rs"
- "**/Cargo.toml"
---
# Rust Hooks
> This file extends [common/hooks.md](../common/hooks.md) with Rust-specific content.
## PostToolUse Hooks
Configure in `~/.claude/settings.json`:
- **cargo fmt**: Auto-format `.rs` files after edit
- **cargo clippy**: Run lint checks after editing Rust files
- **cargo check**: Verify compilation after changes (faster than `cargo build`)

rules/rust/patterns.md (new file, 168 lines)

@@ -0,0 +1,168 @@
---
paths:
- "**/*.rs"
---
# Rust Patterns
> This file extends [common/patterns.md](../common/patterns.md) with Rust-specific content.
## Repository Pattern with Traits
Encapsulate data access behind a trait:
```rust
pub trait OrderRepository: Send + Sync {
fn find_by_id(&self, id: u64) -> Result<Option<Order>, StorageError>;
fn find_all(&self) -> Result<Vec<Order>, StorageError>;
fn save(&self, order: &Order) -> Result<Order, StorageError>;
fn delete(&self, id: u64) -> Result<(), StorageError>;
}
```
Concrete implementations handle storage details (Postgres, SQLite, in-memory for tests).
## Service Layer
Business logic in service structs; inject dependencies via constructor:
```rust
pub struct OrderService {
repo: Box<dyn OrderRepository>,
payment: Box<dyn PaymentGateway>,
}
impl OrderService {
pub fn new(repo: Box<dyn OrderRepository>, payment: Box<dyn PaymentGateway>) -> Self {
Self { repo, payment }
}
pub fn place_order(&self, request: CreateOrderRequest) -> anyhow::Result<OrderSummary> {
let order = Order::from(request);
self.payment.charge(order.total())?;
let saved = self.repo.save(&order)?;
Ok(OrderSummary::from(saved))
}
}
```
## Newtype Pattern for Type Safety
Prevent argument mix-ups with distinct wrapper types:
```rust
struct UserId(u64);
struct OrderId(u64);
fn get_order(user: UserId, order: OrderId) -> anyhow::Result<Order> {
// Can't accidentally swap user and order IDs at call sites
todo!()
}
```
## Enum State Machines
Model states as enums — make illegal states unrepresentable:
```rust
enum ConnectionState {
Disconnected,
Connecting { attempt: u32 },
Connected { session_id: String },
Failed { reason: String, retries: u32 },
}
fn handle(state: &ConnectionState) {
match state {
ConnectionState::Disconnected => connect(),
ConnectionState::Connecting { attempt } if *attempt > 3 => abort(),
ConnectionState::Connecting { .. } => wait(),
ConnectionState::Connected { session_id } => use_session(session_id),
ConnectionState::Failed { retries, .. } if *retries < 5 => retry(),
ConnectionState::Failed { reason, .. } => log_failure(reason),
}
}
```
Always match exhaustively — no wildcard `_` for business-critical enums.
## Builder Pattern
Use for structs with many optional parameters:
```rust
pub struct ServerConfig {
host: String,
port: u16,
max_connections: usize,
}
impl ServerConfig {
pub fn builder(host: impl Into<String>, port: u16) -> ServerConfigBuilder {
ServerConfigBuilder {
host: host.into(),
port,
max_connections: 100,
}
}
}
pub struct ServerConfigBuilder {
host: String,
port: u16,
max_connections: usize,
}
impl ServerConfigBuilder {
pub fn max_connections(mut self, n: usize) -> Self {
self.max_connections = n;
self
}
pub fn build(self) -> ServerConfig {
ServerConfig {
host: self.host,
port: self.port,
max_connections: self.max_connections,
}
}
}
```
## Sealed Traits for Extensibility Control
Use a private module to seal a trait, preventing external implementations:
```rust
mod private {
pub trait Sealed {}
}
pub trait Format: private::Sealed {
fn encode(&self, data: &[u8]) -> Vec<u8>;
}
pub struct Json;
impl private::Sealed for Json {}
impl Format for Json {
fn encode(&self, data: &[u8]) -> Vec<u8> { todo!() }
}
```
## API Response Envelope
Consistent API responses using a generic enum:
```rust
#[derive(Debug, serde::Serialize)]
#[serde(tag = "status")]
pub enum ApiResponse<T: serde::Serialize> {
#[serde(rename = "ok")]
Ok { data: T },
#[serde(rename = "error")]
Error { message: String },
}
```
## References
See skill: `rust-patterns` for comprehensive patterns including ownership, traits, generics, concurrency, and async.

rules/rust/security.md (new file, 141 lines)

@@ -0,0 +1,141 @@
---
paths:
- "**/*.rs"
---
# Rust Security
> This file extends [common/security.md](../common/security.md) with Rust-specific content.
## Secrets Management
- Never hardcode API keys, tokens, or credentials in source code
- Use environment variables: `std::env::var("API_KEY")`
- Fail fast if required secrets are missing at startup
- Keep `.env` files in `.gitignore`
```rust
// BAD
const API_KEY: &str = "sk-abc123...";
// GOOD — environment variable with early validation
fn load_api_key() -> anyhow::Result<String> {
std::env::var("PAYMENT_API_KEY")
.context("PAYMENT_API_KEY must be set")
}
```
## SQL Injection Prevention
- Always use parameterized queries — never format user input into SQL strings
- Use query builder or ORM (sqlx, diesel, sea-orm) with bind parameters
```rust
// BAD — SQL injection via format string
let query = format!("SELECT * FROM users WHERE name = '{name}'");
sqlx::query(&query).fetch_one(&pool).await?;
// GOOD — parameterized query with sqlx
// Placeholder syntax varies by backend: Postgres: $1 | MySQL: ? | SQLite: $1
sqlx::query("SELECT * FROM users WHERE name = $1")
.bind(&name)
.fetch_one(&pool)
.await?;
```
## Input Validation
- Validate all user input at system boundaries before processing
- Use the type system to enforce invariants (newtype pattern)
- Parse, don't validate — convert unstructured data to typed structs at the boundary
- Reject invalid input with clear error messages
```rust
// Parse, don't validate — invalid states are unrepresentable
pub struct Email(String);
impl Email {
pub fn parse(input: &str) -> Result<Self, ValidationError> {
let trimmed = input.trim();
let at_pos = trimmed.find('@')
.filter(|&p| p > 0 && p < trimmed.len() - 1)
.ok_or_else(|| ValidationError::InvalidEmail(input.to_string()))?;
let domain = &trimmed[at_pos + 1..];
if trimmed.len() > 254 || !domain.contains('.') {
return Err(ValidationError::InvalidEmail(input.to_string()));
}
// For production use, prefer a validated email crate (e.g., `email_address`)
Ok(Self(trimmed.to_string()))
}
pub fn as_str(&self) -> &str {
&self.0
}
}
```
## Unsafe Code
- Minimize `unsafe` blocks — prefer safe abstractions
- Every `unsafe` block must have a `// SAFETY:` comment explaining the invariant
- Never use `unsafe` to bypass the borrow checker for convenience
- Audit all `unsafe` code during review — it is a red flag without justification
- Prefer safe FFI wrappers around C libraries
```rust
// GOOD — safety comment documents ALL required invariants
let widget: &Widget = {
// SAFETY: `ptr` is non-null, aligned, points to an initialized Widget,
// and no mutable references or mutations exist for its lifetime.
unsafe { &*ptr }
};
// BAD — no safety justification
unsafe { &*ptr }
```
## Dependency Security
- Run `cargo audit` to scan for known CVEs in dependencies
- Run `cargo deny check` for license and advisory compliance
- Use `cargo tree` to audit transitive dependencies
- Keep dependencies updated — set up Dependabot or Renovate
- Minimize dependency count — evaluate before adding new crates
```bash
# Security audit
cargo audit
# Deny advisories, duplicate versions, and restricted licenses
cargo deny check
# Inspect dependency tree
cargo tree
cargo tree -d # Show duplicates only
```
## Error Messages
- Never expose internal paths, stack traces, or database errors in API responses
- Log detailed errors server-side; return generic messages to clients
- Use `tracing` or `log` for structured server-side logging
```rust
// Map errors to appropriate status codes and generic messages
// (Example uses axum; adapt the response type to your framework)
match order_service.find_by_id(id) {
Ok(order) => Ok((StatusCode::OK, Json(order))),
Err(ServiceError::NotFound(_)) => {
tracing::info!(order_id = id, "order not found");
Err((StatusCode::NOT_FOUND, "Resource not found"))
}
Err(e) => {
tracing::error!(order_id = id, error = %e, "unexpected error");
Err((StatusCode::INTERNAL_SERVER_ERROR, "Internal server error"))
}
}
```
## References
See skill: `rust-patterns` for unsafe code guidelines and ownership patterns.
See skill: `security-review` for general security checklists.

rules/rust/testing.md (new file, 154 lines)

@@ -0,0 +1,154 @@
---
paths:
- "**/*.rs"
---
# Rust Testing
> This file extends [common/testing.md](../common/testing.md) with Rust-specific content.
## Test Framework
- **`#[test]`** with `#[cfg(test)]` modules for unit tests
- **rstest** for parameterized tests and fixtures
- **proptest** for property-based testing
- **mockall** for trait-based mocking
- **`#[tokio::test]`** for async tests
## Test Organization
```text
my_crate/
├── src/
│ ├── lib.rs # Unit tests in #[cfg(test)] modules
│ ├── auth/
│ │ └── mod.rs # #[cfg(test)] mod tests { ... }
│ └── orders/
│ └── service.rs # #[cfg(test)] mod tests { ... }
├── tests/ # Integration tests (each file = separate binary)
│ ├── api_test.rs
│ ├── db_test.rs
│ └── common/ # Shared test utilities
│ └── mod.rs
└── benches/ # Criterion benchmarks
└── benchmark.rs
```
Unit tests go inside `#[cfg(test)]` modules in the same file. Integration tests go in `tests/`.
## Unit Test Pattern
```rust
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn creates_user_with_valid_email() {
let user = User::new("Alice", "alice@example.com").unwrap();
assert_eq!(user.name, "Alice");
}
#[test]
fn rejects_invalid_email() {
let result = User::new("Bob", "not-an-email");
assert!(result.is_err());
assert!(result.unwrap_err().to_string().contains("invalid email"));
}
}
```
## Parameterized Tests
```rust
use rstest::rstest;
#[rstest]
#[case("hello", 5)]
#[case("", 0)]
#[case("rust", 4)]
fn test_string_length(#[case] input: &str, #[case] expected: usize) {
assert_eq!(input.len(), expected);
}
```
## Async Tests
```rust
#[tokio::test]
async fn fetches_data_successfully() {
let client = TestClient::new().await;
let result = client.get("/data").await;
assert!(result.is_ok());
}
```
## Mocking with mockall
Define traits in production code; generate mocks in test modules:
```rust
// Production trait — pub so integration tests can import it
pub trait UserRepository {
fn find_by_id(&self, id: u64) -> Option<User>;
}
#[cfg(test)]
mod tests {
use super::*;
use mockall::predicate::eq;
mockall::mock! {
pub Repo {}
impl UserRepository for Repo {
fn find_by_id(&self, id: u64) -> Option<User>;
}
}
#[test]
fn service_returns_user_when_found() {
let mut mock = MockRepo::new();
mock.expect_find_by_id()
.with(eq(42))
.times(1)
.returning(|_| Some(User { id: 42, name: "Alice".into() }));
let service = UserService::new(Box::new(mock));
let user = service.get_user(42).unwrap();
assert_eq!(user.name, "Alice");
}
}
```
## Test Naming
Use descriptive names that explain the scenario:
- `creates_user_with_valid_email()`
- `rejects_order_when_insufficient_stock()`
- `returns_none_when_not_found()`
## Coverage
- Target 80%+ line coverage
- Use **cargo-llvm-cov** for coverage reporting
- Focus on business logic — exclude generated code and FFI bindings
```bash
cargo llvm-cov # Summary
cargo llvm-cov --html # HTML report
cargo llvm-cov --fail-under-lines 80 # Fail if below threshold
```
## Testing Commands
```bash
cargo test # Run all tests
cargo test -- --nocapture # Show println output
cargo test test_name # Run tests matching pattern
cargo test --lib # Unit tests only
cargo test --test api_test # Specific integration test (tests/api_test.rs)
cargo test --doc # Doc tests only
```
## References
See skill: `rust-testing` for comprehensive testing patterns including property-based testing, fixtures, and benchmarking with Criterion.


@@ -26,7 +26,7 @@
 "properties": {
   "id": {
     "type": "string",
-    "pattern": "^(baseline|lang|framework|capability):[a-z0-9-]+$"
+    "pattern": "^(baseline|lang|framework|capability|agent|skill):[a-z0-9-]+$"
   },
   "family": {
     "type": "string",
@@ -34,7 +34,9 @@
       "baseline",
       "language",
       "framework",
-      "capability"
+      "capability",
+      "agent",
+      "skill"
     ]
   },
   "description": {


@@ -0,0 +1,280 @@
#!/usr/bin/env node
/**
* Governance Event Capture Hook
*
* PreToolUse/PostToolUse hook that detects governance-relevant events
* and writes them to the governance_events table in the state store.
*
* Captured event types:
* - secret_detected: Hardcoded secrets in tool input/output
* - policy_violation: Actions that violate configured policies
* - security_finding: Security-relevant tool invocations
* - approval_requested: Operations requiring explicit approval
*
* Enable: Set ECC_GOVERNANCE_CAPTURE=1
* Configure session: Set ECC_SESSION_ID for session correlation
*/
'use strict';
const crypto = require('crypto');
const MAX_STDIN = 1024 * 1024;
// Patterns that indicate potential hardcoded secrets
const SECRET_PATTERNS = [
{ name: 'aws_key', pattern: /(?:AKIA|ASIA)[A-Z0-9]{16}/i },
{ name: 'generic_secret', pattern: /(?:secret|password|token|api[_-]?key)\s*[:=]\s*["'][^"']{8,}/i },
{ name: 'private_key', pattern: /-----BEGIN (?:RSA |EC |DSA )?PRIVATE KEY-----/ },
{ name: 'jwt', pattern: /eyJ[A-Za-z0-9_-]{10,}\.eyJ[A-Za-z0-9_-]{10,}\.[A-Za-z0-9_-]{10,}/ },
{ name: 'github_token', pattern: /gh[pousr]_[A-Za-z0-9_]{36,}/ },
];
// Tool names that represent security-relevant operations
const SECURITY_RELEVANT_TOOLS = new Set([
'Bash', // Could execute arbitrary commands
]);
// Commands that require governance approval
const APPROVAL_COMMANDS = [
/git\s+push\s+.*--force/,
/git\s+reset\s+--hard/,
/rm\s+-rf?\s/,
/DROP\s+(?:TABLE|DATABASE)/i,
/DELETE\s+FROM\s+\w+\s*(?:;|$)/i,
];
// File patterns that indicate policy-sensitive paths
const SENSITIVE_PATHS = [
/\.env(?:\.|$)/,
/credentials/i,
/secrets?\./i,
/\.pem$/,
/\.key$/,
/id_rsa/,
];
/**
* Generate a unique event ID.
*/
function generateEventId() {
return `gov-${Date.now()}-${crypto.randomBytes(4).toString('hex')}`;
}
/**
* Scan text content for hardcoded secrets.
 * Returns array of { name } objects, one per matched secret pattern.
*/
function detectSecrets(text) {
if (!text || typeof text !== 'string') return [];
const findings = [];
for (const { name, pattern } of SECRET_PATTERNS) {
if (pattern.test(text)) {
findings.push({ name });
}
}
return findings;
}
/**
* Check if a command requires governance approval.
*/
function detectApprovalRequired(command) {
if (!command || typeof command !== 'string') return [];
const findings = [];
for (const pattern of APPROVAL_COMMANDS) {
if (pattern.test(command)) {
findings.push({ pattern: pattern.source });
}
}
return findings;
}
/**
* Check if a file path is policy-sensitive.
*/
function detectSensitivePath(filePath) {
if (!filePath || typeof filePath !== 'string') return false;
return SENSITIVE_PATHS.some(pattern => pattern.test(filePath));
}
/**
* Analyze a hook input payload and return governance events to capture.
*
* @param {Object} input - Parsed hook input (tool_name, tool_input, tool_output)
* @param {Object} [context] - Additional context (sessionId, hookPhase)
* @returns {Array<Object>} Array of governance event objects
*/
function analyzeForGovernanceEvents(input, context = {}) {
const events = [];
const toolName = input.tool_name || '';
const toolInput = input.tool_input || {};
const toolOutput = typeof input.tool_output === 'string' ? input.tool_output : '';
const sessionId = context.sessionId || null;
const hookPhase = context.hookPhase || 'unknown';
// 1. Secret detection in tool input content
const inputText = typeof toolInput === 'object'
? JSON.stringify(toolInput)
: String(toolInput);
const inputSecrets = detectSecrets(inputText);
const outputSecrets = detectSecrets(toolOutput);
const allSecrets = [...inputSecrets, ...outputSecrets];
if (allSecrets.length > 0) {
events.push({
id: generateEventId(),
sessionId,
eventType: 'secret_detected',
payload: {
toolName,
hookPhase,
secretTypes: allSecrets.map(s => s.name),
location: inputSecrets.length > 0 ? 'input' : 'output',
severity: 'critical',
},
resolvedAt: null,
resolution: null,
});
}
// 2. Approval-required commands (Bash only)
if (toolName === 'Bash') {
const command = toolInput.command || '';
const approvalFindings = detectApprovalRequired(command);
if (approvalFindings.length > 0) {
events.push({
id: generateEventId(),
sessionId,
eventType: 'approval_requested',
payload: {
toolName,
hookPhase,
command: command.slice(0, 200),
matchedPatterns: approvalFindings.map(f => f.pattern),
severity: 'high',
},
resolvedAt: null,
resolution: null,
});
}
}
// 3. Policy violation: writing to sensitive paths
const filePath = toolInput.file_path || toolInput.path || '';
if (filePath && detectSensitivePath(filePath)) {
events.push({
id: generateEventId(),
sessionId,
eventType: 'policy_violation',
payload: {
toolName,
hookPhase,
filePath: filePath.slice(0, 200),
reason: 'sensitive_file_access',
severity: 'warning',
},
resolvedAt: null,
resolution: null,
});
}
// 4. Security-relevant tool usage tracking
if (SECURITY_RELEVANT_TOOLS.has(toolName) && hookPhase === 'post') {
const command = toolInput.command || '';
const hasElevated = /sudo\s/.test(command) || /chmod\s/.test(command) || /chown\s/.test(command);
if (hasElevated) {
events.push({
id: generateEventId(),
sessionId,
eventType: 'security_finding',
payload: {
toolName,
hookPhase,
command: command.slice(0, 200),
reason: 'elevated_privilege_command',
severity: 'medium',
},
resolvedAt: null,
resolution: null,
});
}
}
return events;
}
/**
* Core hook logic — exported so run-with-flags.js can call directly.
*
* @param {string} rawInput - Raw JSON string from stdin
* @returns {string} The original input (pass-through)
*/
function run(rawInput) {
// Gate on feature flag
if (String(process.env.ECC_GOVERNANCE_CAPTURE || '').toLowerCase() !== '1') {
return rawInput;
}
try {
const input = JSON.parse(rawInput);
const sessionId = process.env.ECC_SESSION_ID || null;
const hookPhase = process.env.CLAUDE_HOOK_EVENT_NAME || 'unknown';
const events = analyzeForGovernanceEvents(input, {
sessionId,
hookPhase: hookPhase.startsWith('Pre') ? 'pre' : 'post',
});
if (events.length > 0) {
// Write events to stderr as JSON-lines for the caller to capture.
// The state store write is async and handled by a separate process
// to avoid blocking the hook pipeline.
for (const event of events) {
process.stderr.write(
`[governance] ${JSON.stringify(event)}\n`
);
}
}
} catch {
// Silently ignore parse errors — never block the tool pipeline.
}
return rawInput;
}
// ── stdin entry point ────────────────────────────────
if (require.main === module) {
let raw = '';
process.stdin.setEncoding('utf8');
process.stdin.on('data', chunk => {
if (raw.length < MAX_STDIN) {
const remaining = MAX_STDIN - raw.length;
raw += chunk.substring(0, remaining);
}
});
process.stdin.on('end', () => {
const result = run(raw);
process.stdout.write(result);
});
}
module.exports = {
APPROVAL_COMMANDS,
SECRET_PATTERNS,
SECURITY_RELEVANT_TOOLS,
SENSITIVE_PATHS,
analyzeForGovernanceEvents,
detectApprovalRequired,
detectSecrets,
detectSensitivePath,
generateEventId,
run,
};
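
The detection flow above can be exercised standalone. This sketch re-declares two of the `SECRET_PATTERNS` entries rather than importing the hook module, so treat it as an illustration of the scan logic, not the shipped code:

```javascript
// Illustrative re-declaration of two patterns from SECRET_PATTERNS.
const SECRET_PATTERNS = [
  { name: 'aws_key', pattern: /(?:AKIA|ASIA)[A-Z0-9]{16}/i },
  { name: 'private_key', pattern: /-----BEGIN (?:RSA |EC |DSA )?PRIVATE KEY-----/ },
];

// Same shape as the hook's detectSecrets: one { name } finding per matched pattern.
function detectSecrets(text) {
  if (!text || typeof text !== 'string') return [];
  return SECRET_PATTERNS
    .filter(({ pattern }) => pattern.test(text))
    .map(({ name }) => ({ name }));
}

// A sample PostToolUse payload with an AWS-style key in the tool input.
const payload = {
  tool_name: 'Write',
  tool_input: { file_path: 'config.js', content: 'const key = "AKIAIOSFODNN7EXAMPLE";' },
};

const findings = detectSecrets(JSON.stringify(payload.tool_input));
console.log(findings.map(f => f.name)); // [ 'aws_key' ]
```

When any finding comes back, `analyzeForGovernanceEvents` wraps it in a `secret_detected` event with critical severity, as in the hook source above.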


@@ -0,0 +1,588 @@
#!/usr/bin/env node
'use strict';
/**
* MCP health-check hook.
*
* Compatible with Claude Code's existing hook events:
* - PreToolUse: probe MCP server health before MCP tool execution
* - PostToolUseFailure: mark unhealthy servers, attempt reconnect, and re-probe
*
* The hook persists health state outside the conversation context so it
* survives compaction and later turns.
*/
const fs = require('fs');
const os = require('os');
const path = require('path');
const http = require('http');
const https = require('https');
const { spawn, spawnSync } = require('child_process');
const MAX_STDIN = 1024 * 1024;
const DEFAULT_TTL_MS = 2 * 60 * 1000;
const DEFAULT_TIMEOUT_MS = 5000;
const DEFAULT_BACKOFF_MS = 30 * 1000;
const MAX_BACKOFF_MS = 10 * 60 * 1000;
const HEALTHY_HTTP_CODES = new Set([200, 201, 202, 204, 301, 302, 303, 304, 307, 308, 405]);
const RECONNECT_STATUS_CODES = new Set([401, 403, 429, 503]);
const FAILURE_PATTERNS = [
{ code: 401, pattern: /\b401\b|unauthori[sz]ed|auth(?:entication)?\s+(?:failed|expired|invalid)/i },
{ code: 403, pattern: /\b403\b|forbidden|permission denied/i },
{ code: 429, pattern: /\b429\b|rate limit|too many requests/i },
{ code: 503, pattern: /\b503\b|service unavailable|overloaded|temporarily unavailable/i },
{ code: 'transport', pattern: /ECONNREFUSED|ENOTFOUND|EAI_AGAIN|timed? out|socket hang up|connection (?:failed|lost|reset|closed)/i }
];
function envNumber(name, fallback) {
const value = Number(process.env[name]);
return Number.isFinite(value) && value >= 0 ? value : fallback;
}
function stateFilePath() {
if (process.env.ECC_MCP_HEALTH_STATE_PATH) {
return path.resolve(process.env.ECC_MCP_HEALTH_STATE_PATH);
}
return path.join(os.homedir(), '.claude', 'mcp-health-cache.json');
}
function configPaths() {
if (process.env.ECC_MCP_CONFIG_PATH) {
return process.env.ECC_MCP_CONFIG_PATH
.split(path.delimiter)
.map(entry => entry.trim())
.filter(Boolean)
.map(entry => path.resolve(entry));
}
const cwd = process.cwd();
const home = os.homedir();
return [
path.join(cwd, '.claude.json'),
path.join(cwd, '.claude', 'settings.json'),
path.join(home, '.claude.json'),
path.join(home, '.claude', 'settings.json')
];
}
function readJsonFile(filePath) {
try {
return JSON.parse(fs.readFileSync(filePath, 'utf8'));
} catch {
return null;
}
}
function loadState(filePath) {
const state = readJsonFile(filePath);
if (!state || typeof state !== 'object' || Array.isArray(state)) {
return { version: 1, servers: {} };
}
if (!state.servers || typeof state.servers !== 'object' || Array.isArray(state.servers)) {
state.servers = {};
}
return state;
}
function saveState(filePath, state) {
try {
fs.mkdirSync(path.dirname(filePath), { recursive: true });
fs.writeFileSync(filePath, JSON.stringify(state, null, 2));
} catch {
// Never block the hook on state persistence errors.
}
}
function readRawStdin() {
return new Promise(resolve => {
let raw = '';
process.stdin.setEncoding('utf8');
process.stdin.on('data', chunk => {
if (raw.length < MAX_STDIN) {
const remaining = MAX_STDIN - raw.length;
raw += chunk.substring(0, remaining);
}
});
process.stdin.on('end', () => resolve(raw));
process.stdin.on('error', () => resolve(raw));
});
}
function safeParse(raw) {
try {
return raw.trim() ? JSON.parse(raw) : {};
} catch {
return {};
}
}
function extractMcpTarget(input) {
const toolName = String(input.tool_name || input.name || '');
const explicitServer = input.server
|| input.mcp_server
|| input.tool_input?.server
|| input.tool_input?.mcp_server
|| input.tool_input?.connector
|| null;
const explicitTool = input.tool
|| input.mcp_tool
|| input.tool_input?.tool
|| input.tool_input?.mcp_tool
|| null;
if (explicitServer) {
return {
server: String(explicitServer),
tool: explicitTool ? String(explicitTool) : toolName
};
}
if (!toolName.startsWith('mcp__')) {
return null;
}
const segments = toolName.slice(5).split('__');
if (segments.length < 2 || !segments[0]) {
return null;
}
return {
server: segments[0],
tool: segments.slice(1).join('__')
};
}
function resolveServerConfig(serverName) {
for (const filePath of configPaths()) {
const data = readJsonFile(filePath);
const server = data?.mcpServers?.[serverName]
|| data?.mcp_servers?.[serverName]
|| null;
if (server && typeof server === 'object' && !Array.isArray(server)) {
return {
config: server,
source: filePath
};
}
}
return null;
}
function markHealthy(state, serverName, now, details = {}) {
state.servers[serverName] = {
status: 'healthy',
checkedAt: now,
expiresAt: now + envNumber('ECC_MCP_HEALTH_TTL_MS', DEFAULT_TTL_MS),
failureCount: 0,
lastError: null,
lastFailureCode: null,
nextRetryAt: now,
lastRestoredAt: now,
...details
};
}
function markUnhealthy(state, serverName, now, failureCode, errorMessage) {
const previous = state.servers[serverName] || {};
const failureCount = Number(previous.failureCount || 0) + 1;
const backoffBase = envNumber('ECC_MCP_HEALTH_BACKOFF_MS', DEFAULT_BACKOFF_MS);
const nextRetryDelay = Math.min(backoffBase * (2 ** Math.max(failureCount - 1, 0)), MAX_BACKOFF_MS);
state.servers[serverName] = {
status: 'unhealthy',
checkedAt: now,
expiresAt: now,
failureCount,
lastError: errorMessage || null,
lastFailureCode: failureCode || null,
nextRetryAt: now + nextRetryDelay,
lastRestoredAt: previous.lastRestoredAt || null
};
}
function failureSummary(input) {
const output = input.tool_output;
const pieces = [
typeof input.error === 'string' ? input.error : '',
typeof input.message === 'string' ? input.message : '',
typeof input.tool_response === 'string' ? input.tool_response : '',
typeof output === 'string' ? output : '',
typeof output?.output === 'string' ? output.output : '',
typeof output?.stderr === 'string' ? output.stderr : '',
typeof input.tool_input?.error === 'string' ? input.tool_input.error : ''
].filter(Boolean);
return pieces.join('\n');
}
function detectFailureCode(text) {
const summary = String(text || '');
for (const entry of FAILURE_PATTERNS) {
if (entry.pattern.test(summary)) {
return entry.code;
}
}
return null;
}
function requestHttp(urlString, headers, timeoutMs) {
return new Promise(resolve => {
let settled = false;
let timedOut = false;
const url = new URL(urlString);
const client = url.protocol === 'https:' ? https : http;
const req = client.request(
url,
{
method: 'GET',
headers,
},
res => {
if (settled) return;
settled = true;
res.resume();
resolve({
ok: HEALTHY_HTTP_CODES.has(res.statusCode),
statusCode: res.statusCode,
reason: `HTTP ${res.statusCode}`
});
}
);
req.setTimeout(timeoutMs, () => {
timedOut = true;
req.destroy(new Error('timeout'));
});
req.on('error', error => {
if (settled) return;
settled = true;
resolve({
ok: false,
statusCode: null,
reason: timedOut ? 'request timed out' : error.message
});
});
req.end();
});
}
function probeCommandServer(serverName, config) {
return new Promise(resolve => {
const command = config.command;
const args = Array.isArray(config.args) ? config.args.map(arg => String(arg)) : [];
const timeoutMs = envNumber('ECC_MCP_HEALTH_TIMEOUT_MS', DEFAULT_TIMEOUT_MS);
const mergedEnv = {
...process.env,
...(config.env && typeof config.env === 'object' && !Array.isArray(config.env) ? config.env : {})
};
let stderr = '';
let done = false;
function finish(result) {
if (done) return;
done = true;
resolve(result);
}
let child;
try {
child = spawn(command, args, {
env: mergedEnv,
cwd: process.cwd(),
stdio: ['pipe', 'ignore', 'pipe']
});
} catch (error) {
finish({
ok: false,
statusCode: null,
reason: error.message
});
return;
}
child.stderr.on('data', chunk => {
if (stderr.length < 4000) {
const remaining = 4000 - stderr.length;
stderr += String(chunk).slice(0, remaining);
}
});
child.on('error', error => {
finish({
ok: false,
statusCode: null,
reason: error.message
});
});
child.on('exit', (code, signal) => {
finish({
ok: false,
statusCode: code,
reason: stderr.trim() || `process exited before handshake (${signal || code || 'unknown'})`
});
});
const timer = setTimeout(() => {
try {
child.kill('SIGTERM');
} catch {
// ignore
}
setTimeout(() => {
try {
child.kill('SIGKILL');
} catch {
// ignore
}
}, 200).unref?.();
finish({
ok: true,
statusCode: null,
reason: `${serverName} accepted a new stdio process`
});
}, timeoutMs);
if (typeof timer.unref === 'function') {
timer.unref();
}
});
}
async function probeServer(serverName, resolvedConfig) {
const config = resolvedConfig.config;
if (config.type === 'http' || config.url) {
const result = await requestHttp(config.url, config.headers || {}, envNumber('ECC_MCP_HEALTH_TIMEOUT_MS', DEFAULT_TIMEOUT_MS));
return {
ok: result.ok,
failureCode: RECONNECT_STATUS_CODES.has(result.statusCode) ? result.statusCode : null,
reason: result.reason,
source: resolvedConfig.source
};
}
if (config.command) {
const result = await probeCommandServer(serverName, config);
return {
ok: result.ok,
failureCode: RECONNECT_STATUS_CODES.has(result.statusCode) ? result.statusCode : null,
reason: result.reason,
source: resolvedConfig.source
};
}
return {
ok: false,
failureCode: null,
reason: 'unsupported MCP server config',
source: resolvedConfig.source
};
}
function reconnectCommand(serverName) {
const key = `ECC_MCP_RECONNECT_${String(serverName).toUpperCase().replace(/[^A-Z0-9]/g, '_')}`;
const command = process.env[key] || process.env.ECC_MCP_RECONNECT_COMMAND || '';
if (!command.trim()) {
return null;
}
return command.includes('{server}')
? command.replace(/\{server\}/g, serverName)
: command;
}
function attemptReconnect(serverName) {
const command = reconnectCommand(serverName);
if (!command) {
return { attempted: false, success: false, reason: 'no reconnect command configured' };
}
const result = spawnSync(command, {
shell: true,
env: process.env,
cwd: process.cwd(),
encoding: 'utf8',
timeout: envNumber('ECC_MCP_RECONNECT_TIMEOUT_MS', DEFAULT_TIMEOUT_MS)
});
if (result.error) {
return { attempted: true, success: false, reason: result.error.message };
}
if (result.status !== 0) {
return {
attempted: true,
success: false,
reason: (result.stderr || result.stdout || `reconnect exited ${result.status}`).trim()
};
}
return { attempted: true, success: true, reason: 'reconnect command completed' };
}
function shouldFailOpen() {
return /^(1|true|yes)$/i.test(String(process.env.ECC_MCP_HEALTH_FAIL_OPEN || ''));
}
function emitLogs(logs) {
for (const line of logs) {
process.stderr.write(`${line}\n`);
}
}
async function handlePreToolUse(rawInput, input, target, statePathValue, now) {
const logs = [];
const state = loadState(statePathValue);
const previous = state.servers[target.server] || {};
if (previous.status === 'healthy' && Number(previous.expiresAt || 0) > now) {
return { rawInput, exitCode: 0, logs };
}
if (previous.status === 'unhealthy' && Number(previous.nextRetryAt || 0) > now) {
logs.push(
`[MCPHealthCheck] ${target.server} is marked unhealthy until ${new Date(previous.nextRetryAt).toISOString()}; skipping ${target.tool || 'tool'}`
);
return { rawInput, exitCode: shouldFailOpen() ? 0 : 2, logs };
}
const resolvedConfig = resolveServerConfig(target.server);
if (!resolvedConfig) {
logs.push(`[MCPHealthCheck] No MCP config found for ${target.server}; skipping preflight probe`);
return { rawInput, exitCode: 0, logs };
}
const probe = await probeServer(target.server, resolvedConfig);
if (probe.ok) {
markHealthy(state, target.server, now, { source: resolvedConfig.source });
saveState(statePathValue, state);
if (previous.status === 'unhealthy') {
logs.push(`[MCPHealthCheck] ${target.server} connection restored`);
}
return { rawInput, exitCode: 0, logs };
}
let reconnect = { attempted: false, success: false, reason: 'probe failed' };
if (probe.failureCode || previous.status === 'unhealthy') {
reconnect = attemptReconnect(target.server);
if (reconnect.success) {
const reprobe = await probeServer(target.server, resolvedConfig);
if (reprobe.ok) {
markHealthy(state, target.server, now, {
source: resolvedConfig.source,
restoredBy: 'reconnect-command'
});
saveState(statePathValue, state);
logs.push(`[MCPHealthCheck] ${target.server} connection restored after reconnect`);
return { rawInput, exitCode: 0, logs };
}
probe.reason = `${probe.reason}; reconnect reprobe failed: ${reprobe.reason}`;
}
}
markUnhealthy(state, target.server, now, probe.failureCode, probe.reason);
saveState(statePathValue, state);
const reconnectSuffix = reconnect.attempted
? ` Reconnect attempt: ${reconnect.success ? 'ok' : reconnect.reason}.`
: '';
logs.push(
`[MCPHealthCheck] ${target.server} is unavailable (${probe.reason}). Blocking ${target.tool || 'tool'} so Claude can fall back to non-MCP tools.${reconnectSuffix}`
);
return { rawInput, exitCode: shouldFailOpen() ? 0 : 2, logs };
}
async function handlePostToolUseFailure(rawInput, input, target, statePathValue, now) {
const logs = [];
const summary = failureSummary(input);
const failureCode = detectFailureCode(summary);
if (!failureCode) {
return { rawInput, exitCode: 0, logs };
}
const state = loadState(statePathValue);
markUnhealthy(state, target.server, now, failureCode, summary.slice(0, 500));
saveState(statePathValue, state);
logs.push(`[MCPHealthCheck] ${target.server} reported ${failureCode}; marking server unhealthy and attempting reconnect`);
const reconnect = attemptReconnect(target.server);
if (!reconnect.attempted) {
logs.push(`[MCPHealthCheck] ${target.server} reconnect skipped: ${reconnect.reason}`);
return { rawInput, exitCode: 0, logs };
}
if (!reconnect.success) {
logs.push(`[MCPHealthCheck] ${target.server} reconnect failed: ${reconnect.reason}`);
return { rawInput, exitCode: 0, logs };
}
const resolvedConfig = resolveServerConfig(target.server);
if (!resolvedConfig) {
logs.push(`[MCPHealthCheck] ${target.server} reconnect completed but no config was available for a follow-up probe`);
return { rawInput, exitCode: 0, logs };
}
const reprobe = await probeServer(target.server, resolvedConfig);
if (!reprobe.ok) {
logs.push(`[MCPHealthCheck] ${target.server} reconnect command ran, but health probe still failed: ${reprobe.reason}`);
return { rawInput, exitCode: 0, logs };
}
const refreshed = loadState(statePathValue);
markHealthy(refreshed, target.server, now, {
source: resolvedConfig.source,
restoredBy: 'post-failure-reconnect'
});
saveState(statePathValue, refreshed);
logs.push(`[MCPHealthCheck] ${target.server} connection restored`);
return { rawInput, exitCode: 0, logs };
}
async function main() {
const rawInput = await readRawStdin();
const input = safeParse(rawInput);
const target = extractMcpTarget(input);
if (!target) {
process.stdout.write(rawInput);
process.exit(0);
return;
}
const eventName = process.env.CLAUDE_HOOK_EVENT_NAME || 'PreToolUse';
const now = Date.now();
const statePathValue = stateFilePath();
const result = eventName === 'PostToolUseFailure'
? await handlePostToolUseFailure(rawInput, input, target, statePathValue, now)
: await handlePreToolUse(rawInput, input, target, statePathValue, now);
emitLogs(result.logs);
process.stdout.write(result.rawInput);
process.exit(result.exitCode);
}
main().catch(error => {
process.stderr.write(`[MCPHealthCheck] Unexpected error: ${error.message}\n`);
process.exit(0);
});
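As a standalone illustration of the retry schedule implemented by `markUnhealthy()` above, this sketch reproduces the delay formula with the hook's default constants (the `nextRetryDelay` helper name is introduced here for illustration only):

```javascript
// Standalone sketch of the backoff schedule from markUnhealthy():
// the delay doubles per consecutive failure, capped at MAX_BACKOFF_MS.
const DEFAULT_BACKOFF_MS = 30 * 1000;
const MAX_BACKOFF_MS = 10 * 60 * 1000;

function nextRetryDelay(failureCount, backoffBase = DEFAULT_BACKOFF_MS) {
  return Math.min(backoffBase * (2 ** Math.max(failureCount - 1, 0)), MAX_BACKOFF_MS);
}

// First failure waits 30s, then 60s, 120s, ... capped at 10 minutes.
console.log([1, 2, 3, 4, 5, 6].map(n => nextRetryDelay(n) / 1000));
```

With the defaults, six consecutive failures produce delays of 30, 60, 120, 240, 480, and 600 seconds, after which the cap holds the delay at 10 minutes.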


@@ -21,6 +21,7 @@ const {
   readFile,
   writeFile,
   runCommand,
+  stripAnsi,
   log
 } = require('../lib/utils');
@@ -58,8 +59,9 @@ function extractSessionSummary(transcriptPath) {
       : Array.isArray(rawContent)
         ? rawContent.map(c => (c && c.text) || '').join(' ')
         : '';
-    if (text.trim()) {
-      userMessages.push(text.trim().slice(0, 200));
+    const cleaned = stripAnsi(text).trim();
+    if (cleaned) {
+      userMessages.push(cleaned.slice(0, 200));
     }
   }


@@ -15,6 +15,7 @@ const {
   findFiles,
   ensureDir,
   readFile,
+  stripAnsi,
   log,
   output
 } = require('../lib/utils');
@@ -39,7 +40,7 @@ async function main() {
   log(`[SessionStart] Latest: ${latest.path}`);
   // Read and inject the latest session content into Claude's context
-  const content = readFile(latest.path);
+  const content = stripAnsi(readFile(latest.path));
   if (content && !content.includes('[Session context goes here]')) {
     // Only inject if the session has actual content (not the blank template)
     output(`Previous session summary:\n${content}`);


@@ -0,0 +1,244 @@
'use strict';
const fs = require('fs');
const path = require('path');
/**
* Parse YAML frontmatter from a markdown string.
* Returns { frontmatter: {}, body: string }.
*/
function parseFrontmatter(content) {
const match = content.match(/^---\r?\n([\s\S]*?)\r?\n---(?:\r?\n([\s\S]*))?$/);
if (!match) {
return { frontmatter: {}, body: content };
}
const frontmatter = {};
for (const line of match[1].split('\n')) {
const colonIdx = line.indexOf(':');
if (colonIdx === -1) continue;
const key = line.slice(0, colonIdx).trim();
let value = line.slice(colonIdx + 1).trim();
// Handle JSON arrays (e.g. tools: ["Read", "Grep"])
if (value.startsWith('[') && value.endsWith(']')) {
try {
value = JSON.parse(value);
} catch {
// keep as string
}
}
// Strip surrounding quotes
if (typeof value === 'string' && value.startsWith('"') && value.endsWith('"')) {
value = value.slice(1, -1);
}
frontmatter[key] = value;
}
return { frontmatter, body: match[2] || '' };
}
/**
* Extract the first meaningful paragraph from agent body as a summary.
* Skips headings, list items, code blocks, and table rows.
*/
function extractSummary(body, maxSentences = 1) {
const lines = body.split('\n');
const paragraphs = [];
let current = [];
let inCodeBlock = false;
for (const line of lines) {
const trimmed = line.trim();
// Track fenced code blocks
if (trimmed.startsWith('```')) {
inCodeBlock = !inCodeBlock;
continue;
}
if (inCodeBlock) continue;
if (trimmed === '') {
if (current.length > 0) {
paragraphs.push(current.join(' '));
current = [];
}
continue;
}
// Skip headings, list items (bold, plain, asterisk), numbered lists, table rows
if (
trimmed.startsWith('#') ||
trimmed.startsWith('- ') ||
trimmed.startsWith('* ') ||
/^\d+\.\s/.test(trimmed) ||
trimmed.startsWith('|')
) {
if (current.length > 0) {
paragraphs.push(current.join(' '));
current = [];
}
continue;
}
current.push(trimmed);
}
if (current.length > 0) {
paragraphs.push(current.join(' '));
}
const firstParagraph = paragraphs.find(p => p.length > 0);
if (!firstParagraph) return '';
const sentences = firstParagraph.match(/[^.!?]+[.!?]+/g) || [firstParagraph];
return sentences.slice(0, maxSentences).map(s => s.trim()).join(' ').trim();
}
/**
* Load and parse a single agent file.
*/
function loadAgent(filePath) {
const content = fs.readFileSync(filePath, 'utf8');
const { frontmatter, body } = parseFrontmatter(content);
const fileName = path.basename(filePath, '.md');
return {
fileName,
name: frontmatter.name || fileName,
description: frontmatter.description || '',
tools: Array.isArray(frontmatter.tools) ? frontmatter.tools : [],
model: frontmatter.model || 'sonnet',
body,
byteSize: Buffer.byteLength(content, 'utf8'),
};
}
/**
* Load all agents from a directory.
*/
function loadAgents(agentsDir) {
if (!fs.existsSync(agentsDir)) return [];
return fs.readdirSync(agentsDir)
.filter(f => f.endsWith('.md'))
.sort()
.map(f => loadAgent(path.join(agentsDir, f)));
}
/**
* Compress an agent to catalog entry (metadata only).
*/
function compressToCatalog(agent) {
return {
name: agent.name,
description: agent.description,
tools: agent.tools,
model: agent.model,
};
}
/**
* Compress an agent to summary entry (metadata + first paragraph).
*/
function compressToSummary(agent) {
return {
...compressToCatalog(agent),
summary: extractSummary(agent.body),
};
}
const allowedModes = ['catalog', 'summary', 'full'];
/**
* Build a compressed catalog from a directory of agents.
*
* Modes:
* - 'catalog': name, description, tools, model only (~2-3k tokens for 27 agents)
* - 'summary': catalog + first paragraph summary (~4-5k tokens)
* - 'full': no compression, full body included
*
* Returns { agents: [], stats: { totalAgents, originalBytes, compressedBytes, compressedTokenEstimate, mode } }
*/
function buildAgentCatalog(agentsDir, options = {}) {
const mode = options.mode || 'catalog';
if (!allowedModes.includes(mode)) {
throw new Error(`Invalid mode "${mode}". Allowed modes: ${allowedModes.join(', ')}`);
}
const filter = options.filter || null;
let agents = loadAgents(agentsDir);
if (typeof filter === 'function') {
agents = agents.filter(filter);
}
const originalBytes = agents.reduce((sum, a) => sum + a.byteSize, 0);
let compressed;
if (mode === 'catalog') {
compressed = agents.map(compressToCatalog);
} else if (mode === 'summary') {
compressed = agents.map(compressToSummary);
} else {
compressed = agents.map(a => ({
name: a.name,
description: a.description,
tools: a.tools,
model: a.model,
body: a.body,
}));
}
const compressedJson = JSON.stringify(compressed);
// Rough token estimate: ~4 chars per token for English text
const compressedTokenEstimate = Math.ceil(compressedJson.length / 4);
return {
agents: compressed,
stats: {
totalAgents: agents.length,
originalBytes,
compressedBytes: Buffer.byteLength(compressedJson, 'utf8'),
compressedTokenEstimate,
mode,
},
};
}
/**
* Lazy-load a single agent's full content by name.
* Returns null if not found.
*/
function lazyLoadAgent(agentsDir, agentName) {
// Validate agentName: only allow alphanumeric, hyphen, underscore
if (!/^[\w-]+$/.test(agentName)) {
return null;
}
const filePath = path.resolve(agentsDir, `${agentName}.md`);
// Verify the resolved path is still within agentsDir
const resolvedAgentsDir = path.resolve(agentsDir);
if (!filePath.startsWith(resolvedAgentsDir + path.sep)) {
return null;
}
if (!fs.existsSync(filePath)) return null;
return loadAgent(filePath);
}
module.exports = {
buildAgentCatalog,
compressToCatalog,
compressToSummary,
extractSummary,
lazyLoadAgent,
loadAgent,
loadAgents,
parseFrontmatter,
};
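For a concrete picture of the frontmatter format `loadAgent()` consumes, here is a minimal standalone sketch that duplicates the parsing regex from `parseFrontmatter()` above (scalar values only; the JSON-array and quote handling is omitted, and the sample agent text is invented for illustration):

```javascript
// Minimal standalone sketch of the frontmatter parsing above (same regex).
function parseFrontmatter(content) {
  const match = content.match(/^---\r?\n([\s\S]*?)\r?\n---(?:\r?\n([\s\S]*))?$/);
  if (!match) return { frontmatter: {}, body: content };
  const frontmatter = {};
  for (const line of match[1].split('\n')) {
    const colonIdx = line.indexOf(':');
    if (colonIdx === -1) continue;
    frontmatter[line.slice(0, colonIdx).trim()] = line.slice(colonIdx + 1).trim();
  }
  return { frontmatter, body: match[2] || '' };
}

// Hypothetical agent file content, for illustration only.
const agent = '---\nname: flutter-reviewer\nmodel: sonnet\n---\nReviews Flutter code.';
const { frontmatter, body } = parseFrontmatter(agent);
console.log(frontmatter.name, '|', frontmatter.model, '|', body);
```

Anything before the closing `---` becomes key/value metadata; everything after it is the agent body that `extractSummary()` later condenses.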

scripts/lib/inspection.js (new file, 212 lines)

@@ -0,0 +1,212 @@
'use strict';
const DEFAULT_FAILURE_THRESHOLD = 3;
const DEFAULT_WINDOW_SIZE = 50;
const FAILURE_OUTCOMES = new Set(['failure', 'failed', 'error']);
/**
* Normalize a failure reason string for grouping.
* Strips timestamps, UUIDs, file paths, and numeric suffixes.
*/
function normalizeFailureReason(reason) {
if (!reason || typeof reason !== 'string') {
return 'unknown';
}
return reason
.trim()
.toLowerCase()
// Strip ISO timestamps (note: already lowercased, so t/z not T/Z)
.replace(/\d{4}-\d{2}-\d{2}[t ]\d{2}:\d{2}:\d{2}[.\dz]*/g, '<timestamp>')
// Strip UUIDs (already lowercased)
.replace(/[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}/g, '<uuid>')
// Strip file paths
.replace(/\/[\w./-]+/g, '<path>')
// Collapse whitespace
.replace(/\s+/g, ' ')
.trim();
}
/**
* Group skill runs by skill ID and normalized failure reason.
*
* @param {Array} skillRuns - Array of skill run objects
* @returns {Map<string, { skillId: string, normalizedReason: string, runs: Array }>}
*/
function groupFailures(skillRuns) {
const groups = new Map();
for (const run of skillRuns) {
const outcome = String(run.outcome || '').toLowerCase();
if (!FAILURE_OUTCOMES.has(outcome)) {
continue;
}
const normalizedReason = normalizeFailureReason(run.failureReason);
const key = `${run.skillId}::${normalizedReason}`;
if (!groups.has(key)) {
groups.set(key, {
skillId: run.skillId,
normalizedReason,
runs: [],
});
}
groups.get(key).runs.push(run);
}
return groups;
}
/**
* Detect recurring failure patterns from skill runs.
*
* @param {Array} skillRuns - Array of skill run objects (newest first)
* @param {Object} [options]
* @param {number} [options.threshold=3] - Minimum failure count to trigger pattern detection
* @returns {Array<Object>} Array of detected patterns sorted by count descending
*/
function detectPatterns(skillRuns, options = {}) {
const threshold = options.threshold ?? DEFAULT_FAILURE_THRESHOLD;
const groups = groupFailures(skillRuns);
const patterns = [];
for (const [, group] of groups) {
if (group.runs.length < threshold) {
continue;
}
const sortedRuns = [...group.runs].sort(
(a, b) => (b.createdAt || '').localeCompare(a.createdAt || '')
);
const firstSeen = sortedRuns[sortedRuns.length - 1].createdAt || null;
const lastSeen = sortedRuns[0].createdAt || null;
const sessionIds = [...new Set(sortedRuns.map(r => r.sessionId).filter(Boolean))];
const versions = [...new Set(sortedRuns.map(r => r.skillVersion).filter(Boolean))];
// Collect unique raw failure reasons for this normalized group
const rawReasons = [...new Set(sortedRuns.map(r => r.failureReason).filter(Boolean))];
patterns.push({
skillId: group.skillId,
normalizedReason: group.normalizedReason,
count: group.runs.length,
firstSeen,
lastSeen,
sessionIds,
versions,
rawReasons,
runIds: sortedRuns.map(r => r.id),
});
}
// Sort by count descending, then by lastSeen descending
return patterns.sort((a, b) => {
if (b.count !== a.count) return b.count - a.count;
return (b.lastSeen || '').localeCompare(a.lastSeen || '');
});
}
/**
* Generate an inspection report from detected patterns.
*
* @param {Array} patterns - Output from detectPatterns()
* @param {Object} [options]
* @param {string} [options.generatedAt] - ISO timestamp for the report
* @returns {Object} Inspection report
*/
function generateReport(patterns, options = {}) {
const generatedAt = options.generatedAt || new Date().toISOString();
if (patterns.length === 0) {
return {
generatedAt,
status: 'clean',
patternCount: 0,
patterns: [],
summary: 'No recurring failure patterns detected.',
};
}
const totalFailures = patterns.reduce((sum, p) => sum + p.count, 0);
const affectedSkills = [...new Set(patterns.map(p => p.skillId))];
return {
generatedAt,
status: 'attention_needed',
patternCount: patterns.length,
totalFailures,
affectedSkills,
patterns: patterns.map(p => ({
skillId: p.skillId,
normalizedReason: p.normalizedReason,
count: p.count,
firstSeen: p.firstSeen,
lastSeen: p.lastSeen,
sessionIds: p.sessionIds,
versions: p.versions,
rawReasons: p.rawReasons.slice(0, 5),
suggestedAction: suggestAction(p),
})),
summary: `Found ${patterns.length} recurring failure pattern(s) across ${affectedSkills.length} skill(s) (${totalFailures} total failures).`,
};
}
/**
* Suggest a remediation action based on pattern characteristics.
*/
function suggestAction(pattern) {
const reason = pattern.normalizedReason;
if (reason.includes('timeout')) {
return 'Increase timeout or optimize skill execution time.';
}
if (reason.includes('permission') || reason.includes('denied') || reason.includes('auth')) {
return 'Check tool permissions and authentication configuration.';
}
if (reason.includes('not found') || reason.includes('missing')) {
return 'Verify required files/dependencies exist before skill execution.';
}
if (reason.includes('parse') || reason.includes('syntax') || reason.includes('json')) {
return 'Review input/output format expectations and add validation.';
}
if (pattern.versions.length > 1) {
return 'Failure spans multiple versions. Consider rollback to last stable version.';
}
return 'Investigate root cause and consider adding error handling.';
}
/**
* Run full inspection pipeline: query skill runs, detect patterns, generate report.
*
* @param {Object} store - State store instance exposing getStatus()
* @param {Object} [options]
* @param {number} [options.threshold] - Minimum failure count
* @param {number} [options.windowSize] - Number of recent skill runs to analyze
* @returns {Object} Inspection report
*/
function inspect(store, options = {}) {
const windowSize = options.windowSize ?? DEFAULT_WINDOW_SIZE;
const threshold = options.threshold ?? DEFAULT_FAILURE_THRESHOLD;
const status = store.getStatus({ recentSkillRunLimit: windowSize });
const skillRuns = status.skillRuns.recent || [];
const patterns = detectPatterns(skillRuns, { threshold });
return generateReport(patterns, { generatedAt: status.generatedAt });
}
module.exports = {
DEFAULT_FAILURE_THRESHOLD,
DEFAULT_WINDOW_SIZE,
detectPatterns,
generateReport,
groupFailures,
inspect,
normalizeFailureReason,
suggestAction,
};
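To see why grouping on normalized reasons works, this standalone sketch duplicates `normalizeFailureReason()` from the module above and shows a raw message collapsing to a stable key that is free of timestamps and paths:

```javascript
// Standalone copy of the normalization above: timestamps, UUIDs, and file
// paths are replaced with placeholders so differing raw messages group together.
function normalizeFailureReason(reason) {
  if (!reason || typeof reason !== 'string') {
    return 'unknown';
  }
  return reason
    .trim()
    .toLowerCase()
    // Strip ISO timestamps (already lowercased, so t/z not T/Z)
    .replace(/\d{4}-\d{2}-\d{2}[t ]\d{2}:\d{2}:\d{2}[.\dz]*/g, '<timestamp>')
    // Strip UUIDs (already lowercased)
    .replace(/[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}/g, '<uuid>')
    // Strip file paths
    .replace(/\/[\w./-]+/g, '<path>')
    // Collapse whitespace
    .replace(/\s+/g, ' ')
    .trim();
}

console.log(normalizeFailureReason('Timeout at 2026-03-20T07:30:52Z reading /tmp/a.json'));
```

Two runs that fail at different times on different files therefore share the key `timeout at <timestamp> reading <path>` and count toward the same pattern.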


@@ -10,6 +10,8 @@ const COMPONENT_FAMILY_PREFIXES = {
   language: 'lang:',
   framework: 'framework:',
   capability: 'capability:',
+  agent: 'agent:',
+  skill: 'skill:',
 };
 const LEGACY_COMPAT_BASE_MODULE_IDS_BY_TARGET = Object.freeze({
   claude: [


@@ -0,0 +1,89 @@
'use strict';
const fs = require('fs');
const path = require('path');
const os = require('os');
/**
* Resolve the ECC source root directory.
*
* Tries, in order:
* 1. CLAUDE_PLUGIN_ROOT env var (set by Claude Code for hooks, or by user)
* 2. Standard install location (~/.claude/) — when scripts exist there
* 3. Plugin cache auto-detection — scans ~/.claude/plugins/cache/everything-claude-code/
* 4. Fallback to ~/.claude/ (original behaviour)
*
* @param {object} [options]
* @param {string} [options.homeDir] Override home directory (for testing)
* @param {string} [options.envRoot] Override CLAUDE_PLUGIN_ROOT (for testing)
* @param {string} [options.probe] Relative path used to verify a candidate root
* contains ECC scripts. Default: 'scripts/lib/utils.js'
* @returns {string} Resolved ECC root path
*/
function resolveEccRoot(options = {}) {
const envRoot = options.envRoot !== undefined
? options.envRoot
: (process.env.CLAUDE_PLUGIN_ROOT || '');
if (envRoot && envRoot.trim()) {
return envRoot.trim();
}
const homeDir = options.homeDir || os.homedir();
const claudeDir = path.join(homeDir, '.claude');
const probe = options.probe || path.join('scripts', 'lib', 'utils.js');
// Standard install — files are copied directly into ~/.claude/
if (fs.existsSync(path.join(claudeDir, probe))) {
return claudeDir;
}
// Plugin cache — Claude Code stores marketplace plugins under
// ~/.claude/plugins/cache/<plugin-name>/<org>/<version>/
try {
const cacheBase = path.join(claudeDir, 'plugins', 'cache', 'everything-claude-code');
const orgDirs = fs.readdirSync(cacheBase, { withFileTypes: true });
for (const orgEntry of orgDirs) {
if (!orgEntry.isDirectory()) continue;
const orgPath = path.join(cacheBase, orgEntry.name);
let versionDirs;
try {
versionDirs = fs.readdirSync(orgPath, { withFileTypes: true });
} catch {
continue;
}
for (const verEntry of versionDirs) {
if (!verEntry.isDirectory()) continue;
const candidate = path.join(orgPath, verEntry.name);
if (fs.existsSync(path.join(candidate, probe))) {
return candidate;
}
}
}
} catch {
// Plugin cache doesn't exist or isn't readable — continue to fallback
}
return claudeDir;
}
/**
* Compact inline version for embedding in command .md code blocks.
*
* This is the minified form of resolveEccRoot() suitable for use in
* node -e "..." scripts where require() is not available before the
* root is known.
*
* Usage in commands:
* const _r = <paste INLINE_RESOLVE>;
* const sm = require(_r + '/scripts/lib/session-manager');
*/
const INLINE_RESOLVE = `(()=>{var e=process.env.CLAUDE_PLUGIN_ROOT;if(e&&e.trim())return e.trim();var p=require('path'),f=require('fs'),h=require('os').homedir(),d=p.join(h,'.claude'),q=p.join('scripts','lib','utils.js');if(f.existsSync(p.join(d,q)))return d;try{var b=p.join(d,'plugins','cache','everything-claude-code');for(var o of f.readdirSync(b))for(var v of f.readdirSync(p.join(b,o))){var c=p.join(b,o,v);if(f.existsSync(p.join(c,q)))return c}}catch(x){}return d})()`;
module.exports = {
resolveEccRoot,
INLINE_RESOLVE,
};
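The precedence described in the `resolveEccRoot()` docstring can be sketched in isolation; `pickRoot` is a hypothetical helper name used only for this illustration, not part of the module:

```javascript
// Hypothetical sketch of step 1 of resolveEccRoot(): a non-empty
// CLAUDE_PLUGIN_ROOT (trimmed) wins before any filesystem probing, while an
// empty or whitespace-only value falls through to the fallback root.
function pickRoot(envRoot, fallbackRoot) {
  if (envRoot && envRoot.trim()) return envRoot.trim();
  return fallbackRoot;
}

console.log(pickRoot('  /opt/ecc  ', '/home/user/.claude')); // trimmed env value wins
console.log(pickRoot('   ', '/home/user/.claude'));          // falls back
```

Only when the env var yields nothing does the real function go on to probe `~/.claude/` and the plugin cache.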


@@ -464,6 +464,24 @@ function countInFile(filePath, pattern) {
   return matches ? matches.length : 0;
 }
+
+/**
+ * Strip all ANSI escape sequences from a string.
+ *
+ * Handles:
+ * - CSI sequences: \x1b[ … <letter> (colors, cursor movement, erase, etc.)
+ * - OSC sequences: \x1b] … BEL/ST (window titles, hyperlinks)
+ * - Charset selection: \x1b(B
+ * - Bare ESC + single letter: \x1b <letter> (e.g. \x1bM for reverse index)
+ *
+ * @param {string} str - Input string possibly containing ANSI codes
+ * @returns {string} Cleaned string with all escape sequences removed
+ */
+function stripAnsi(str) {
+  if (typeof str !== 'string') return '';
+  // eslint-disable-next-line no-control-regex
+  return str.replace(/\x1b(?:\[[0-9;?]*[A-Za-z]|\][^\x07\x1b]*(?:\x07|\x1b\\)|\([A-Z]|[A-Z])/g, '');
+}
+
 /**
  * Search for pattern in file and return matching lines with line numbers
  */
@@ -530,6 +548,9 @@ module.exports = {
   countInFile,
   grepFile,
+
+  // String sanitisation
+  stripAnsi,
   // Hook I/O
   readStdinJson,
   log,


@@ -3,7 +3,7 @@ set -euo pipefail
 # Sync Everything Claude Code (ECC) assets into a local Codex CLI setup.
 # - Backs up ~/.codex config and AGENTS.md
-# - Replaces AGENTS.md with ECC AGENTS.md
+# - Merges ECC AGENTS.md into existing AGENTS.md (marker-based, preserves user content)
 # - Syncs Codex-ready skills from .agents/skills
 # - Generates prompt files from commands/*.md
 # - Generates Codex QA wrappers and optional language rule-pack prompts
@@ -143,16 +143,68 @@ if [[ -f "$AGENTS_FILE" ]]; then
   run_or_echo "cp \"$AGENTS_FILE\" \"$BACKUP_DIR/AGENTS.md\""
 fi
-log "Replacing global AGENTS.md with ECC AGENTS + Codex supplement"
+ECC_BEGIN_MARKER="<!-- BEGIN ECC -->"
+ECC_END_MARKER="<!-- END ECC -->"
+
+compose_ecc_block() {
+  printf '%s\n' "$ECC_BEGIN_MARKER"
+  cat "$AGENTS_ROOT_SRC"
+  printf '\n\n---\n\n'
+  printf '# Codex Supplement (From ECC .codex/AGENTS.md)\n\n'
+  cat "$AGENTS_CODEX_SUPP_SRC"
+  printf '\n%s\n' "$ECC_END_MARKER"
+}
+
+log "Merging ECC AGENTS into $AGENTS_FILE (preserving user content)"
 if [[ "$MODE" == "dry-run" ]]; then
-  printf '[dry-run] compose %s from %s + %s\n' "$AGENTS_FILE" "$AGENTS_ROOT_SRC" "$AGENTS_CODEX_SUPP_SRC"
+  printf '[dry-run] merge ECC block into %s from %s + %s\n' "$AGENTS_FILE" "$AGENTS_ROOT_SRC" "$AGENTS_CODEX_SUPP_SRC"
 else
-  {
-    cat "$AGENTS_ROOT_SRC"
-    printf '\n\n---\n\n'
-    printf '# Codex Supplement (From ECC .codex/AGENTS.md)\n\n'
-    cat "$AGENTS_CODEX_SUPP_SRC"
-  } > "$AGENTS_FILE"
+  replace_ecc_section() {
+    # Replace the ECC block between markers in $AGENTS_FILE with fresh content.
+    # Uses awk to correctly handle all positions including line 1.
+    local tmp
+    tmp="$(mktemp)"
+    local ecc_tmp
+    ecc_tmp="$(mktemp)"
+    compose_ecc_block > "$ecc_tmp"
+    awk -v begin="$ECC_BEGIN_MARKER" -v end="$ECC_END_MARKER" -v ecc="$ecc_tmp" '
+      { gsub(/\r$/, "") }
+      $0 == begin { skip = 1; while ((getline line < ecc) > 0) print line; close(ecc); next }
+      $0 == end { skip = 0; next }
+      !skip { print }
+    ' "$AGENTS_FILE" > "$tmp"
+    # Write through the path (preserves symlinks) instead of mv
+    cat "$tmp" > "$AGENTS_FILE"
+    rm -f "$tmp" "$ecc_tmp"
+  }
+
+  if [[ ! -f "$AGENTS_FILE" ]]; then
+    # No existing file — create fresh with markers
+    compose_ecc_block > "$AGENTS_FILE"
+  elif awk -v b="$ECC_BEGIN_MARKER" -v e="$ECC_END_MARKER" '
+      { gsub(/\r$/, "") }
+      $0 == b { found_b = NR } $0 == e { found_e = NR }
+      END { exit !(found_b && found_e && found_b < found_e) }
+    ' "$AGENTS_FILE"; then
+    # Existing file with matched, correctly ordered ECC markers — replace only the ECC section
+    replace_ecc_section
+  elif grep -qF "$ECC_BEGIN_MARKER" "$AGENTS_FILE"; then
+    # BEGIN marker exists but END marker is missing (corrupted). Warn and
+    # replace the file entirely to restore a valid state. Backup was saved.
+    log "WARNING: found BEGIN marker but no END marker — replacing file (backup saved)"
+    compose_ecc_block > "$AGENTS_FILE"
+  else
+    # Existing file without markers — append ECC block, preserve user content.
+    # Note: legacy ECC-only files (from old '>' overwrite) will get a second copy
+    # on this first run. This is intentional — the alternative (heading-match
+    # heuristic) risks false-positive overwrites of user-authored files. The next
+    # run deduplicates via markers, and a timestamped backup was saved above.
+    log "No ECC markers found — appending managed block (backup saved)"
+    {
+      printf '\n\n'
+      compose_ecc_block
+    } >> "$AGENTS_FILE"
+  fi
 fi
 log "Syncing ECC Codex skills"

View File

@@ -0,0 +1,435 @@
---
name: flutter-dart-code-review
description: Library-agnostic Flutter/Dart code review checklist covering widget best practices, state management patterns (BLoC, Riverpod, Provider, GetX, MobX, Signals), Dart idioms, performance, accessibility, security, and clean architecture.
origin: ECC
---
# Flutter/Dart Code Review Best Practices
Comprehensive, library-agnostic checklist for reviewing Flutter/Dart applications. These principles apply regardless of which state management solution, routing library, or DI framework is used.
---
## 1. General Project Health
- [ ] Project follows consistent folder structure (feature-first or layer-first)
- [ ] Proper separation of concerns: UI, business logic, data layers
- [ ] No business logic in widgets; widgets are purely presentational
- [ ] `pubspec.yaml` is clean — no unused dependencies, versions pinned appropriately
- [ ] `analysis_options.yaml` includes a strict lint set with strict analyzer settings enabled
- [ ] No `print()` statements in production code — use `dart:developer` `log()` or a logging package
- [ ] Generated files (`.g.dart`, `.freezed.dart`, `.gr.dart`) are up-to-date or in `.gitignore`
- [ ] Platform-specific code isolated behind abstractions
---
## 2. Dart Language Pitfalls
- [ ] **Implicit dynamic**: Missing type annotations leading to `dynamic` — enable `strict-casts`, `strict-inference`, `strict-raw-types`
- [ ] **Null safety misuse**: Excessive `!` (bang operator) instead of proper null checks or Dart 3 pattern matching (`if (value case var v?)`)
- [ ] **Type promotion failures**: Using `this.field` where local variable promotion would work
- [ ] **Catching too broadly**: `catch (e)` without `on` clause; always specify exception types
- [ ] **Catching `Error`**: `Error` subtypes indicate bugs and should not be caught
- [ ] **Unused `async`**: Functions marked `async` that never `await` — unnecessary overhead
- [ ] **`late` overuse**: `late` used where nullable or constructor initialization would be safer; defers errors to runtime
- [ ] **String concatenation in loops**: Use `StringBuffer` instead of `+` for iterative string building
- [ ] **Mutable state in `const` contexts**: Fields in `const` constructor classes should not be mutable
- [ ] **Ignoring `Future` return values**: Use `await` or explicitly call `unawaited()` to signal intent
- [ ] **`var` where `final` works**: Prefer `final` for locals and `const` for compile-time constants
- [ ] **Relative imports**: Use `package:` imports for consistency
- [ ] **Mutable collections exposed**: Public APIs should return unmodifiable views, not raw `List`/`Map`
- [ ] **Missing Dart 3 pattern matching**: Prefer switch expressions and `if-case` over verbose `is` checks and manual casting
- [ ] **Throwaway classes for multiple returns**: Use Dart 3 records `(String, int)` instead of single-use DTOs
- [ ] **`print()` in production code**: Use `dart:developer` `log()` or the project's logging package; `print()` has no log levels and cannot be filtered
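A few of these pitfalls side by side, as a minimal Dart sketch (names are illustrative):

```dart
// Iterative string building: StringBuffer avoids quadratic `+` concatenation.
String joinNames(List<String> names) {
  final buffer = StringBuffer();
  for (final name in names) {
    buffer.write('$name, ');
  }
  return buffer.toString();
}

// A Dart 3 record replaces a single-use DTO for multiple return values.
(String, int) parseEntry(String raw) {
  final parts = raw.split(':');
  return (parts[0], int.parse(parts[1]));
}

// An if-case null-check pattern replaces the bang operator.
void greet(String? name) {
  if (name case final n?) {
    print('Hello, $n'); // fine in a snippet; use a logger in production
  }
}
```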
---
## 3. Widget Best Practices
### Widget decomposition:
- [ ] No single widget with a `build()` method exceeding ~80-100 lines
- [ ] Widgets split by encapsulation AND by how they change (rebuild boundaries)
- [ ] Private `_build*()` helper methods that return widgets are extracted to separate widget classes (enables element reuse, const propagation, and framework optimizations)
- [ ] Stateless widgets preferred over Stateful where no mutable local state is needed
- [ ] Extracted widgets are in separate files when reusable
### Const usage:
- [ ] `const` constructors used wherever possible — prevents unnecessary rebuilds
- [ ] `const` literals for collections that don't change (`const []`, `const {}`)
- [ ] Constructor is declared `const` when all fields are final
### Key usage:
- [ ] `ValueKey` used in lists/grids to preserve state across reorders
- [ ] `GlobalKey` used sparingly — only when accessing state across the tree is truly needed
- [ ] `UniqueKey` avoided in `build()` — a key created fresh on each build discards element state and forces the subtree to be recreated on every rebuild
- [ ] `ObjectKey` used when identity is based on a data object rather than a single value
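The list-key rule as a minimal sketch (`TodoTile` and `todos` are hypothetical):

```dart
// ValueKey ties element state (text field contents, animations) to the
// item's identity, so state survives reorders; the list index would not.
Widget build(BuildContext context) {
  return ListView.builder(
    itemCount: todos.length,
    itemBuilder: (context, index) {
      final todo = todos[index];
      return TodoTile(
        key: ValueKey(todo.id), // stable identity, not the index
        todo: todo,
      );
    },
  );
}
```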
### Theming & design system:
- [ ] Colors come from `Theme.of(context).colorScheme` — no hardcoded `Colors.red` or hex values
- [ ] Text styles come from `Theme.of(context).textTheme` — no inline `TextStyle` with raw font sizes
- [ ] Dark mode compatibility verified — no assumptions about light background
- [ ] Spacing and sizing use consistent design tokens or constants, not magic numbers
### Build method complexity:
- [ ] No network calls, file I/O, or heavy computation in `build()`
- [ ] No `Future.then()` or `async` work in `build()`
- [ ] No subscription creation (`.listen()`) in `build()`
- [ ] `setState()` localized to smallest possible subtree
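The helper-method extraction rule in practice, as a minimal sketch:

```dart
// BAD — a private build helper: the subtree shares the parent's element,
// can never be const, and rebuilds whenever the parent rebuilds.
Widget _buildHeader() =>
    Row(children: const [Icon(Icons.person), Text('Profile')]);

// GOOD — a separate widget class: gets its own element, can be const,
// and the framework can skip rebuilding it entirely.
class ProfileHeader extends StatelessWidget {
  const ProfileHeader({super.key});

  @override
  Widget build(BuildContext context) {
    return const Row(children: [Icon(Icons.person), Text('Profile')]);
  }
}
```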
---
## 4. State Management (Library-Agnostic)
These principles apply to all Flutter state management solutions (BLoC, Riverpod, Provider, GetX, MobX, Signals, ValueNotifier, etc.).
### Architecture:
- [ ] Business logic lives outside the widget layer — in a state management component (BLoC, Notifier, Controller, Store, ViewModel, etc.)
- [ ] State managers receive dependencies via injection, not by constructing them internally
- [ ] A service or repository layer abstracts data sources — widgets and state managers should not call APIs or databases directly
- [ ] State managers have a single responsibility — no "god" managers handling unrelated concerns
- [ ] Cross-component dependencies follow the solution's conventions:
- In **Riverpod**: providers depending on providers via `ref.watch` is expected — flag only circular or overly tangled chains
- In **BLoC**: blocs should not directly depend on other blocs — prefer shared repositories or presentation-layer coordination
- In other solutions: follow the documented conventions for inter-component communication
### Immutability & value equality (for immutable-state solutions: BLoC, Riverpod, Redux):
- [ ] State objects are immutable — new instances created via `copyWith()` or constructors, never mutated in-place
- [ ] State classes implement `==` and `hashCode` properly (all fields included in comparison)
- [ ] Mechanism is consistent across the project — manual override, `Equatable`, `freezed`, Dart records, or other
- [ ] Collections inside state objects are not exposed as raw mutable `List`/`Map`
### Reactivity discipline (for reactive-mutation solutions: MobX, GetX, Signals):
- [ ] State is only mutated through the solution's reactive API (`@action` in MobX, `.value` on signals, `.obs` in GetX) — direct field mutation bypasses change tracking
- [ ] Derived values use the solution's computed mechanism rather than being stored redundantly
- [ ] Reactions and disposers are properly cleaned up (`ReactionDisposer` in MobX, effect cleanup in Signals)
### State shape design:
- [ ] Mutually exclusive states use sealed types, union variants, or the solution's built-in async state type (e.g. Riverpod's `AsyncValue`) — not boolean flags (`isLoading`, `isError`, `hasData`)
- [ ] Every async operation models loading, success, and error as distinct states
- [ ] All state variants are handled exhaustively in UI — no silently ignored cases
- [ ] Error states carry error information for display; loading states don't carry stale data
- [ ] Nullable data is not used as a loading indicator — states are explicit
```dart
// BAD — boolean flag soup allows impossible states
class UserState {
  bool isLoading = false;
  bool hasError = false; // isLoading && hasError is representable!
  User? user;
}
// GOOD (immutable approach) — sealed types make impossible states unrepresentable
sealed class UserState {
  const UserState(); // const super-constructor so subclasses can be const
}
class UserInitial extends UserState {
  const UserInitial();
}
class UserLoading extends UserState {
  const UserLoading();
}
class UserLoaded extends UserState {
  final User user;
  const UserLoaded(this.user);
}
class UserError extends UserState {
  final String message;
  const UserError(this.message);
}
// GOOD (reactive approach) — observable enum + data, mutations via reactivity API
// enum UserStatus { initial, loading, loaded, error }
// Use your solution's observable/signal to wrap status and data separately
```
### Rebuild optimization:
- [ ] State consumer widgets (Builder, Consumer, Observer, Obx, Watch, etc.) scoped as narrow as possible
- [ ] Selectors used to rebuild only when specific fields change — not on every state emission
- [ ] `const` widgets used to stop rebuild propagation through the tree
- [ ] Computed/derived state is calculated reactively, not stored redundantly
### Subscriptions & disposal:
- [ ] All manual subscriptions (`.listen()`) are cancelled in `dispose()` / `close()`
- [ ] Stream controllers are closed when no longer needed
- [ ] Timers are cancelled in disposal lifecycle
- [ ] Framework-managed lifecycle is preferred over manual subscription (declarative builders over `.listen()`)
- [ ] `mounted` check before `setState` in async callbacks
- [ ] `BuildContext` not used after `await` without checking `context.mounted` (Flutter 3.7+) — stale context causes crashes
- [ ] No navigation, dialogs, or scaffold messages after async gaps without verifying the widget is still mounted
- [ ] `BuildContext` never stored in singletons, state managers, or static fields
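The disposal and `mounted` rules combine into a pattern like this (a sketch; `SearchPage`, `Result`, and `widget.repository` are hypothetical):

```dart
class _SearchPageState extends State<SearchPage> {
  StreamSubscription<Result>? _subscription;
  Result? _latest;

  @override
  void initState() {
    super.initState();
    _subscription = widget.repository.results.listen((result) {
      if (!mounted) return; // the widget may be gone when the event arrives
      setState(() => _latest = result);
    });
  }

  Future<void> _save() async {
    await widget.repository.save(_latest);
    if (!context.mounted) return; // re-check after the async gap
    ScaffoldMessenger.of(context).showSnackBar(
      const SnackBar(content: Text('Saved')),
    );
  }

  @override
  void dispose() {
    _subscription?.cancel(); // always cancel manual subscriptions
    super.dispose();
  }
}
```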
### Local vs global state:
- [ ] Ephemeral UI state (checkbox, slider, animation) uses local state (`setState`, `ValueNotifier`)
- [ ] Shared state is lifted only as high as needed — not over-globalized
- [ ] Feature-scoped state is properly disposed when the feature is no longer active
---
## 5. Performance
### Unnecessary rebuilds:
- [ ] `setState()` not called at root widget level — localize state changes
- [ ] `const` widgets used to stop rebuild propagation
- [ ] `RepaintBoundary` used around complex subtrees that repaint independently
- [ ] `AnimatedBuilder` child parameter used for subtrees independent of animation
### Expensive operations in build():
- [ ] No sorting, filtering, or mapping large collections in `build()` — compute in state management layer
- [ ] No regex compilation in `build()`
- [ ] `MediaQuery.of(context)` usage is specific (e.g., `MediaQuery.sizeOf(context)`)
### Image optimization:
- [ ] Network images use caching (any caching solution appropriate for the project)
- [ ] Appropriate image resolution for target device (no loading 4K images for thumbnails)
- [ ] `Image.asset` with `cacheWidth`/`cacheHeight` to decode at display size
- [ ] Placeholder and error widgets provided for network images
### Lazy loading:
- [ ] `ListView.builder` / `GridView.builder` used instead of `ListView(children: [...])` for large or dynamic lists (concrete constructors are fine for small, static lists)
- [ ] Pagination implemented for large data sets
- [ ] Deferred loading (`deferred as`) used for heavy libraries in web builds
### Other:
- [ ] `Opacity` widget avoided in animations — use `AnimatedOpacity` or `FadeTransition`
- [ ] Clipping avoided in animations — pre-clip images
- [ ] `operator ==` not overridden on widgets — use `const` constructors instead
- [ ] Intrinsic dimension widgets (`IntrinsicHeight`, `IntrinsicWidth`) used sparingly (extra layout pass)
---
## 6. Testing
### Test types and expectations:
- [ ] **Unit tests**: Cover all business logic (state managers, repositories, utility functions)
- [ ] **Widget tests**: Cover individual widget behavior, interactions, and visual output
- [ ] **Integration tests**: Cover critical user flows end-to-end
- [ ] **Golden tests**: Pixel-perfect comparisons for design-critical UI components
### Coverage targets:
- [ ] Aim for 80%+ line coverage on business logic
- [ ] All state transitions have corresponding tests (loading → success, loading → error, retry, etc.)
- [ ] Edge cases tested: empty states, error states, loading states, boundary values
### Test isolation:
- [ ] External dependencies (API clients, databases, services) are mocked or faked
- [ ] Each test file tests exactly one class/unit
- [ ] Tests verify behavior, not implementation details
- [ ] Stubs define only the behavior needed for each test (minimal stubbing)
- [ ] No shared mutable state between test cases
### Widget test quality:
- [ ] `pumpWidget` and `pump` used correctly for async operations
- [ ] `find.byType`, `find.text`, `find.byKey` used appropriately
- [ ] No flaky tests depending on timing — use `pumpAndSettle` or explicit `pump(Duration)`
- [ ] Tests run in CI and failures block merges
---
## 7. Accessibility
### Semantic widgets:
- [ ] `Semantics` widget used to provide screen reader labels where automatic labels are insufficient
- [ ] `ExcludeSemantics` used for purely decorative elements
- [ ] `MergeSemantics` used to combine related widgets into a single accessible element
- [ ] Images have `semanticLabel` property set
### Screen reader support:
- [ ] All interactive elements are focusable and have meaningful descriptions
- [ ] Focus order is logical (follows visual reading order)
### Visual accessibility:
- [ ] Contrast ratio >= 4.5:1 for text against background
- [ ] Tappable targets are at least 48x48 logical pixels
- [ ] Color is not the sole indicator of state (use icons/text alongside)
- [ ] Text scales with system font size settings
### Interaction accessibility:
- [ ] No no-op `onPressed` callbacks — every button does something or is disabled
- [ ] Error fields suggest corrections
- [ ] Context does not change unexpectedly while user is inputting data
---
## 8. Platform-Specific Concerns
### iOS/Android differences:
- [ ] Platform-adaptive widgets used where appropriate
- [ ] Back navigation handled correctly (Android back button, iOS swipe-to-go-back)
- [ ] Status bar and safe area handled via `SafeArea` widget
- [ ] Platform-specific permissions declared in `AndroidManifest.xml` and `Info.plist`
### Responsive design:
- [ ] `LayoutBuilder` or `MediaQuery` used for responsive layouts
- [ ] Breakpoints defined consistently (phone, tablet, desktop)
- [ ] Text doesn't overflow on small screens — use `Flexible`, `Expanded`, `FittedBox`
- [ ] Landscape orientation tested or explicitly locked
- [ ] Web-specific: mouse/keyboard interactions supported, hover states present
---
## 9. Security
### Secure storage:
- [ ] Sensitive data (tokens, credentials) stored using platform-secure storage (Keychain on iOS, EncryptedSharedPreferences on Android)
- [ ] Never store secrets in plaintext storage
- [ ] Biometric authentication gating considered for sensitive operations
### API key handling:
- [ ] API keys NOT hardcoded in Dart source — use `--dart-define`, `.env` files excluded from VCS, or compile-time configuration
- [ ] Secrets not committed to git — check `.gitignore`
- [ ] Backend proxy used for truly secret keys (client should never hold server secrets)
### Input validation:
- [ ] All user input validated before sending to API
- [ ] Form validation uses proper validation patterns
- [ ] No raw SQL or string interpolation of user input
- [ ] Deep link URLs validated and sanitized before navigation
### Network security:
- [ ] HTTPS enforced for all API calls
- [ ] Certificate pinning considered for high-security apps
- [ ] Authentication tokens refreshed and expired properly
- [ ] No sensitive data logged or printed
---
## 10. Package/Dependency Review
### Evaluating pub.dev packages:
- [ ] Check **pub points score** (aim for 130+/160)
- [ ] Check **likes** and **popularity** as community signals
- [ ] Verify the publisher is **verified** on pub.dev
- [ ] Check last publish date — stale packages (>1 year) are a risk
- [ ] Review open issues and response time from maintainers
- [ ] Check license compatibility with your project
- [ ] Verify platform support covers your targets
### Version constraints:
- [ ] Use caret syntax (`^1.2.3`) for dependencies — allows compatible updates
- [ ] Pin exact versions only when absolutely necessary
- [ ] Run `flutter pub outdated` regularly to track stale dependencies
- [ ] No dependency overrides in production `pubspec.yaml` — only for temporary fixes with a comment/issue link
- [ ] Minimize transitive dependency count — each dependency is an attack surface
### Monorepo-specific (melos/workspace):
- [ ] Internal packages import only from public API — no `package:other/src/internal.dart` (breaks Dart package encapsulation)
- [ ] Internal package dependencies use workspace resolution, not hardcoded `path: ../../` relative strings
- [ ] All sub-packages share or inherit root `analysis_options.yaml`
---
## 11. Navigation and Routing
### General principles (apply to any routing solution):
- [ ] One routing approach used consistently — no mixing imperative `Navigator.push` with a declarative router
- [ ] Route arguments are typed — no `Map<String, dynamic>` or `Object?` casting
- [ ] Route paths defined as constants, enums, or generated — no magic strings scattered in code
- [ ] Auth guards/redirects centralized — not duplicated across individual screens
- [ ] Deep links configured for both Android and iOS
- [ ] Deep link URLs validated and sanitized before navigation
- [ ] Navigation state is testable — route changes can be verified in tests
- [ ] Back behavior is correct on all platforms
---
## 12. Error Handling
### Framework error handling:
- [ ] `FlutterError.onError` overridden to capture framework errors (build, layout, paint)
- [ ] `PlatformDispatcher.instance.onError` set for async errors not caught by Flutter
- [ ] `ErrorWidget.builder` customized for release mode (user-friendly instead of red screen)
- [ ] Global error capture wrapper around `runApp` (e.g., `runZonedGuarded`, Sentry/Crashlytics wrapper)
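One typical wiring of these hooks in `main()` (a sketch; `reportError` and `MyApp` stand in for your error reporter and root widget):

```dart
void main() {
  // Framework errors: build, layout, and paint failures.
  FlutterError.onError = (details) {
    FlutterError.presentError(details);
    reportError(details.exception, details.stack);
  };

  // Async errors the framework does not catch.
  PlatformDispatcher.instance.onError = (error, stack) {
    reportError(error, stack);
    return true; // mark as handled
  };

  // Friendly fallback in release instead of the red/grey error screen.
  ErrorWidget.builder = (details) => const Center(
        child: Text('Something went wrong', textDirection: TextDirection.ltr),
      );

  runApp(const MyApp());
}
```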
### Error reporting:
- [ ] Error reporting service integrated (Firebase Crashlytics, Sentry, or equivalent)
- [ ] Non-fatal errors reported with stack traces
- [ ] State management error observer wired to error reporting (e.g., BlocObserver, ProviderObserver, or equivalent for your solution)
- [ ] User-identifiable info (user ID) attached to error reports for debugging
### Graceful degradation:
- [ ] API errors result in user-friendly error UI, not crashes
- [ ] Retry mechanisms for transient network failures
- [ ] Offline state handled gracefully
- [ ] Error states in state management carry error info for display
- [ ] Raw exceptions (network, parsing) are mapped to user-friendly, localized messages before reaching the UI — never show raw exception strings to users
---
## 13. Internationalization (l10n)
### Setup:
- [ ] Localization solution configured (Flutter's built-in ARB/l10n, easy_localization, or equivalent)
- [ ] Supported locales declared in app configuration
### Content:
- [ ] All user-visible strings use the localization system — no hardcoded strings in widgets
- [ ] Template file includes descriptions/context for translators
- [ ] ICU message syntax used for plurals, genders, selects
- [ ] Placeholders defined with types
- [ ] No missing keys across locales
### Code review:
- [ ] Localization accessor used consistently throughout the project
- [ ] Date, time, number, and currency formatting is locale-aware
- [ ] Text directionality (RTL) supported if targeting Arabic, Hebrew, etc.
- [ ] No string concatenation for localized text — use parameterized messages
---
## 14. Dependency Injection
### Principles (apply to any DI approach):
- [ ] Classes depend on abstractions (interfaces), not concrete implementations at layer boundaries
- [ ] Dependencies provided externally via constructor, DI framework, or provider graph — not created internally
- [ ] Registration distinguishes lifetime: singleton vs factory vs lazy singleton
- [ ] Environment-specific bindings (dev/staging/prod) use configuration, not runtime `if` checks
- [ ] No circular dependencies in the DI graph
- [ ] Service locator calls (if used) are not scattered throughout business logic
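These principles in a minimal Dart sketch (`AuthRepository` and `User` are illustrative):

```dart
// Depend on an abstraction at the layer boundary...
abstract interface class AuthRepository {
  Future<User> signIn(String email, String password);
}

// ...and inject the concrete implementation from outside, rather than
// constructing `ApiAuthRepository()` (or calling a locator) internally.
class SignInController {
  SignInController(this._repository); // constructor injection
  final AuthRepository _repository;

  Future<User> signIn(String email, String password) =>
      _repository.signIn(email, password);
}
```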
---
## 15. Static Analysis
### Configuration:
- [ ] `analysis_options.yaml` present with strict settings enabled
- [ ] Strict analyzer settings: `strict-casts: true`, `strict-inference: true`, `strict-raw-types: true`
- [ ] A comprehensive lint rule set is included (very_good_analysis, flutter_lints, or custom strict rules)
- [ ] All sub-packages in monorepos inherit or share the root analysis options
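One way to express this configuration (assuming `very_good_analysis` as the base lint set; swap in `flutter_lints` or a custom set as the project prefers):

```yaml
include: package:very_good_analysis/analysis_options.yaml

analyzer:
  language:
    strict-casts: true
    strict-inference: true
    strict-raw-types: true

linter:
  rules:
    always_use_package_imports: true
```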
### Enforcement:
- [ ] No unresolved analyzer warnings in committed code
- [ ] Lint suppressions (`// ignore:`) are justified with comments explaining why
- [ ] `flutter analyze` runs in CI and failures block merges
### Key rules to verify regardless of lint package:
- [ ] `prefer_const_constructors` — performance in widget trees
- [ ] `avoid_print` — use proper logging
- [ ] `unawaited_futures` — prevent fire-and-forget async bugs
- [ ] `prefer_final_locals` — immutability at variable level
- [ ] `always_declare_return_types` — explicit contracts
- [ ] `avoid_catches_without_on_clauses` — specific error handling
- [ ] `always_use_package_imports` — consistent import style
---
## State Management Quick Reference
The table below maps universal principles to their implementation in popular solutions. Use this to adapt review rules to whichever solution the project uses.
| Principle | BLoC/Cubit | Riverpod | Provider | GetX | MobX | Signals | Built-in |
|-----------|-----------|----------|----------|------|------|---------|----------|
| State container | `Bloc`/`Cubit` | `Notifier`/`AsyncNotifier` | `ChangeNotifier` | `GetxController` | `Store` | `signal()` | `StatefulWidget` |
| UI consumer | `BlocBuilder` | `ConsumerWidget` | `Consumer` | `Obx`/`GetBuilder` | `Observer` | `Watch` | `setState` |
| Selector | `BlocSelector`/`buildWhen` | `ref.watch(p.select(...))` | `Selector` | N/A | computed | `computed()` | N/A |
| Side effects | `BlocListener` | `ref.listen` | `Consumer` callback | `ever()`/`once()` | `reaction` | `effect()` | callbacks |
| Disposal | auto via `BlocProvider` | `.autoDispose` | auto via `Provider` | `onClose()` | `ReactionDisposer` | manual | `dispose()` |
| Testing | `blocTest()` | `ProviderContainer` | `ChangeNotifier` directly | `Get.put` in test | store directly | signal directly | widget test |
---
## Sources
- [Effective Dart: Style](https://dart.dev/effective-dart/style)
- [Effective Dart: Usage](https://dart.dev/effective-dart/usage)
- [Effective Dart: Design](https://dart.dev/effective-dart/design)
- [Flutter Performance Best Practices](https://docs.flutter.dev/perf/best-practices)
- [Flutter Testing Overview](https://docs.flutter.dev/testing/overview)
- [Flutter Accessibility](https://docs.flutter.dev/ui/accessibility-and-internationalization/accessibility)
- [Flutter Internationalization](https://docs.flutter.dev/ui/accessibility-and-internationalization/internationalization)
- [Flutter Navigation and Routing](https://docs.flutter.dev/ui/navigation)
- [Flutter Error Handling](https://docs.flutter.dev/testing/errors)
- [Flutter State Management Options](https://docs.flutter.dev/data-and-backend/state-mgmt/options)

View File

@@ -0,0 +1,100 @@
---
name: nuxt4-patterns
description: Nuxt 4 app patterns for hydration safety, performance, route rules, lazy loading, and SSR-safe data fetching with useFetch and useAsyncData.
origin: ECC
---
# Nuxt 4 Patterns
Use when building or debugging Nuxt 4 apps with SSR, hybrid rendering, route rules, or page-level data fetching.
## When to Activate
- Hydration mismatches between server HTML and client state
- Route-level rendering decisions such as prerender, SWR, ISR, or client-only sections
- Performance work around lazy loading, lazy hydration, or payload size
- Page or component data fetching with `useFetch`, `useAsyncData`, or `$fetch`
- Nuxt routing issues tied to route params, middleware, or SSR/client differences
## Hydration Safety
- Keep the first render deterministic. Do not put `Date.now()`, `Math.random()`, browser-only APIs, or storage reads directly into SSR-rendered template state.
- Move browser-only logic behind `onMounted()`, `import.meta.client`, `ClientOnly`, or a `.client.vue` component when the server cannot produce the same markup.
- Use Nuxt's `useRoute()` composable, not the one from `vue-router`.
- Do not use `route.fullPath` to drive SSR-rendered markup. URL fragments are client-only, which can create hydration mismatches.
- Treat `ssr: false` as an escape hatch for truly browser-only areas, not a default fix for mismatches.
## Data Fetching
- Prefer `await useFetch()` for SSR-safe API reads in pages and components. It forwards server-fetched data into the Nuxt payload and avoids a second fetch on hydration.
- Use `useAsyncData()` when the fetcher is not a simple `$fetch()` call, when you need a custom key, or when you are composing multiple async sources.
- Give `useAsyncData()` a stable key for cache reuse and predictable refresh behavior.
- Keep `useAsyncData()` handlers side-effect free. They can run during SSR and hydration.
- Use `$fetch()` for user-triggered writes or client-only actions, not top-level page data that should be hydrated from SSR.
- Use `lazy: true`, `useLazyFetch()`, or `useLazyAsyncData()` for non-critical data that should not block navigation. Handle `status === 'pending'` in the UI.
- Use `server: false` only for data that is not needed for SEO or the first paint.
- Trim payload size with `pick` and prefer shallower payloads when deep reactivity is unnecessary.
```ts
const route = useRoute()
const { data: article, status, error, refresh } = await useAsyncData(
() => `article:${route.params.slug}`,
() => $fetch(`/api/articles/${route.params.slug}`),
)
const { data: comments } = await useFetch(`/api/articles/${route.params.slug}/comments`, {
lazy: true,
server: false,
})
```
## Route Rules
Prefer `routeRules` in `nuxt.config.ts` for rendering and caching strategy:
```ts
export default defineNuxtConfig({
routeRules: {
'/': { prerender: true },
'/products/**': { swr: 3600 },
'/blog/**': { isr: true },
'/admin/**': { ssr: false },
'/api/**': { cache: { maxAge: 60 * 60 } },
},
})
```
- `prerender`: static HTML at build time
- `swr`: serve cached content and revalidate in the background
- `isr`: incremental static regeneration on supported platforms
- `ssr: false`: client-rendered route
- `cache` or `redirect`: Nitro-level response behavior
Pick route rules per route group, not globally. Marketing pages, catalogs, dashboards, and APIs usually need different strategies.
## Lazy Loading and Performance
- Nuxt already code-splits pages by route. Keep route boundaries meaningful before micro-optimizing component splits.
- Use the `Lazy` prefix to dynamically import non-critical components.
- Conditionally render lazy components with `v-if` so the chunk is not loaded until the UI actually needs it.
- Use lazy hydration for below-the-fold or non-critical interactive UI.
```vue
<template>
<LazyRecommendations v-if="showRecommendations" />
<LazyProductGallery hydrate-on-visible />
</template>
```
- For custom strategies, use `defineLazyHydrationComponent()` with a visibility or idle strategy.
- Nuxt lazy hydration works on single-file components. Passing new props to a lazily hydrated component will trigger hydration immediately.
- Use `NuxtLink` for internal navigation so Nuxt can prefetch route components and generated payloads.
## Review Checklist
- First SSR render and hydrated client render produce the same markup
- Page data uses `useFetch` or `useAsyncData`, not top-level `$fetch`
- Non-critical data is lazy and has explicit loading UI
- Route rules match the page's SEO and freshness requirements
- Heavy interactive islands are lazy-loaded or lazily hydrated

View File

@@ -0,0 +1,264 @@
---
name: rules-distill
description: "Scan skills to extract cross-cutting principles and distill them into rules — append, revise, or create new rule files"
origin: ECC
---
# Rules Distill
Scan installed skills, extract cross-cutting principles that appear in multiple skills, and distill them into rules — appending to existing rule files, revising outdated content, or creating new rule files.
Applies the "deterministic collection + LLM judgment" principle: scripts collect facts exhaustively, then an LLM cross-reads the full context and produces verdicts.
## When to Use
- Periodic rules maintenance (monthly or after installing new skills)
- After a skill-stocktake reveals patterns that should be rules
- When rules feel incomplete relative to the skills being used
## How It Works
The rules distillation process follows three phases:
### Phase 1: Inventory (Deterministic Collection)
#### 1a. Collect skill inventory
```bash
bash ~/.claude/skills/rules-distill/scripts/scan-skills.sh
```
#### 1b. Collect rules index
```bash
bash ~/.claude/skills/rules-distill/scripts/scan-rules.sh
```
#### 1c. Present to user
```
Rules Distillation — Phase 1: Inventory
────────────────────────────────────────
Skills: {N} files scanned
Rules: {M} files ({K} headings indexed)
Proceeding to cross-read analysis...
```
### Phase 2: Cross-read, Match & Verdict (LLM Judgment)
Extraction and matching are unified in a single pass. Rules files are small enough (~800 lines total) that the full text can be provided to the LLM — no grep pre-filtering needed.
#### Batching
Group skills into **thematic clusters** based on their descriptions. Analyze each cluster in a subagent with the full rules text.
#### Cross-batch Merge
After all batches complete, merge candidates across batches:
- Deduplicate candidates with the same or overlapping principles
- Re-check the "2+ skills" requirement using evidence from **all** batches combined — a principle found in 1 skill per batch but 2+ skills total is valid
#### Subagent Prompt
Launch a general-purpose Agent with the following prompt:
````
You are an analyst who cross-reads skills to extract principles that should be promoted to rules.
## Input
- Skills: {full text of skills in this batch}
- Existing rules: {full text of all rule files}
## Extraction Criteria
Include a candidate ONLY if ALL of these are true:
1. **Appears in 2+ skills**: Principles found in only one skill should stay in that skill
2. **Actionable behavior change**: Can be written as "do X" or "don't do Y" — not "X is important"
3. **Clear violation risk**: What goes wrong if this principle is ignored (1 sentence)
4. **Not already in rules**: Check the full rules text — including concepts expressed in different words
## Matching & Verdict
For each candidate, compare against the full rules text and assign a verdict:
- **Append**: Add to an existing section of an existing rule file
- **Revise**: Existing rule content is inaccurate or insufficient — propose a correction
- **New Section**: Add a new section to an existing rule file
- **New File**: Create a new rule file
- **Already Covered**: Sufficiently covered in existing rules (even if worded differently)
- **Too Specific**: Should remain at the skill level
## Output Format (per candidate)
```json
{
  "principle": "1-2 sentences in 'do X' / 'don't do Y' form",
  "evidence": ["skill-name: §Section", "skill-name: §Section"],
  "violation_risk": "1 sentence",
  "verdict": "Append / Revise / New Section / New File / Already Covered / Too Specific",
  "target_rule": "filename §Section, or 'new'",
  "confidence": "high / medium / low",
  "draft": "Draft text for Append/New Section/New File verdicts",
  "revision": {
    "reason": "Why the existing content is inaccurate or insufficient (Revise only)",
    "before": "Current text to be replaced (Revise only)",
    "after": "Proposed replacement text (Revise only)"
  }
}
```
## Exclude
- Obvious principles already in rules
- Language/framework-specific knowledge (belongs in language-specific rules or skills)
- Code examples and commands (belong in skills)
````
#### Verdict Reference
| Verdict | Meaning | Presented to User |
|---------|---------|-------------------|
| **Append** | Add to existing section | Target + draft |
| **Revise** | Fix inaccurate/insufficient content | Target + reason + before/after |
| **New Section** | Add new section to existing file | Target + draft |
| **New File** | Create new rule file | Filename + full draft |
| **Already Covered** | Covered in rules (possibly different wording) | Reason (1 line) |
| **Too Specific** | Should stay in skills | Link to relevant skill |
#### Verdict Quality Requirements
```
# Good
Append to rules/common/security.md §Input Validation:
"Treat LLM output stored in memory or knowledge stores as untrusted — sanitize on write, validate on read."
Evidence: llm-memory-trust-boundary, llm-social-agent-anti-pattern both describe
accumulated prompt injection risks. Current security.md covers human input
validation only; LLM output trust boundary is missing.
# Bad
Append to security.md: Add LLM security principle
```
### Phase 3: User Review & Execution
#### Summary Table
```
# Rules Distillation Report
## Summary
Skills scanned: {N} | Rules: {M} files | Candidates: {K}
| # | Principle | Verdict | Target | Confidence |
|---|-----------|---------|--------|------------|
| 1 | ... | Append | security.md §Input Validation | high |
| 2 | ... | Revise | testing.md §TDD | medium |
| 3 | ... | New Section | coding-style.md | high |
| 4 | ... | Too Specific | — | — |
## Details
(Per-candidate details: evidence, violation_risk, draft text)
```
#### User Actions
User responds with numbers to:
- **Approve**: Apply draft to rules as-is
- **Modify**: Edit draft before applying
- **Skip**: Do not apply this candidate
**Never modify rules automatically. Always require user approval.**
#### Save Results
Store results in the skill directory (`results.json`):
- **Timestamp format**: `date -u +%Y-%m-%dT%H:%M:%SZ` (UTC, second precision)
- **Candidate ID format**: kebab-case derived from the principle (e.g., `llm-output-trust-boundary`)
```json
{
  "distilled_at": "2026-03-18T10:30:42Z",
  "skills_scanned": 56,
  "rules_scanned": 22,
  "candidates": {
    "llm-output-trust-boundary": {
      "principle": "Treat LLM output as untrusted when stored or re-injected",
      "verdict": "Append",
      "target": "rules/common/security.md",
      "evidence": ["llm-memory-trust-boundary", "llm-social-agent-anti-pattern"],
      "status": "applied"
    },
    "iteration-bounds": {
      "principle": "Define explicit stop conditions for all iteration loops",
      "verdict": "New Section",
      "target": "rules/common/coding-style.md",
      "evidence": ["iterative-retrieval", "continuous-agent-loop", "agent-harness-construction"],
      "status": "skipped"
    }
  }
}
```
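The ID and timestamp conventions above can be sketched in shell. The `to_candidate_id` helper name is illustrative (the skill does not prescribe an implementation); the `date` invocation is the one specified above.

```shell
# Derive a kebab-case candidate ID from a principle: lowercase, collapse
# runs of non-alphanumerics to "-", trim leading/trailing dashes.
to_candidate_id() {
  printf '%s' "$1" \
    | tr '[:upper:]' '[:lower:]' \
    | sed -E 's/[^a-z0-9]+/-/g; s/^-+//; s/-+$//'
}

to_candidate_id "Treat LLM output as untrusted when stored or re-injected"
# → treat-llm-output-as-untrusted-when-stored-or-re-injected

# UTC timestamp, second precision, as used for "distilled_at"
distilled_at=$(date -u +%Y-%m-%dT%H:%M:%SZ)
```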
## Example
### End-to-end run
```
$ /rules-distill
Rules Distillation — Phase 1: Inventory
────────────────────────────────────────
Skills: 56 files scanned
Rules: 22 files (75 headings indexed)
Proceeding to cross-read analysis...
[Subagent analysis: Batch 1 (agent/meta skills) ...]
[Subagent analysis: Batch 2 (coding/pattern skills) ...]
[Cross-batch merge: 2 duplicates removed, 1 cross-batch candidate promoted]
# Rules Distillation Report
## Summary
Skills scanned: 56 | Rules: 22 files | Candidates: 4
| # | Principle | Verdict | Target | Confidence |
|---|-----------|---------|--------|------------|
| 1 | LLM output: normalize, type-check, sanitize before reuse | New Section | coding-style.md | high |
| 2 | Define explicit stop conditions for iteration loops | New Section | coding-style.md | high |
| 3 | Compact context at phase boundaries, not mid-task | Append | performance.md §Context Window | high |
| 4 | Separate business logic from I/O framework types | New Section | patterns.md | high |
## Details
### 1. LLM Output Validation
Verdict: New Section in coding-style.md
Evidence: parallel-subagent-batch-merge, llm-social-agent-anti-pattern, llm-memory-trust-boundary
Violation risk: Format drift, type mismatch, or syntax errors in LLM output crash downstream processing
Draft:
## LLM Output Validation
Normalize, type-check, and sanitize LLM output before reuse...
See skill: parallel-subagent-batch-merge, llm-memory-trust-boundary
[... details for candidates 2-4 ...]
Approve, modify, or skip each candidate by number:
> User: Approve 1, 3. Skip 2, 4.
✓ Applied: coding-style.md §LLM Output Validation
✓ Applied: performance.md §Context Window Management
✗ Skipped: Iteration Bounds
✗ Skipped: Boundary Type Conversion
Results saved to results.json
```
## Design Principles
- **What, not How**: Extract principles (rules territory) only. Code examples and commands stay in skills.
- **Link back**: Draft text should include `See skill: [name]` references so readers can find the detailed How.
- **Deterministic collection, LLM judgment**: Scripts guarantee exhaustiveness; the LLM guarantees contextual understanding.
- **Anti-abstraction safeguard**: The 3-layer filter (2+ skills evidence, actionable behavior test, violation risk) prevents overly abstract principles from entering rules.

View File

@@ -0,0 +1,58 @@
#!/usr/bin/env bash
# scan-rules.sh — enumerate rule files and extract H2 heading index
# Usage: scan-rules.sh [RULES_DIR]
# Output: JSON to stdout
#
# Environment:
# RULES_DISTILL_DIR Override ~/.claude/rules (for testing only)
set -euo pipefail
RULES_DIR="${RULES_DISTILL_DIR:-${1:-$HOME/.claude/rules}}"
if [[ ! -d "$RULES_DIR" ]]; then
  jq -n --arg path "$RULES_DIR" '{"error":"rules directory not found","path":$path}' >&2
  exit 1
fi

# Collect all .md files (excluding _archived/)
files=()
while IFS= read -r f; do
  files+=("$f")
done < <(find "$RULES_DIR" -name '*.md' -not -path '*/_archived/*' -print | sort)

total=${#files[@]}
tmpdir=$(mktemp -d)
_rules_cleanup() { rm -rf "$tmpdir"; }
trap _rules_cleanup EXIT

for i in "${!files[@]}"; do
  file="${files[$i]}"
  rel_path="${file#"$HOME"/}"
  rel_path="~/$rel_path"
  # Extract H2 headings (## Title) into a JSON array via jq
  headings_json=$({ grep -E '^## ' "$file" 2>/dev/null || true; } | sed 's/^## //' | jq -R . | jq -s '.')
  # Get line count
  line_count=$(wc -l < "$file" | tr -d ' ')
  jq -n \
    --arg path "$rel_path" \
    --arg file "$(basename "$file")" \
    --argjson lines "$line_count" \
    --argjson headings "$headings_json" \
    '{path:$path,file:$file,lines:$lines,headings:$headings}' \
    > "$tmpdir/$i.json"
done

if [[ ${#files[@]} -eq 0 ]]; then
  jq -n --arg dir "$RULES_DIR" '{rules_dir:$dir,total:0,rules:[]}'
else
  jq -n \
    --arg dir "$RULES_DIR" \
    --argjson total "$total" \
    --argjson rules "$(jq -s '.' "$tmpdir"/*.json)" \
    '{rules_dir:$dir,total:$total,rules:$rules}'
fi

View File

@@ -0,0 +1,129 @@
#!/usr/bin/env bash
# scan-skills.sh — enumerate skill files, extract frontmatter and UTC mtime
# Usage: scan-skills.sh [CWD_SKILLS_DIR]
# Output: JSON to stdout
#
# When CWD_SKILLS_DIR is omitted, defaults to $PWD/.claude/skills so the
# script always picks up project-level skills without relying on the caller.
#
# Environment:
# RULES_DISTILL_GLOBAL_DIR Override ~/.claude/skills (for testing only;
# do not set in production — intended for bats tests)
# RULES_DISTILL_PROJECT_DIR Override project dir detection (for testing only)
set -euo pipefail
GLOBAL_DIR="${RULES_DISTILL_GLOBAL_DIR:-$HOME/.claude/skills}"
CWD_SKILLS_DIR="${RULES_DISTILL_PROJECT_DIR:-${1:-$PWD/.claude/skills}}"
# Validate CWD_SKILLS_DIR looks like a .claude/skills path (defense-in-depth).
# Only warn when the path exists — a nonexistent path poses no traversal risk.
if [[ -n "$CWD_SKILLS_DIR" && -d "$CWD_SKILLS_DIR" && "$CWD_SKILLS_DIR" != */.claude/skills* ]]; then
  echo "Warning: CWD_SKILLS_DIR does not look like a .claude/skills path: $CWD_SKILLS_DIR" >&2
fi

# Extract a frontmatter field (handles both quoted and unquoted single-line values).
# Does NOT support multi-line YAML blocks (| or >) or nested YAML keys.
extract_field() {
  local file="$1" field="$2"
  awk -v f="$field" '
    BEGIN { fm=0 }
    /^---$/ { fm++; next }
    fm==1 {
      n = length(f) + 2
      if (substr($0, 1, n) == f ": ") {
        val = substr($0, n+1)
        gsub(/^"/, "", val)
        gsub(/"$/, "", val)
        print val
        exit
      }
    }
    fm>=2 { exit }
  ' "$file"
}

# Get file mtime in UTC ISO8601 (portable: GNU and BSD)
get_mtime() {
  local file="$1"
  local secs
  secs=$(stat -c %Y "$file" 2>/dev/null || stat -f %m "$file" 2>/dev/null) || return 1
  date -u -d "@$secs" +%Y-%m-%dT%H:%M:%SZ 2>/dev/null ||
    date -u -r "$secs" +%Y-%m-%dT%H:%M:%SZ
}

# Scan a directory and produce a JSON array of skill objects
scan_dir_to_json() {
  local dir="$1"
  local tmpdir
  tmpdir=$(mktemp -d)
  local _scan_tmpdir="$tmpdir"
  _scan_cleanup() { rm -rf "$_scan_tmpdir"; }
  trap _scan_cleanup RETURN
  local i=0
  while IFS= read -r file; do
    local name desc mtime dp
    name=$(extract_field "$file" "name")
    desc=$(extract_field "$file" "description")
    mtime=$(get_mtime "$file")
    dp="${file/#$HOME/~}"
    jq -n \
      --arg path "$dp" \
      --arg name "$name" \
      --arg description "$desc" \
      --arg mtime "$mtime" \
      '{path:$path,name:$name,description:$description,mtime:$mtime}' \
      > "$tmpdir/$i.json"
    i=$((i+1))
  done < <(find "$dir" -name "SKILL.md" -type f 2>/dev/null | sort)
  if [[ $i -eq 0 ]]; then
    echo "[]"
  else
    jq -s '.' "$tmpdir"/*.json
  fi
}

# --- Main ---
global_found="false"
global_count=0
global_skills="[]"
if [[ -d "$GLOBAL_DIR" ]]; then
  global_found="true"
  global_skills=$(scan_dir_to_json "$GLOBAL_DIR")
  global_count=$(echo "$global_skills" | jq 'length')
fi

project_found="false"
project_path=""
project_count=0
project_skills="[]"
if [[ -n "$CWD_SKILLS_DIR" && -d "$CWD_SKILLS_DIR" ]]; then
  project_found="true"
  project_path="$CWD_SKILLS_DIR"
  project_skills=$(scan_dir_to_json "$CWD_SKILLS_DIR")
  project_count=$(echo "$project_skills" | jq 'length')
fi

# Merge global + project skills into one array
all_skills=$(jq -s 'add' <(echo "$global_skills") <(echo "$project_skills"))

jq -n \
  --arg global_found "$global_found" \
  --argjson global_count "$global_count" \
  --arg project_found "$project_found" \
  --arg project_path "$project_path" \
  --argjson project_count "$project_count" \
  --argjson skills "$all_skills" \
  '{
    scan_summary: {
      global: { found: ($global_found == "true"), count: $global_count },
      project: { found: ($project_found == "true"), path: $project_path, count: $project_count }
    },
    skills: $skills
  }'

View File

@@ -52,6 +52,16 @@ function writeInstallComponentsManifest(testDir, components) {
   });
 }
 
+function stripShebang(source) {
+  let s = source;
+  if (s.charCodeAt(0) === 0xFEFF) s = s.slice(1);
+  if (s.startsWith('#!')) {
+    const nl = s.indexOf('\n');
+    s = nl === -1 ? '' : s.slice(nl + 1);
+  }
+  return s;
+}
+
 /**
  * Run modified source via a temp file (avoids Windows node -e shebang issues).
  * The temp file is written inside the repo so require() can resolve node_modules.
@@ -95,8 +105,8 @@ function runValidatorWithDir(validatorName, dirConstant, overridePath) {
   // Read the validator source, replace the directory constant, and run as a wrapper
   let source = fs.readFileSync(validatorPath, 'utf8');
-  // Remove the shebang line (Windows node cannot parse shebangs in eval/inline mode)
-  source = source.replace(/^#!.*\n/, '');
+  // Remove the shebang line so wrappers also work against CRLF-checked-out files on Windows.
+  source = stripShebang(source);
   // Replace the directory constant with our override path
   const dirRegex = new RegExp(`const ${dirConstant} = .*?;`);
@@ -113,7 +123,7 @@ function runValidatorWithDir(validatorName, dirConstant, overridePath) {
 function runValidatorWithDirs(validatorName, overrides) {
   const validatorPath = path.join(validatorsDir, `${validatorName}.js`);
   let source = fs.readFileSync(validatorPath, 'utf8');
-  source = source.replace(/^#!.*\n/, '');
+  source = stripShebang(source);
   for (const [constant, overridePath] of Object.entries(overrides)) {
     const dirRegex = new RegExp(`const ${constant} = .*?;`);
     source = source.replace(dirRegex, `const ${constant} = ${JSON.stringify(overridePath)};`);
@@ -145,7 +155,7 @@ function runValidator(validatorName) {
 function runCatalogValidator(overrides = {}) {
   const validatorPath = path.join(validatorsDir, 'catalog.js');
   let source = fs.readFileSync(validatorPath, 'utf8');
-  source = source.replace(/^#!.*\n/, '');
+  source = stripShebang(source);
   source = `process.argv.push('--text');\n${source}`;
   const resolvedOverrides = {
@@ -202,6 +212,11 @@ function runTests() {
   // ==========================================
   console.log('validate-agents.js:');
 
+  if (test('strips CRLF shebangs before writing temp wrappers', () => {
+    const source = '#!/usr/bin/env node\r\nconsole.log("ok");';
+    assert.strictEqual(stripShebang(source), 'console.log("ok");');
+  })) passed++; else failed++;
+
   if (test('passes on real project agents', () => {
     const result = runValidator('validate-agents');
     assert.strictEqual(result.code, 0, `Should pass, got stderr: ${result.stderr}`);

View File

@@ -28,6 +28,13 @@ function makeTempDir() {
   return fs.mkdtempSync(path.join(os.tmpdir(), 'cost-tracker-test-'));
 }
 
+function withTempHome(homeDir) {
+  return {
+    HOME: homeDir,
+    USERPROFILE: homeDir,
+  };
+}
+
 function runScript(input, envOverrides = {}) {
   const inputStr = typeof input === 'string' ? input : JSON.stringify(input);
   const result = spawnSync('node', [script], {
@@ -64,7 +71,7 @@ function runTests() {
       model: 'claude-sonnet-4-20250514',
       usage: { input_tokens: 1000, output_tokens: 500 },
     };
-    const result = runScript(input, { HOME: tmpHome });
+    const result = runScript(input, withTempHome(tmpHome));
     assert.strictEqual(result.code, 0, `Expected exit code 0, got ${result.code}`);
     const metricsFile = path.join(tmpHome, '.claude', 'metrics', 'costs.jsonl');
@@ -84,7 +91,7 @@ function runTests() {
   // 3. Handles empty input gracefully
   (test('handles empty input gracefully', () => {
     const tmpHome = makeTempDir();
-    const result = runScript('', { HOME: tmpHome });
+    const result = runScript('', withTempHome(tmpHome));
     assert.strictEqual(result.code, 0, `Expected exit code 0, got ${result.code}`);
     // stdout should be empty since input was empty
     assert.strictEqual(result.stdout, '', 'Expected empty stdout for empty input');
@@ -96,7 +103,7 @@ function runTests() {
   (test('handles invalid JSON gracefully', () => {
     const tmpHome = makeTempDir();
     const invalidInput = 'not valid json {{{';
-    const result = runScript(invalidInput, { HOME: tmpHome });
+    const result = runScript(invalidInput, withTempHome(tmpHome));
     assert.strictEqual(result.code, 0, `Expected exit code 0, got ${result.code}`);
     // Should still pass through the raw input on stdout
     assert.strictEqual(result.stdout, invalidInput, 'Expected stdout to contain original invalid input');
@@ -109,7 +116,7 @@ function runTests() {
     const tmpHome = makeTempDir();
     const input = { model: 'claude-sonnet-4-20250514' };
     const inputStr = JSON.stringify(input);
-    const result = runScript(input, { HOME: tmpHome });
+    const result = runScript(input, withTempHome(tmpHome));
     assert.strictEqual(result.code, 0, `Expected exit code 0, got ${result.code}`);
     assert.strictEqual(result.stdout, inputStr, 'Expected stdout to match original input');

View File

@@ -8,11 +8,17 @@
  * Run with: node tests/hooks/detect-project-worktree.test.js
  */
 
+// Skip on Windows — these tests invoke bash scripts directly
+if (process.platform === 'win32') {
+  console.log('Skipping bash-dependent worktree tests on Windows\n');
+  process.exit(0);
+}
+
 const assert = require('assert');
 const path = require('path');
 const fs = require('fs');
 const os = require('os');
-const { execSync } = require('child_process');
+const { execFileSync, execSync } = require('child_process');
 
 let passed = 0;
 let failed = 0;
@@ -41,6 +47,20 @@ function cleanupDir(dir) {
   }
 }
 
+function toBashPath(filePath) {
+  if (process.platform !== 'win32') {
+    return filePath;
+  }
+  return String(filePath)
+    .replace(/^([A-Za-z]):/, (_, driveLetter) => `/${driveLetter.toLowerCase()}`)
+    .replace(/\\/g, '/');
+}
+
+function runBash(command, options = {}) {
+  return execFileSync('bash', ['-lc', command], options).toString().trim();
+}
+
 const repoRoot = path.resolve(__dirname, '..', '..');
 const detectProjectPath = path.join(
   repoRoot,
@@ -98,7 +118,7 @@ test('[ -d ] returns true for .git directory', () => {
   const dir = path.join(behaviorDir, 'test-d-dir');
   fs.mkdirSync(dir, { recursive: true });
   fs.mkdirSync(path.join(dir, '.git'));
-  const result = execSync(`bash -c '[ -d "${dir}/.git" ] && echo yes || echo no'`).toString().trim();
+  const result = runBash(`[ -d "${toBashPath(path.join(dir, '.git'))}" ] && echo yes || echo no`);
   assert.strictEqual(result, 'yes');
 });
@@ -106,7 +126,7 @@ test('[ -d ] returns false for .git file', () => {
   const dir = path.join(behaviorDir, 'test-d-file');
   fs.mkdirSync(dir, { recursive: true });
   fs.writeFileSync(path.join(dir, '.git'), 'gitdir: /some/path\n');
-  const result = execSync(`bash -c '[ -d "${dir}/.git" ] && echo yes || echo no'`).toString().trim();
+  const result = runBash(`[ -d "${toBashPath(path.join(dir, '.git'))}" ] && echo yes || echo no`);
   assert.strictEqual(result, 'no');
 });
@@ -114,7 +134,7 @@ test('[ -e ] returns true for .git directory', () => {
   const dir = path.join(behaviorDir, 'test-e-dir');
   fs.mkdirSync(dir, { recursive: true });
   fs.mkdirSync(path.join(dir, '.git'));
-  const result = execSync(`bash -c '[ -e "${dir}/.git" ] && echo yes || echo no'`).toString().trim();
+  const result = runBash(`[ -e "${toBashPath(path.join(dir, '.git'))}" ] && echo yes || echo no`);
   assert.strictEqual(result, 'yes');
 });
@@ -122,14 +142,14 @@ test('[ -e ] returns true for .git file', () => {
   const dir = path.join(behaviorDir, 'test-e-file');
   fs.mkdirSync(dir, { recursive: true });
   fs.writeFileSync(path.join(dir, '.git'), 'gitdir: /some/path\n');
-  const result = execSync(`bash -c '[ -e "${dir}/.git" ] && echo yes || echo no'`).toString().trim();
+  const result = runBash(`[ -e "${toBashPath(path.join(dir, '.git'))}" ] && echo yes || echo no`);
   assert.strictEqual(result, 'yes');
 });
 
 test('[ -e ] returns false when .git does not exist', () => {
   const dir = path.join(behaviorDir, 'test-e-none');
   fs.mkdirSync(dir, { recursive: true });
-  const result = execSync(`bash -c '[ -e "${dir}/.git" ] && echo yes || echo no'`).toString().trim();
+  const result = runBash(`[ -e "${toBashPath(path.join(dir, '.git'))}" ] && echo yes || echo no`);
   assert.strictEqual(result, 'no');
 });
@@ -188,20 +208,21 @@ test('detect-project.sh sets PROJECT_NAME and non-global PROJECT_ID for worktree
   // Source detect-project.sh from the worktree directory and capture results
   const script = `
-    export CLAUDE_PROJECT_DIR="${worktreeDir}"
-    export HOME="${testDir}"
-    source "${detectProjectPath}"
+    export CLAUDE_PROJECT_DIR="${toBashPath(worktreeDir)}"
+    export HOME="${toBashPath(testDir)}"
+    source "${toBashPath(detectProjectPath)}"
     echo "PROJECT_NAME=\${PROJECT_NAME}"
     echo "PROJECT_ID=\${PROJECT_ID}"
   `;
-  const result = execSync(`bash -c '${script.replace(/'/g, "'\\''")}'`, {
+  const result = execFileSync('bash', ['-lc', script], {
     cwd: worktreeDir,
     timeout: 10000,
     env: {
       ...process.env,
-      HOME: testDir,
-      CLAUDE_PROJECT_DIR: worktreeDir
+      HOME: toBashPath(testDir),
+      USERPROFILE: testDir,
+      CLAUDE_PROJECT_DIR: toBashPath(worktreeDir)
     }
   }).toString();

View File

@@ -0,0 +1,294 @@
/**
 * Tests for governance event capture hook.
 */

const assert = require('assert');
const {
  detectSecrets,
  detectApprovalRequired,
  detectSensitivePath,
  analyzeForGovernanceEvents,
  run,
} = require('../../scripts/hooks/governance-capture');

async function test(name, fn) {
  try {
    await fn();
    console.log(` \u2713 ${name}`);
    return true;
  } catch (error) {
    console.log(` \u2717 ${name}`);
    console.log(` Error: ${error.message}`);
    return false;
  }
}

async function runTests() {
  console.log('\n=== Testing governance-capture ===\n');
  let passed = 0;
  let failed = 0;

  // ── detectSecrets ──────────────────────────────────────────
  if (await test('detectSecrets finds AWS access keys', async () => {
    const findings = detectSecrets('my key is AKIAIOSFODNN7EXAMPLE');
    assert.ok(findings.length > 0);
    assert.ok(findings.some(f => f.name === 'aws_key'));
  })) passed += 1; else failed += 1;

  if (await test('detectSecrets finds generic secrets', async () => {
    const findings = detectSecrets('api_key = "sk-proj-abcdefghij1234567890"');
    assert.ok(findings.length > 0);
    assert.ok(findings.some(f => f.name === 'generic_secret'));
  })) passed += 1; else failed += 1;

  if (await test('detectSecrets finds private keys', async () => {
    const findings = detectSecrets('-----BEGIN RSA PRIVATE KEY-----\nMIIE...');
    assert.ok(findings.length > 0);
    assert.ok(findings.some(f => f.name === 'private_key'));
  })) passed += 1; else failed += 1;

  if (await test('detectSecrets finds GitHub tokens', async () => {
    const findings = detectSecrets('token: ghp_ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghij');
    assert.ok(findings.length > 0);
    assert.ok(findings.some(f => f.name === 'github_token'));
  })) passed += 1; else failed += 1;

  if (await test('detectSecrets returns empty array for clean text', async () => {
    const findings = detectSecrets('This is a normal log message with no secrets.');
    assert.strictEqual(findings.length, 0);
  })) passed += 1; else failed += 1;

  if (await test('detectSecrets handles null and undefined', async () => {
    assert.deepStrictEqual(detectSecrets(null), []);
    assert.deepStrictEqual(detectSecrets(undefined), []);
    assert.deepStrictEqual(detectSecrets(''), []);
  })) passed += 1; else failed += 1;

  // ── detectApprovalRequired ─────────────────────────────────
  if (await test('detectApprovalRequired flags force push', async () => {
    const findings = detectApprovalRequired('git push origin main --force');
    assert.ok(findings.length > 0);
  })) passed += 1; else failed += 1;

  if (await test('detectApprovalRequired flags hard reset', async () => {
    const findings = detectApprovalRequired('git reset --hard HEAD~3');
    assert.ok(findings.length > 0);
  })) passed += 1; else failed += 1;

  if (await test('detectApprovalRequired flags rm -rf', async () => {
    const findings = detectApprovalRequired('rm -rf /tmp/important');
    assert.ok(findings.length > 0);
  })) passed += 1; else failed += 1;

  if (await test('detectApprovalRequired flags DROP TABLE', async () => {
    const findings = detectApprovalRequired('DROP TABLE users');
    assert.ok(findings.length > 0);
  })) passed += 1; else failed += 1;

  if (await test('detectApprovalRequired allows safe commands', async () => {
    const findings = detectApprovalRequired('git status');
    assert.strictEqual(findings.length, 0);
  })) passed += 1; else failed += 1;

  if (await test('detectApprovalRequired handles null', async () => {
    assert.deepStrictEqual(detectApprovalRequired(null), []);
    assert.deepStrictEqual(detectApprovalRequired(''), []);
  })) passed += 1; else failed += 1;

  // ── detectSensitivePath ────────────────────────────────────
  if (await test('detectSensitivePath identifies .env files', async () => {
    assert.ok(detectSensitivePath('.env'));
    assert.ok(detectSensitivePath('.env.local'));
    assert.ok(detectSensitivePath('/project/.env.production'));
  })) passed += 1; else failed += 1;

  if (await test('detectSensitivePath identifies credential files', async () => {
    assert.ok(detectSensitivePath('credentials.json'));
    assert.ok(detectSensitivePath('/home/user/.ssh/id_rsa'));
    assert.ok(detectSensitivePath('server.key'));
    assert.ok(detectSensitivePath('cert.pem'));
  })) passed += 1; else failed += 1;

  if (await test('detectSensitivePath returns false for normal files', async () => {
    assert.ok(!detectSensitivePath('index.js'));
    assert.ok(!detectSensitivePath('README.md'));
    assert.ok(!detectSensitivePath('package.json'));
  })) passed += 1; else failed += 1;

  if (await test('detectSensitivePath handles null', async () => {
    assert.ok(!detectSensitivePath(null));
    assert.ok(!detectSensitivePath(''));
  })) passed += 1; else failed += 1;

  // ── analyzeForGovernanceEvents ─────────────────────────────
  if (await test('analyzeForGovernanceEvents detects secrets in tool input', async () => {
    const events = analyzeForGovernanceEvents({
      tool_name: 'Write',
      tool_input: {
        file_path: '/tmp/config.js',
        content: 'const key = "AKIAIOSFODNN7EXAMPLE";',
      },
    });
    assert.ok(events.length > 0);
    const secretEvent = events.find(e => e.eventType === 'secret_detected');
    assert.ok(secretEvent);
    assert.strictEqual(secretEvent.payload.severity, 'critical');
  })) passed += 1; else failed += 1;

  if (await test('analyzeForGovernanceEvents detects approval-required commands', async () => {
    const events = analyzeForGovernanceEvents({
      tool_name: 'Bash',
      tool_input: {
        command: 'git push origin main --force',
      },
    });
    assert.ok(events.length > 0);
    const approvalEvent = events.find(e => e.eventType === 'approval_requested');
    assert.ok(approvalEvent);
    assert.strictEqual(approvalEvent.payload.severity, 'high');
  })) passed += 1; else failed += 1;

  if (await test('analyzeForGovernanceEvents detects sensitive file access', async () => {
    const events = analyzeForGovernanceEvents({
      tool_name: 'Edit',
      tool_input: {
        file_path: '/project/.env.production',
        old_string: 'DB_URL=old',
        new_string: 'DB_URL=new',
      },
    });
    assert.ok(events.length > 0);
    const policyEvent = events.find(e => e.eventType === 'policy_violation');
    assert.ok(policyEvent);
    assert.strictEqual(policyEvent.payload.reason, 'sensitive_file_access');
  })) passed += 1; else failed += 1;

  if (await test('analyzeForGovernanceEvents detects elevated privilege commands', async () => {
    const events = analyzeForGovernanceEvents({
      tool_name: 'Bash',
      tool_input: { command: 'sudo rm -rf /etc/something' },
    }, {
      hookPhase: 'post',
    });
    const securityEvent = events.find(e => e.eventType === 'security_finding');
    assert.ok(securityEvent);
    assert.strictEqual(securityEvent.payload.reason, 'elevated_privilege_command');
  })) passed += 1; else failed += 1;

  if (await test('analyzeForGovernanceEvents returns empty for clean inputs', async () => {
    const events = analyzeForGovernanceEvents({
      tool_name: 'Read',
      tool_input: { file_path: '/project/src/index.js' },
    });
    assert.strictEqual(events.length, 0);
  })) passed += 1; else failed += 1;

  if (await test('analyzeForGovernanceEvents populates session ID from context', async () => {
    const events = analyzeForGovernanceEvents({
      tool_name: 'Write',
      tool_input: {
        file_path: '/project/.env',
        content: 'DB_URL=test',
      },
    }, {
      sessionId: 'test-session-123',
    });
    assert.ok(events.length > 0);
    assert.strictEqual(events[0].sessionId, 'test-session-123');
  })) passed += 1; else failed += 1;

  if (await test('analyzeForGovernanceEvents generates unique event IDs', async () => {
    const events1 = analyzeForGovernanceEvents({
      tool_name: 'Write',
      tool_input: { file_path: '.env', content: '' },
    });
    const events2 = analyzeForGovernanceEvents({
      tool_name: 'Write',
      tool_input: { file_path: '.env.local', content: '' },
    });
    if (events1.length > 0 && events2.length > 0) {
      assert.notStrictEqual(events1[0].id, events2[0].id);
    }
  })) passed += 1; else failed += 1;

  // ── run() function ─────────────────────────────────────────
  if (await test('run() passes through input when feature flag is off', async () => {
    const original = process.env.ECC_GOVERNANCE_CAPTURE;
    delete process.env.ECC_GOVERNANCE_CAPTURE;
    try {
      const input = JSON.stringify({ tool_name: 'Bash', tool_input: { command: 'git push --force' } });
      const result = run(input);
      assert.strictEqual(result, input);
    } finally {
      if (original !== undefined) {
        process.env.ECC_GOVERNANCE_CAPTURE = original;
      }
    }
  })) passed += 1; else failed += 1;

  if (await test('run() passes through input when feature flag is on', async () => {
    const original = process.env.ECC_GOVERNANCE_CAPTURE;
    process.env.ECC_GOVERNANCE_CAPTURE = '1';
    try {
      const input = JSON.stringify({ tool_name: 'Read', tool_input: { file_path: 'index.js' } });
      const result = run(input);
      assert.strictEqual(result, input);
    } finally {
      if (original !== undefined) {
        process.env.ECC_GOVERNANCE_CAPTURE = original;
      } else {
        delete process.env.ECC_GOVERNANCE_CAPTURE;
      }
    }
  })) passed += 1; else failed += 1;

  if (await test('run() handles invalid JSON gracefully', async () => {
    const original = process.env.ECC_GOVERNANCE_CAPTURE;
process.env.ECC_GOVERNANCE_CAPTURE = '1';
try {
const result = run('not valid json');
assert.strictEqual(result, 'not valid json');
} finally {
if (original !== undefined) {
process.env.ECC_GOVERNANCE_CAPTURE = original;
} else {
delete process.env.ECC_GOVERNANCE_CAPTURE;
}
}
})) passed += 1; else failed += 1;
if (await test('run() can detect multiple event types in one input', async () => {
// Bash command with force push AND secret in command
const events = analyzeForGovernanceEvents({
tool_name: 'Bash',
tool_input: {
command: 'API_KEY="AKIAIOSFODNN7EXAMPLE" git push --force',
},
});
const eventTypes = events.map(e => e.eventType);
assert.ok(eventTypes.includes('secret_detected'));
assert.ok(eventTypes.includes('approval_requested'));
})) passed += 1; else failed += 1;
console.log(`\nResults: Passed: ${passed}, Failed: ${failed}`);
process.exit(failed > 0 ? 1 : 0);
}
runTests();

View File

@@ -8,7 +8,9 @@ const assert = require('assert');
 const path = require('path');
 const fs = require('fs');
 const os = require('os');
-const { spawn, spawnSync } = require('child_process');
+const { execFileSync, spawn, spawnSync } = require('child_process');
+
+const SKIP_BASH = process.platform === 'win32';

 function toBashPath(filePath) {
   if (process.platform !== 'win32') {
@@ -16,10 +18,66 @@ function toBashPath(filePath) {
   }
   return String(filePath)
-    .replace(/^([A-Za-z]):/, (_, driveLetter) => `/mnt/${driveLetter.toLowerCase()}`)
+    .replace(/^([A-Za-z]):/, (_, driveLetter) => `/${driveLetter.toLowerCase()}`)
     .replace(/\\/g, '/');
 }
+
+function fromBashPath(filePath) {
+  if (process.platform !== 'win32') {
+    return filePath;
+  }
+  const rawPath = String(filePath || '');
+  if (!rawPath) {
+    return rawPath;
+  }
+  try {
+    return execFileSync(
+      'bash',
+      ['-lc', 'cygpath -w -- "$1"', 'bash', rawPath],
+      { stdio: ['ignore', 'pipe', 'ignore'] }
+    )
+      .toString()
+      .trim();
+  } catch {
+    // Fall back to common Git Bash path shapes when cygpath is unavailable.
+  }
+  const match = rawPath.match(/^\/(?:cygdrive\/)?([A-Za-z])\/(.*)$/)
+    || rawPath.match(/^\/\/([A-Za-z])\/(.*)$/);
+  if (match) {
+    return `${match[1].toUpperCase()}:\\${match[2].replace(/\//g, '\\')}`;
+  }
+  if (/^[A-Za-z]:\//.test(rawPath)) {
+    return rawPath.replace(/\//g, '\\');
+  }
+  return rawPath;
+}
+
+function normalizeComparablePath(filePath) {
+  const nativePath = fromBashPath(filePath);
+  if (!nativePath) {
+    return nativePath;
+  }
+  let comparablePath = nativePath;
+  try {
+    comparablePath = fs.realpathSync.native ? fs.realpathSync.native(nativePath) : fs.realpathSync(nativePath);
+  } catch {
+    comparablePath = path.resolve(nativePath);
+  }
+  comparablePath = comparablePath.replace(/[\\/]+/g, '/');
+  if (comparablePath.length > 1 && !/^[A-Za-z]:\/$/.test(comparablePath)) {
+    comparablePath = comparablePath.replace(/\/+$/, '');
+  }
+  return process.platform === 'win32' ? comparablePath.toLowerCase() : comparablePath;
+}

 function sleepMs(ms) {
   Atomics.wait(new Int32Array(new SharedArrayBuffer(4)), 0, 0, ms);
 }
@@ -93,8 +151,8 @@ function runShellScript(scriptPath, args = [], input = '', env = {}, cwd = proce
     }
     proc.stdin.end();
-    proc.stdout.on('data', data => stdout += data);
-    proc.stderr.on('data', data => stderr += data);
+    proc.stdout.on('data', data => (stdout += data));
+    proc.stderr.on('data', data => (stderr += data));
     proc.on('close', code => resolve({ code, stdout, stderr }));
     proc.on('error', reject);
   });
@@ -180,9 +238,7 @@ function assertNoProjectDetectionSideEffects(homeDir, testName) {
   assert.ok(!fs.existsSync(registryPath), `${testName} should not create projects.json`);
-  const projectEntries = fs.existsSync(projectsDir)
-    ? fs.readdirSync(projectsDir).filter(entry => fs.statSync(path.join(projectsDir, entry)).isDirectory())
-    : [];
+  const projectEntries = fs.existsSync(projectsDir) ? fs.readdirSync(projectsDir).filter(entry => fs.statSync(path.join(projectsDir, entry)).isDirectory()) : [];
   assert.strictEqual(projectEntries.length, 0, `${testName} should not create project directories`);
 }
@@ -204,11 +260,17 @@ async function assertObserveSkipBeforeProjectDetection(testCase) {
     ...(testCase.payload || {})
   });
-  const result = await runShellScript(observePath, ['post'], payload, {
-    HOME: homeDir,
-    USERPROFILE: homeDir,
-    ...testCase.env
-  }, projectDir);
+  const result = await runShellScript(
+    observePath,
+    ['post'],
+    payload,
+    {
+      HOME: homeDir,
+      USERPROFILE: homeDir,
+      ...testCase.env
+    },
+    projectDir
+  );
   assert.strictEqual(result.code, 0, `${testCase.name} should exit successfully, stderr: ${result.stderr}`);
   assertNoProjectDetectionSideEffects(homeDir, testCase.name);
@@ -228,13 +290,13 @@ function runPatchedRunAll(tempRoot) {
   const result = spawnSync('node', [wrapperPath], {
     encoding: 'utf8',
     stdio: ['pipe', 'pipe', 'pipe'],
-    timeout: 15000,
+    timeout: 15000
   });
   return {
     code: result.status ?? 1,
     stdout: result.stdout || '',
-    stderr: result.stderr || '',
+    stderr: result.stderr || ''
   };
 }
@@ -353,6 +415,36 @@ async function runTests() {
     passed++;
   else failed++;
+  if (
+    await asyncTest('strips ANSI escape codes from injected session content', async () => {
+      const isoHome = path.join(os.tmpdir(), `ecc-ansi-start-${Date.now()}`);
+      const sessionsDir = path.join(isoHome, '.claude', 'sessions');
+      fs.mkdirSync(sessionsDir, { recursive: true });
+      fs.mkdirSync(path.join(isoHome, '.claude', 'skills', 'learned'), { recursive: true });
+      const sessionFile = path.join(sessionsDir, '2026-02-11-winansi00-session.tmp');
+      fs.writeFileSync(
+        sessionFile,
+        '\x1b[H\x1b[2J\x1b[3J# Real Session\n\nI worked on \x1b[1;36mWindows terminal handling\x1b[0m.\x1b[K\n'
+      );
+      try {
+        const result = await runScript(path.join(scriptsDir, 'session-start.js'), '', {
+          HOME: isoHome,
+          USERPROFILE: isoHome
+        });
+        assert.strictEqual(result.code, 0);
+        assert.ok(result.stdout.includes('Previous session summary'), 'Should inject real session content');
+        assert.ok(result.stdout.includes('Windows terminal handling'), 'Should preserve sanitized session text');
+        assert.ok(!result.stdout.includes('\x1b['), 'Should not emit ANSI escape codes');
+      } finally {
+        fs.rmSync(isoHome, { recursive: true, force: true });
+      }
+    })
+  )
+    passed++;
+  else failed++;
   if (
     await asyncTest('reports learned skills count', async () => {
       const isoHome = path.join(os.tmpdir(), `ecc-skills-start-${Date.now()}`);
@@ -388,11 +480,7 @@ async function runTests() {
         tool_name: 'Write',
         tool_input: { file_path: 'src/index.ts', content: 'console.log("ok");' }
       });
-      const result = await runScript(
-        path.join(scriptsDir, 'insaits-security-wrapper.js'),
-        stdinData,
-        { ECC_ENABLE_INSAITS: '' }
-      );
+      const result = await runScript(path.join(scriptsDir, 'insaits-security-wrapper.js'), stdinData, { ECC_ENABLE_INSAITS: '' });
       assert.strictEqual(result.code, 0, `Exit code should be 0, got ${result.code}`);
       assert.strictEqual(result.stdout, stdinData, 'Should pass stdin through unchanged');
       assert.strictEqual(result.stderr, '', 'Should stay silent when integration is disabled');
@@ -1782,10 +1870,14 @@ async function runTests() {
       for (const hook of entry.hooks) {
         if (hook.type === 'command') {
           const isNode = hook.command.startsWith('node');
+          const isNpx = hook.command.startsWith('npx ');
           const isSkillScript = hook.command.includes('/skills/') && (/^(bash|sh)\s/.test(hook.command) || hook.command.startsWith('${CLAUDE_PLUGIN_ROOT}/skills/'));
           const isHookShellWrapper = /^(bash|sh)\s+["']?\$\{CLAUDE_PLUGIN_ROOT\}\/scripts\/hooks\/run-with-flags-shell\.sh/.test(hook.command);
           const isSessionStartFallback = hook.command.startsWith('bash -lc') && hook.command.includes('run-with-flags.js');
-          assert.ok(isNode || isSkillScript || isHookShellWrapper || isSessionStartFallback, `Hook command should use node or approved shell wrapper: ${hook.command.substring(0, 100)}...`);
+          assert.ok(
+            isNode || isNpx || isSkillScript || isHookShellWrapper || isSessionStartFallback,
+            `Hook command should use node or approved shell wrapper: ${hook.command.substring(0, 100)}...`
+          );
         }
       }
     }
@@ -1834,10 +1926,7 @@ async function runTests() {
       assert.ok(insaitsHook, 'Should define an InsAIts PreToolUse hook');
       assert.strictEqual(insaitsHook.matcher, 'Bash|Write|Edit|MultiEdit', 'InsAIts hook should avoid matching every tool');
       assert.ok(insaitsHook.description.includes('ECC_ENABLE_INSAITS=1'), 'InsAIts hook should document explicit opt-in');
-      assert.ok(
-        insaitsHook.hooks[0].command.includes('insaits-security-wrapper.js'),
-        'InsAIts hook should execute through the JS wrapper'
-      );
+      assert.ok(insaitsHook.hooks[0].command.includes('insaits-security-wrapper.js'), 'InsAIts hook should execute through the JS wrapper');
     })
   )
     passed++;
@@ -2261,10 +2350,7 @@ async function runTests() {
   if (
     test('observer-loop uses a configurable max-turn budget with safe default', () => {
-      const observerLoopSource = fs.readFileSync(
-        path.join(__dirname, '..', '..', 'skills', 'continuous-learning-v2', 'agents', 'observer-loop.sh'),
-        'utf8'
-      );
+      const observerLoopSource = fs.readFileSync(path.join(__dirname, '..', '..', 'skills', 'continuous-learning-v2', 'agents', 'observer-loop.sh'), 'utf8');
       assert.ok(observerLoopSource.includes('ECC_OBSERVER_MAX_TURNS'), 'observer-loop should allow max-turn overrides');
      assert.ok(observerLoopSource.includes('max_turns="${ECC_OBSERVER_MAX_TURNS:-10}"'), 'observer-loop should default to 10 turns');
@@ -2276,7 +2362,10 @@ async function runTests() {
     passed++;
   else failed++;
-  if (
+  if (SKIP_BASH) {
+    console.log('  ⊘ detect-project exports the resolved Python command (skipped on Windows)');
+    passed++;
+  } else if (
     await asyncTest('detect-project exports the resolved Python command for downstream scripts', async () => {
       const detectProjectPath = path.join(__dirname, '..', '..', 'skills', 'continuous-learning-v2', 'scripts', 'detect-project.sh');
       const shellCommand = [`source "${toBashPath(detectProjectPath)}" >/dev/null 2>&1`, 'printf "%s\\n" "${CLV2_PYTHON_CMD:-}"'].join('; ');
@@ -2304,7 +2393,10 @@ async function runTests() {
     passed++;
   else failed++;
-  if (
+  if (SKIP_BASH) {
+    console.log('  ⊘ detect-project writes project metadata (skipped on Windows)');
+    passed++;
+  } else if (
     await asyncTest('detect-project writes project metadata to the registry and project directory', async () => {
       const testRoot = createTestDir();
       const homeDir = path.join(testRoot, 'home');
@@ -2317,15 +2409,15 @@
       spawnSync('git', ['init'], { cwd: repoDir, stdio: 'ignore' });
       spawnSync('git', ['remote', 'add', 'origin', 'https://github.com/example/ecc-test.git'], { cwd: repoDir, stdio: 'ignore' });
-      const shellCommand = [
-        `cd "${toBashPath(repoDir)}"`,
-        `source "${toBashPath(detectProjectPath)}" >/dev/null 2>&1`,
-        'printf "%s\\n" "$PROJECT_ID"',
-        'printf "%s\\n" "$PROJECT_DIR"'
-      ].join('; ');
+      const shellCommand = [`cd "${toBashPath(repoDir)}"`, `source "${toBashPath(detectProjectPath)}" >/dev/null 2>&1`, 'printf "%s\\n" "$PROJECT_ID"', 'printf "%s\\n" "$PROJECT_DIR"'].join('; ');
       const proc = spawn('bash', ['-lc', shellCommand], {
-        env: { ...process.env, HOME: homeDir, USERPROFILE: homeDir },
+        env: {
+          ...process.env,
+          HOME: homeDir,
+          USERPROFILE: homeDir,
+          CLAUDE_PROJECT_DIR: ''
+        },
         stdio: ['ignore', 'pipe', 'pipe']
       });
@@ -2343,22 +2435,43 @@
       const [projectId, projectDir] = stdout.trim().split(/\r?\n/);
       const registryPath = path.join(homeDir, '.claude', 'homunculus', 'projects.json');
-      const projectMetadataPath = path.join(projectDir, 'project.json');
+      const expectedProjectDir = path.join(
+        homeDir,
+        '.claude',
+        'homunculus',
+        'projects',
+        projectId
+      );
+      const projectMetadataPath = path.join(expectedProjectDir, 'project.json');
       assert.ok(projectId, 'detect-project should emit a project id');
+      assert.ok(projectDir, 'detect-project should emit a project directory');
       assert.ok(fs.existsSync(registryPath), 'projects.json should be created');
       assert.ok(fs.existsSync(projectMetadataPath), 'project.json should be written in the project directory');
       const registry = JSON.parse(fs.readFileSync(registryPath, 'utf8'));
       const metadata = JSON.parse(fs.readFileSync(projectMetadataPath, 'utf8'));
+      const comparableMetadataRoot = normalizeComparablePath(metadata.root);
+      const comparableRepoDir = normalizeComparablePath(repoDir);
+      const comparableProjectDir = normalizeComparablePath(projectDir);
+      const comparableExpectedProjectDir = normalizeComparablePath(expectedProjectDir);
       assert.ok(registry[projectId], 'registry should contain the detected project');
       assert.strictEqual(metadata.id, projectId, 'project.json should include the detected id');
       assert.strictEqual(metadata.name, path.basename(repoDir), 'project.json should include the repo name');
-      assert.strictEqual(fs.realpathSync(metadata.root), fs.realpathSync(repoDir), 'project.json should include the repo root');
+      assert.strictEqual(
+        comparableMetadataRoot,
+        comparableRepoDir,
+        `project.json should include the repo root (expected ${comparableRepoDir}, got ${comparableMetadataRoot})`
+      );
       assert.strictEqual(metadata.remote, 'https://github.com/example/ecc-test.git', 'project.json should include the sanitized remote');
       assert.ok(metadata.created_at, 'project.json should include created_at');
       assert.ok(metadata.last_seen, 'project.json should include last_seen');
+      assert.strictEqual(
+        comparableProjectDir,
+        comparableExpectedProjectDir,
+        `PROJECT_DIR should point at the project storage directory (expected ${comparableExpectedProjectDir}, got ${comparableProjectDir})`
+      );
     } finally {
       cleanupTestDir(testRoot);
     }
@@ -2367,88 +2480,125 @@ async function runTests() {
     passed++;
   else failed++;
-  if (await asyncTest('observe.sh falls back to legacy output fields when tool_response is null', async () => {
-    const homeDir = createTestDir();
-    const projectDir = createTestDir();
-    const observePath = path.join(__dirname, '..', '..', 'skills', 'continuous-learning-v2', 'hooks', 'observe.sh');
-    const payload = JSON.stringify({
-      tool_name: 'Bash',
-      tool_input: { command: 'echo hello' },
-      tool_response: null,
-      tool_output: 'legacy output',
-      session_id: 'session-123',
-      cwd: projectDir
-    });
-    try {
-      const result = await runShellScript(observePath, ['post'], payload, {
-        HOME: homeDir,
-        USERPROFILE: homeDir,
-        CLAUDE_PROJECT_DIR: projectDir
-      }, projectDir);
-      assert.strictEqual(result.code, 0, `observe.sh should exit successfully, stderr: ${result.stderr}`);
-      const projectsDir = path.join(homeDir, '.claude', 'homunculus', 'projects');
-      const projectIds = fs.readdirSync(projectsDir);
-      assert.strictEqual(projectIds.length, 1, 'observe.sh should create one project-scoped observation directory');
-      const observationsPath = path.join(projectsDir, projectIds[0], 'observations.jsonl');
-      const observations = fs.readFileSync(observationsPath, 'utf8').trim().split('\n').filter(Boolean);
-      assert.ok(observations.length > 0, 'observe.sh should append at least one observation');
-      const observation = JSON.parse(observations[0]);
-      assert.strictEqual(observation.output, 'legacy output', 'observe.sh should fall back to legacy tool_output when tool_response is null');
-    } finally {
-      cleanupTestDir(homeDir);
-      cleanupTestDir(projectDir);
-    }
-  })) passed++; else failed++;
+  if (SKIP_BASH) {
+    console.log('  ⊘ observe.sh falls back to legacy output fields (skipped on Windows)');
+    passed++;
+  } else if (
+    await asyncTest('observe.sh falls back to legacy output fields when tool_response is null', async () => {
+      const homeDir = createTestDir();
+      const projectDir = createTestDir();
+      const observePath = path.join(__dirname, '..', '..', 'skills', 'continuous-learning-v2', 'hooks', 'observe.sh');
+      const payload = JSON.stringify({
+        tool_name: 'Bash',
+        tool_input: { command: 'echo hello' },
+        tool_response: null,
+        tool_output: 'legacy output',
+        session_id: 'session-123',
+        cwd: projectDir
+      });
+      try {
+        const result = await runShellScript(
+          observePath,
+          ['post'],
+          payload,
+          {
+            HOME: homeDir,
+            USERPROFILE: homeDir,
+            CLAUDE_PROJECT_DIR: projectDir
+          },
+          projectDir
+        );
+        assert.strictEqual(result.code, 0, `observe.sh should exit successfully, stderr: ${result.stderr}`);
+        const projectsDir = path.join(homeDir, '.claude', 'homunculus', 'projects');
+        const projectIds = fs.readdirSync(projectsDir);
+        assert.strictEqual(projectIds.length, 1, 'observe.sh should create one project-scoped observation directory');
+        const observationsPath = path.join(projectsDir, projectIds[0], 'observations.jsonl');
+        const observations = fs.readFileSync(observationsPath, 'utf8').trim().split('\n').filter(Boolean);
+        assert.ok(observations.length > 0, 'observe.sh should append at least one observation');
+        const observation = JSON.parse(observations[0]);
+        assert.strictEqual(observation.output, 'legacy output', 'observe.sh should fall back to legacy tool_output when tool_response is null');
+      } finally {
+        cleanupTestDir(homeDir);
        cleanupTestDir(projectDir);
+      }
+    })
+  )
+    passed++;
+  else failed++;
-  if (await asyncTest('observe.sh skips non-cli entrypoints before project detection side effects', async () => {
-    await assertObserveSkipBeforeProjectDetection({
-      name: 'non-cli entrypoint',
-      env: { CLAUDE_CODE_ENTRYPOINT: 'mcp' }
-    });
-  })) passed++; else failed++;
+  if (SKIP_BASH) {
+    console.log('  ⊘ observe.sh skips non-cli entrypoints (skipped on Windows)');
+    passed++;
+  } else if (
+    await asyncTest('observe.sh skips non-cli entrypoints before project detection side effects', async () => {
+      await assertObserveSkipBeforeProjectDetection({
+        name: 'non-cli entrypoint',
+        env: { CLAUDE_CODE_ENTRYPOINT: 'mcp' }
+      });
+    })
+  )
+    passed++;
+  else failed++;
-  if (await asyncTest('observe.sh skips minimal hook profile before project detection side effects', async () => {
-    await assertObserveSkipBeforeProjectDetection({
-      name: 'minimal hook profile',
-      env: { CLAUDE_CODE_ENTRYPOINT: 'cli', ECC_HOOK_PROFILE: 'minimal' }
-    });
-  })) passed++; else failed++;
+  if (SKIP_BASH) {
+    console.log('  ⊘ observe.sh skips minimal hook profile (skipped on Windows)');
+    passed++;
+  } else if (
+    await asyncTest('observe.sh skips minimal hook profile before project detection side effects', async () => {
+      await assertObserveSkipBeforeProjectDetection({
+        name: 'minimal hook profile',
+        env: { CLAUDE_CODE_ENTRYPOINT: 'cli', ECC_HOOK_PROFILE: 'minimal' }
+      });
+    })
+  )
+    passed++;
+  else failed++;
-  if (await asyncTest('observe.sh skips cooperative skip env before project detection side effects', async () => {
-    await assertObserveSkipBeforeProjectDetection({
-      name: 'cooperative skip env',
-      env: { CLAUDE_CODE_ENTRYPOINT: 'cli', ECC_SKIP_OBSERVE: '1' }
-    });
-  })) passed++; else failed++;
+  if (SKIP_BASH) {
+    console.log('  ⊘ observe.sh skips cooperative skip env (skipped on Windows)');
+    passed++;
+  } else if (
+    await asyncTest('observe.sh skips cooperative skip env before project detection side effects', async () => {
+      await assertObserveSkipBeforeProjectDetection({
+        name: 'cooperative skip env',
+        env: { CLAUDE_CODE_ENTRYPOINT: 'cli', ECC_SKIP_OBSERVE: '1' }
+      });
+    })
+  )
+    passed++;
+  else failed++;
-  if (await asyncTest('observe.sh skips subagent payloads before project detection side effects', async () => {
-    await assertObserveSkipBeforeProjectDetection({
-      name: 'subagent payload',
-      env: { CLAUDE_CODE_ENTRYPOINT: 'cli' },
-      payload: { agent_id: 'agent-123' }
-    });
-  })) passed++; else failed++;
+  if (SKIP_BASH) {
+    console.log('  ⊘ observe.sh skips subagent payloads (skipped on Windows)');
+    passed++;
+  } else if (
+    await asyncTest('observe.sh skips subagent payloads before project detection side effects', async () => {
+      await assertObserveSkipBeforeProjectDetection({
+        name: 'subagent payload',
+        env: { CLAUDE_CODE_ENTRYPOINT: 'cli' },
+        payload: { agent_id: 'agent-123' }
+      });
+    })
+  )
+    passed++;
+  else failed++;
-  if (await asyncTest('observe.sh skips configured observer-session paths before project detection side effects', async () => {
-    await assertObserveSkipBeforeProjectDetection({
-      name: 'cwd skip path',
-      env: {
-        CLAUDE_CODE_ENTRYPOINT: 'cli',
-        ECC_OBSERVE_SKIP_PATHS: ' observer-sessions , .claude-mem '
-      },
-      cwdSuffix: path.join('observer-sessions', 'worker')
-    });
-  })) passed++; else failed++;
+  if (SKIP_BASH) {
+    console.log('  ⊘ observe.sh skips configured observer-session paths (skipped on Windows)');
+    passed++;
+  } else if (
+    await asyncTest('observe.sh skips configured observer-session paths before project detection side effects', async () => {
+      await assertObserveSkipBeforeProjectDetection({
+        name: 'cwd skip path',
+        env: {
+          CLAUDE_CODE_ENTRYPOINT: 'cli',
+          ECC_OBSERVE_SKIP_PATHS: ' observer-sessions , .claude-mem '
+        },
        cwdSuffix: path.join('observer-sessions', 'worker')
+      });
+    })
+  )
+    passed++;
+  else failed++;
-  if (await asyncTest('matches .tsx extension for type checking', async () => {
-    const testDir = createTestDir();
-    const testFile = path.join(testDir, 'component.tsx');
-    fs.writeFileSync(testFile, 'const x: number = 1;');
-    const stdinJson = JSON.stringify({ tool_input: { file_path: testFile } });
-    const result = await runScript(path.join(scriptsDir, 'post-edit-typecheck.js'), stdinJson);
+  if (
+    await asyncTest('matches .tsx extension for type checking', async () => {
+      const testDir = createTestDir();
+      const testFile = path.join(testDir, 'component.tsx');
+      fs.writeFileSync(testFile, 'const x: number = 1;');
+      const stdinJson = JSON.stringify({ tool_input: { file_path: testFile } });
+      const result = await runScript(path.join(scriptsDir, 'post-edit-typecheck.js'), stdinJson);
@@ -2658,10 +2808,7 @@ async function runTests() {
     const branch = spawnSync('git', ['rev-parse', '--abbrev-ref', 'HEAD'], { encoding: 'utf8' }).stdout.trim();
     const project = path.basename(spawnSync('git', ['rev-parse', '--show-toplevel'], { encoding: 'utf8' }).stdout.trim());
-    fs.writeFileSync(
-      sessionFile,
-      `# Session: ${today}\n**Date:** ${today}\n**Started:** 09:00\n**Last Updated:** 09:00\n\n---\n\n## Current State\n\n[Session context goes here]\n`
-    );
+    fs.writeFileSync(sessionFile, `# Session: ${today}\n**Date:** ${today}\n**Started:** 09:00\n**Last Updated:** 09:00\n\n---\n\n## Current State\n\n[Session context goes here]\n`);
     const result = await runScript(path.join(scriptsDir, 'session-end.js'), '', {
       HOME: testDir,

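The `fromBashPath` helper added in the diff above shells out to `cygpath` and, when that is unavailable, falls back to pattern-matching common Git Bash and Cygwin path shapes. A standalone sketch of just that regex fallback (the function name `bashPathToWindows` is illustrative; the real helper also handles the non-Windows and `cygpath` branches):

```javascript
// Sketch of the regex fallback used when cygpath is unavailable.
// Handles the shapes covered in the diff: /c/..., /cygdrive/c/..., //c/..., C:/...
function bashPathToWindows(rawPath) {
  const match = rawPath.match(/^\/(?:cygdrive\/)?([A-Za-z])\/(.*)$/)
    || rawPath.match(/^\/\/([A-Za-z])\/(.*)$/);
  if (match) {
    // Drive-letter form: uppercase the drive and flip separators.
    return `${match[1].toUpperCase()}:\\${match[2].replace(/\//g, '\\')}`;
  }
  if (/^[A-Za-z]:\//.test(rawPath)) {
    // Already a Windows drive path with forward slashes.
    return rawPath.replace(/\//g, '\\');
  }
  // Anything else (e.g. relative paths) passes through unchanged.
  return rawPath;
}

console.log(bashPathToWindows('/c/Users/dev/project'));     // → C:\Users\dev\project
console.log(bashPathToWindows('/cygdrive/d/tmp/file.txt')); // → D:\tmp\file.txt
console.log(bashPathToWindows('C:/already/windows'));       // → C:\already\windows
```

This is why `normalizeComparablePath` lowercases only on Windows: the drive-letter case is normalized but POSIX paths stay case-sensitive.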
View File

@@ -0,0 +1,266 @@
/**
 * Tests for scripts/hooks/mcp-health-check.js
 *
 * Run with: node tests/hooks/mcp-health-check.test.js
 */
const assert = require('assert');
const fs = require('fs');
const os = require('os');
const path = require('path');
const { spawnSync } = require('child_process');

const script = path.join(__dirname, '..', '..', 'scripts', 'hooks', 'mcp-health-check.js');

function test(name, fn) {
  try {
    fn();
    console.log(`  ✓ ${name}`);
    return true;
  } catch (err) {
    console.log(`  ✗ ${name}`);
    console.log(`    Error: ${err.message}`);
    return false;
  }
}

async function asyncTest(name, fn) {
  try {
    await fn();
    console.log(`  ✓ ${name}`);
    return true;
  } catch (err) {
    console.log(`  ✗ ${name}`);
    console.log(`    Error: ${err.message}`);
    return false;
  }
}

function createTempDir() {
  return fs.mkdtempSync(path.join(os.tmpdir(), 'ecc-mcp-health-'));
}

function cleanupTempDir(dirPath) {
  fs.rmSync(dirPath, { recursive: true, force: true });
}

function writeConfig(configPath, body) {
  fs.writeFileSync(configPath, JSON.stringify(body, null, 2));
}

function readState(statePath) {
  return JSON.parse(fs.readFileSync(statePath, 'utf8'));
}

function createCommandConfig(scriptPath) {
  return {
    command: process.execPath,
    args: [scriptPath]
  };
}

function runHook(input, env = {}) {
  const result = spawnSync('node', [script], {
    input: JSON.stringify(input),
    encoding: 'utf8',
    env: {
      ...process.env,
      ECC_HOOK_PROFILE: 'standard',
      ...env
    },
    timeout: 15000,
    stdio: ['pipe', 'pipe', 'pipe']
  });
  return {
    code: result.status || 0,
    stdout: result.stdout || '',
    stderr: result.stderr || ''
  };
}
async function runTests() {
  console.log('\n=== Testing mcp-health-check.js ===\n');
  let passed = 0;
  let failed = 0;
  if (test('passes through non-MCP tools untouched', () => {
    const result = runHook(
      { tool_name: 'Read', tool_input: { file_path: 'README.md' } },
      { CLAUDE_HOOK_EVENT_NAME: 'PreToolUse' }
    );
    assert.strictEqual(result.code, 0, 'Expected non-MCP tool to pass through');
    assert.strictEqual(result.stderr, '', 'Expected no stderr for non-MCP tool');
  })) passed++; else failed++;
  if (await asyncTest('marks healthy command MCP servers and allows the tool call', async () => {
    const tempDir = createTempDir();
    const configPath = path.join(tempDir, 'claude.json');
    const statePath = path.join(tempDir, 'mcp-health.json');
    const serverScript = path.join(tempDir, 'healthy-server.js');
    try {
      fs.writeFileSync(serverScript, "setInterval(() => {}, 1000);\n");
      writeConfig(configPath, {
        mcpServers: {
          mock: createCommandConfig(serverScript)
        }
      });
      const input = { tool_name: 'mcp__mock__list_items', tool_input: {} };
      const result = runHook(input, {
        CLAUDE_HOOK_EVENT_NAME: 'PreToolUse',
        ECC_MCP_CONFIG_PATH: configPath,
        ECC_MCP_HEALTH_STATE_PATH: statePath,
        ECC_MCP_HEALTH_TIMEOUT_MS: '100'
      });
      assert.strictEqual(result.code, 0, `Expected healthy server to pass, got ${result.code}`);
      assert.strictEqual(result.stdout.trim(), JSON.stringify(input), 'Expected original JSON on stdout');
      const state = readState(statePath);
      assert.strictEqual(state.servers.mock.status, 'healthy', 'Expected mock server to be marked healthy');
    } finally {
      cleanupTempDir(tempDir);
    }
  })) passed++; else failed++;
  if (await asyncTest('blocks unhealthy command MCP servers and records backoff state', async () => {
    const tempDir = createTempDir();
    const configPath = path.join(tempDir, 'claude.json');
    const statePath = path.join(tempDir, 'mcp-health.json');
    const serverScript = path.join(tempDir, 'unhealthy-server.js');
    try {
      fs.writeFileSync(serverScript, "process.exit(1);\n");
      writeConfig(configPath, {
        mcpServers: {
          flaky: createCommandConfig(serverScript)
        }
      });
      const result = runHook(
        { tool_name: 'mcp__flaky__search', tool_input: {} },
        {
          CLAUDE_HOOK_EVENT_NAME: 'PreToolUse',
          ECC_MCP_CONFIG_PATH: configPath,
          ECC_MCP_HEALTH_STATE_PATH: statePath,
          ECC_MCP_HEALTH_TIMEOUT_MS: '100'
        }
      );
      assert.strictEqual(result.code, 2, 'Expected unhealthy server to block the MCP tool');
      assert.ok(result.stderr.includes('Blocking search'), `Expected blocking message, got: ${result.stderr}`);
      const state = readState(statePath);
      assert.strictEqual(state.servers.flaky.status, 'unhealthy', 'Expected flaky server to be marked unhealthy');
      assert.ok(state.servers.flaky.nextRetryAt > state.servers.flaky.checkedAt, 'Expected retry backoff to be recorded');
    } finally {
      cleanupTempDir(tempDir);
    }
  })) passed++; else failed++;
  if (await asyncTest('fail-open mode warns but does not block unhealthy MCP servers', async () => {
    const tempDir = createTempDir();
    const configPath = path.join(tempDir, 'claude.json');
    const statePath = path.join(tempDir, 'mcp-health.json');
    const serverScript = path.join(tempDir, 'relaxed-server.js');
    try {
      fs.writeFileSync(serverScript, "process.exit(1);\n");
      writeConfig(configPath, {
        mcpServers: {
          relaxed: createCommandConfig(serverScript)
        }
      });
      const result = runHook(
        { tool_name: 'mcp__relaxed__list', tool_input: {} },
        {
          CLAUDE_HOOK_EVENT_NAME: 'PreToolUse',
          ECC_MCP_CONFIG_PATH: configPath,
          ECC_MCP_HEALTH_STATE_PATH: statePath,
          ECC_MCP_HEALTH_FAIL_OPEN: '1',
          ECC_MCP_HEALTH_TIMEOUT_MS: '100'
        }
      );
      assert.strictEqual(result.code, 0, 'Expected fail-open mode to allow execution');
      assert.ok(result.stderr.includes('Blocking list') || result.stderr.includes('fall back'), 'Expected warning output in fail-open mode');
    } finally {
      cleanupTempDir(tempDir);
    }
  })) passed++; else failed++;
if (await asyncTest('post-failure reconnect command restores server health when a reprobe succeeds', async () => {
const tempDir = createTempDir();
const configPath = path.join(tempDir, 'claude.json');
const statePath = path.join(tempDir, 'mcp-health.json');
const switchFile = path.join(tempDir, 'server-mode.txt');
const reconnectFile = path.join(tempDir, 'reconnected.txt');
const probeScript = path.join(tempDir, 'probe-server.js');
fs.writeFileSync(switchFile, 'down');
fs.writeFileSync(
probeScript,
[
"const fs = require('fs');",
`const mode = fs.readFileSync(${JSON.stringify(switchFile)}, 'utf8').trim();`,
"if (mode === 'up') { setInterval(() => {}, 1000); } else { console.error('401 Unauthorized'); process.exit(1); }"
].join('\n')
);
const reconnectScript = path.join(tempDir, 'reconnect.js');
fs.writeFileSync(
reconnectScript,
[
"const fs = require('fs');",
`fs.writeFileSync(${JSON.stringify(switchFile)}, 'up');`,
`fs.writeFileSync(${JSON.stringify(reconnectFile)}, 'done');`
].join('\n')
);
try {
writeConfig(configPath, {
mcpServers: {
authy: createCommandConfig(probeScript)
}
});
const result = runHook(
{
tool_name: 'mcp__authy__messages',
tool_input: {},
error: '401 Unauthorized'
},
{
CLAUDE_HOOK_EVENT_NAME: 'PostToolUseFailure',
ECC_MCP_CONFIG_PATH: configPath,
ECC_MCP_HEALTH_STATE_PATH: statePath,
ECC_MCP_RECONNECT_COMMAND: `node ${JSON.stringify(reconnectScript)}`,
ECC_MCP_HEALTH_TIMEOUT_MS: '100'
}
);
assert.strictEqual(result.code, 0, 'Expected failure hook to remain non-blocking');
assert.ok(result.stderr.includes('reported 401'), `Expected reconnect log, got: ${result.stderr}`);
assert.ok(result.stderr.includes('connection restored'), `Expected restored log, got: ${result.stderr}`);
assert.ok(fs.existsSync(reconnectFile), 'Expected reconnect command to run');
const state = readState(statePath);
assert.strictEqual(state.servers.authy.status, 'healthy', 'Expected authy server to be restored after reconnect');
} finally {
cleanupTempDir(tempDir);
}
})) passed++; else failed++;
console.log(`\nResults: Passed: ${passed}, Failed: ${failed}`);
process.exit(failed > 0 ? 1 : 0);
}
runTests().catch(error => {
console.error(error);
process.exit(1);
});


@@ -216,6 +216,10 @@ test('counter file handles missing/corrupt file gracefully', () => {
console.log('\n--- observe.sh end-to-end throttle (shell execution) ---');
test('observe.sh creates counter file and increments on each call', () => {
if (process.platform === 'win32') {
return;
}
// This test runs observe.sh with minimal input to verify counter behavior.
// We need python3, bash, and a valid project dir to test the full flow.
// We use ECC_SKIP_OBSERVE=0 and minimal JSON so observe.sh processes but


@@ -171,6 +171,16 @@ function cleanupTestDir(testDir) {
fs.rmSync(testDir, { recursive: true, force: true });
}
function getHookCommandByDescription(hooks, lifecycle, descriptionText) {
const hookGroup = hooks.hooks[lifecycle]?.find(
entry => entry.description && entry.description.includes(descriptionText)
);
assert.ok(hookGroup, `Expected ${lifecycle} hook matching "${descriptionText}"`);
assert.ok(hookGroup.hooks?.[0]?.command, `Expected ${lifecycle} hook command for "${descriptionText}"`);
return hookGroup.hooks[0].command;
}
// Test suite
async function runTests() {
console.log('\n=== Hook Integration Tests ===\n');
@@ -253,7 +263,11 @@ async function runTests() {
if (await asyncTest('dev server hook transforms command to tmux session', async () => {
// Test the auto-tmux dev hook — transforms dev commands to run in tmux
const hookCommand = getHookCommandByDescription(
hooks,
'PreToolUse',
'Auto-start dev servers in tmux'
);
const result = await runHookCommand(hookCommand, {
tool_input: { command: 'npm run dev' }
});
@@ -280,7 +294,11 @@ async function runTests() {
if (await asyncTest('dev server hook transforms yarn dev to tmux session', async () => {
// The auto-tmux dev hook transforms dev commands (yarn dev, npm run dev, etc.)
const hookCommand = getHookCommandByDescription(
hooks,
'PreToolUse',
'Auto-start dev servers in tmux'
);
const result = await runHookCommand(hookCommand, {
tool_input: { command: 'yarn dev' }
});
@@ -295,6 +313,50 @@ async function runTests() {
}
})) passed++; else failed++;
if (await asyncTest('MCP health hook blocks unhealthy MCP tool calls through hooks.json', async () => {
const hookCommand = getHookCommandByDescription(
hooks,
'PreToolUse',
'Check MCP server health before MCP tool execution'
);
const testDir = createTestDir();
const configPath = path.join(testDir, 'claude.json');
const statePath = path.join(testDir, 'mcp-health.json');
const serverScript = path.join(testDir, 'broken-mcp.js');
try {
fs.writeFileSync(serverScript, 'process.exit(1);\n');
fs.writeFileSync(
configPath,
JSON.stringify({
mcpServers: {
broken: {
command: process.execPath,
args: [serverScript]
}
}
})
);
const result = await runHookCommand(
hookCommand,
{ tool_name: 'mcp__broken__search', tool_input: {} },
{
CLAUDE_HOOK_EVENT_NAME: 'PreToolUse',
ECC_MCP_CONFIG_PATH: configPath,
ECC_MCP_HEALTH_STATE_PATH: statePath,
ECC_MCP_HEALTH_TIMEOUT_MS: '100'
}
);
assert.strictEqual(result.code, 2, 'Expected unhealthy MCP preflight to block');
assert.ok(result.stderr.includes('broken is unavailable'), `Expected health warning, got: ${result.stderr}`);
} finally {
cleanupTestDir(testDir);
}
})) passed++; else failed++;
if (await asyncTest('hooks handle missing files gracefully', async () => {
const testDir = createTestDir();
const transcriptPath = path.join(testDir, 'nonexistent.jsonl');
@@ -673,6 +735,7 @@ async function runTests() {
const isInline = hook.command.startsWith('node -e');
const isFilePath = hook.command.startsWith('node "');
const isNpx = hook.command.startsWith('npx ');
const isShellWrapper =
hook.command.startsWith('bash "') ||
hook.command.startsWith('sh "') ||
@@ -681,8 +744,8 @@ async function runTests() {
const isShellScriptPath = hook.command.endsWith('.sh');
assert.ok(
isInline || isFilePath || isNpx || isShellWrapper || isShellScriptPath,
`Hook command in ${hookType} should be node -e, node script, npx, or shell wrapper/script, got: ${hook.command.substring(0, 80)}`
);
}
}


@@ -0,0 +1,271 @@
/**
* Tests for scripts/lib/agent-compress.js
*
* Run with: node tests/lib/agent-compress.test.js
*/
const assert = require('assert');
const path = require('path');
const fs = require('fs');
const os = require('os');
const {
parseFrontmatter,
extractSummary,
loadAgent,
loadAgents,
compressToCatalog,
compressToSummary,
buildAgentCatalog,
lazyLoadAgent,
} = require('../../scripts/lib/agent-compress');
function test(name, fn) {
try {
fn();
console.log(` \u2713 ${name}`);
return true;
} catch (err) {
console.log(` \u2717 ${name}`);
console.log(` Error: ${err.message}`);
return false;
}
}
function runTests() {
console.log('\n=== Testing agent-compress ===\n');
let passed = 0;
let failed = 0;
// --- parseFrontmatter ---
if (test('parseFrontmatter extracts YAML frontmatter and body', () => {
const content = '---\nname: test-agent\ndescription: A test\ntools: ["Read", "Grep"]\nmodel: sonnet\n---\n\nBody text here.';
const { frontmatter, body } = parseFrontmatter(content);
assert.strictEqual(frontmatter.name, 'test-agent');
assert.strictEqual(frontmatter.description, 'A test');
assert.deepStrictEqual(frontmatter.tools, ['Read', 'Grep']);
assert.strictEqual(frontmatter.model, 'sonnet');
assert.ok(body.includes('Body text here.'));
})) passed++; else failed++;
if (test('parseFrontmatter handles content without frontmatter', () => {
const content = 'Just a regular markdown file.';
const { frontmatter, body } = parseFrontmatter(content);
assert.deepStrictEqual(frontmatter, {});
assert.strictEqual(body, content);
})) passed++; else failed++;
if (test('parseFrontmatter handles colons in values', () => {
const content = '---\nname: test\ndescription: Use this: it works\n---\n\nBody.';
const { frontmatter } = parseFrontmatter(content);
assert.strictEqual(frontmatter.description, 'Use this: it works');
})) passed++; else failed++;
if (test('parseFrontmatter strips surrounding quotes', () => {
const content = '---\nname: "quoted-name"\n---\n\nBody.';
const { frontmatter } = parseFrontmatter(content);
assert.strictEqual(frontmatter.name, 'quoted-name');
})) passed++; else failed++;
if (test('parseFrontmatter handles content ending right after closing ---', () => {
const content = '---\nname: test\ndescription: No body\n---';
const { frontmatter, body } = parseFrontmatter(content);
assert.strictEqual(frontmatter.name, 'test');
assert.strictEqual(frontmatter.description, 'No body');
assert.strictEqual(body, '');
})) passed++; else failed++;
// --- extractSummary ---
if (test('extractSummary returns the first paragraph of the body', () => {
const body = '# Heading\n\nThis is the first paragraph. It has two sentences.\n\nSecond paragraph.';
const summary = extractSummary(body);
assert.strictEqual(summary, 'This is the first paragraph.');
})) passed++; else failed++;
if (test('extractSummary returns empty string for empty body', () => {
assert.strictEqual(extractSummary(''), '');
assert.strictEqual(extractSummary('# Only Headings\n\n## Another'), '');
})) passed++; else failed++;
if (test('extractSummary skips code blocks', () => {
const body = '```\ncode here\n```\n\nActual summary sentence.';
const summary = extractSummary(body);
assert.strictEqual(summary, 'Actual summary sentence.');
})) passed++; else failed++;
if (test('extractSummary respects maxSentences', () => {
const body = 'First sentence. Second sentence. Third sentence.';
const one = extractSummary(body, 1);
const two = extractSummary(body, 2);
assert.strictEqual(one, 'First sentence.');
assert.strictEqual(two, 'First sentence. Second sentence.');
})) passed++; else failed++;
if (test('extractSummary skips plain bullet items', () => {
const body = '- plain bullet\n- another bullet\n\nActual paragraph here.';
const summary = extractSummary(body);
assert.strictEqual(summary, 'Actual paragraph here.');
})) passed++; else failed++;
if (test('extractSummary skips asterisk bullets and numbered lists', () => {
const body = '* star bullet\n1. numbered item\n2. second item\n\nReal paragraph.';
const summary = extractSummary(body);
assert.strictEqual(summary, 'Real paragraph.');
})) passed++; else failed++;
// --- loadAgent / loadAgents ---
// Create a temp directory with test agent files
const tmpDir = fs.mkdtempSync(path.join(os.tmpdir(), 'agent-compress-test-'));
const agentContent = '---\nname: test-agent\ndescription: A test agent\ntools: ["Read"]\nmodel: haiku\n---\n\nTest agent body paragraph.\n\n## Details\nMore info.';
fs.writeFileSync(path.join(tmpDir, 'test-agent.md'), agentContent);
fs.writeFileSync(path.join(tmpDir, 'not-an-agent.txt'), 'ignored');
if (test('loadAgent reads and parses a single agent file', () => {
const agent = loadAgent(path.join(tmpDir, 'test-agent.md'));
assert.strictEqual(agent.name, 'test-agent');
assert.strictEqual(agent.description, 'A test agent');
assert.deepStrictEqual(agent.tools, ['Read']);
assert.strictEqual(agent.model, 'haiku');
assert.ok(agent.body.includes('Test agent body paragraph'));
assert.strictEqual(agent.fileName, 'test-agent');
assert.ok(agent.byteSize > 0);
})) passed++; else failed++;
if (test('loadAgents reads all .md files from a directory', () => {
const agents = loadAgents(tmpDir);
assert.strictEqual(agents.length, 1);
assert.strictEqual(agents[0].name, 'test-agent');
})) passed++; else failed++;
if (test('loadAgents returns empty array for non-existent directory', () => {
const agents = loadAgents(path.join(os.tmpdir(), 'does-not-exist-agent-compress-test'));
assert.deepStrictEqual(agents, []);
})) passed++; else failed++;
// --- compressToCatalog / compressToSummary ---
const sampleAgent = loadAgent(path.join(tmpDir, 'test-agent.md'));
if (test('compressToCatalog strips body and keeps only metadata', () => {
const catalog = compressToCatalog(sampleAgent);
assert.strictEqual(catalog.name, 'test-agent');
assert.strictEqual(catalog.description, 'A test agent');
assert.deepStrictEqual(catalog.tools, ['Read']);
assert.strictEqual(catalog.model, 'haiku');
assert.strictEqual(catalog.body, undefined);
assert.strictEqual(catalog.byteSize, undefined);
})) passed++; else failed++;
if (test('compressToSummary includes first paragraph summary', () => {
const summary = compressToSummary(sampleAgent);
assert.strictEqual(summary.name, 'test-agent');
assert.ok(summary.summary.includes('Test agent body paragraph'));
assert.strictEqual(summary.body, undefined);
})) passed++; else failed++;
// --- buildAgentCatalog ---
if (test('buildAgentCatalog in catalog mode produces minimal output with stats', () => {
const result = buildAgentCatalog(tmpDir, { mode: 'catalog' });
assert.strictEqual(result.agents.length, 1);
assert.strictEqual(result.agents[0].body, undefined);
assert.strictEqual(result.stats.totalAgents, 1);
assert.strictEqual(result.stats.mode, 'catalog');
assert.ok(result.stats.originalBytes > 0);
assert.ok(result.stats.compressedBytes < result.stats.originalBytes);
assert.ok(result.stats.compressedTokenEstimate > 0);
})) passed++; else failed++;
if (test('buildAgentCatalog in summary mode includes summaries', () => {
const result = buildAgentCatalog(tmpDir, { mode: 'summary' });
assert.ok(result.agents[0].summary);
assert.strictEqual(result.agents[0].body, undefined);
})) passed++; else failed++;
if (test('buildAgentCatalog in full mode preserves body', () => {
const result = buildAgentCatalog(tmpDir, { mode: 'full' });
assert.ok(result.agents[0].body);
})) passed++; else failed++;
if (test('buildAgentCatalog throws on invalid mode', () => {
assert.throws(
() => buildAgentCatalog(tmpDir, { mode: 'invalid' }),
/Invalid mode "invalid"/
);
})) passed++; else failed++;
if (test('buildAgentCatalog supports filter function', () => {
// Add a second agent
fs.writeFileSync(
path.join(tmpDir, 'other-agent.md'),
'---\nname: other\ndescription: Other agent\ntools: ["Bash"]\nmodel: opus\n---\n\nOther body.'
);
const result = buildAgentCatalog(tmpDir, {
filter: a => a.model === 'opus',
});
assert.strictEqual(result.agents.length, 1);
assert.strictEqual(result.agents[0].name, 'other');
// Clean up
fs.unlinkSync(path.join(tmpDir, 'other-agent.md'));
})) passed++; else failed++;
// --- lazyLoadAgent ---
if (test('lazyLoadAgent loads a single agent by name', () => {
const agent = lazyLoadAgent(tmpDir, 'test-agent');
assert.ok(agent);
assert.strictEqual(agent.name, 'test-agent');
assert.ok(agent.body.includes('Test agent body paragraph'));
})) passed++; else failed++;
if (test('lazyLoadAgent returns null for non-existent agent', () => {
const agent = lazyLoadAgent(tmpDir, 'does-not-exist');
assert.strictEqual(agent, null);
})) passed++; else failed++;
if (test('lazyLoadAgent rejects path traversal attempts', () => {
const agent = lazyLoadAgent(tmpDir, '../etc/passwd');
assert.strictEqual(agent, null);
})) passed++; else failed++;
if (test('lazyLoadAgent rejects names with invalid characters', () => {
const agent = lazyLoadAgent(tmpDir, 'foo/bar');
assert.strictEqual(agent, null);
const agent2 = lazyLoadAgent(tmpDir, 'foo bar');
assert.strictEqual(agent2, null);
})) passed++; else failed++;
// --- Real agents directory ---
const realAgentsDir = path.resolve(__dirname, '../../agents');
if (test('buildAgentCatalog works with real agents directory', () => {
if (!fs.existsSync(realAgentsDir)) return; // skip if not present
const result = buildAgentCatalog(realAgentsDir, { mode: 'catalog' });
assert.ok(result.agents.length > 0, 'Should find at least one agent');
assert.ok(result.stats.compressedBytes < result.stats.originalBytes, 'Catalog should be smaller than original');
// Verify significant compression ratio
const ratio = result.stats.compressedBytes / result.stats.originalBytes;
assert.ok(ratio < 0.5, `Compression ratio ${ratio.toFixed(2)} should be < 0.5`);
})) passed++; else failed++;
if (test('catalog mode token estimate is under 5000 for real agents', () => {
if (!fs.existsSync(realAgentsDir)) return;
const result = buildAgentCatalog(realAgentsDir, { mode: 'catalog' });
assert.ok(
result.stats.compressedTokenEstimate < 5000,
`Token estimate ${result.stats.compressedTokenEstimate} exceeds 5000`
);
})) passed++; else failed++;
// Cleanup
fs.rmSync(tmpDir, { recursive: true, force: true });
console.log(`\nResults: Passed: ${passed}, Failed: ${failed}`);
process.exit(failed > 0 ? 1 : 0);
}
runTests();


@@ -0,0 +1,232 @@
/**
* Tests for inspection logic — pattern detection from failures.
*/
const assert = require('assert');
const {
normalizeFailureReason,
groupFailures,
detectPatterns,
generateReport,
suggestAction,
DEFAULT_FAILURE_THRESHOLD,
} = require('../../scripts/lib/inspection');
async function test(name, fn) {
try {
await fn();
console.log(` \u2713 ${name}`);
return true;
} catch (error) {
console.log(` \u2717 ${name}`);
console.log(` Error: ${error.message}`);
return false;
}
}
function makeSkillRun(overrides = {}) {
return {
id: overrides.id || `run-${Math.random().toString(36).slice(2, 8)}`,
skillId: overrides.skillId || 'test-skill',
skillVersion: overrides.skillVersion || '1.0.0',
sessionId: overrides.sessionId || 'session-1',
taskDescription: overrides.taskDescription || 'test task',
outcome: overrides.outcome || 'failure',
failureReason: overrides.failureReason || 'generic error',
tokensUsed: overrides.tokensUsed || 500,
durationMs: overrides.durationMs || 1000,
userFeedback: overrides.userFeedback || null,
createdAt: overrides.createdAt || '2026-03-15T08:00:00.000Z',
};
}
async function runTests() {
console.log('\n=== Testing inspection ===\n');
let passed = 0;
let failed = 0;
if (await test('normalizeFailureReason strips timestamps and UUIDs', async () => {
const normalized = normalizeFailureReason(
'Error at 2026-03-15T08:00:00.000Z for id 550e8400-e29b-41d4-a716-446655440000'
);
assert.ok(!normalized.includes('2026'));
assert.ok(!normalized.includes('550e8400'));
assert.ok(normalized.includes('<timestamp>'));
assert.ok(normalized.includes('<uuid>'));
})) passed += 1; else failed += 1;
if (await test('normalizeFailureReason strips file paths', async () => {
const normalized = normalizeFailureReason('File not found: /usr/local/bin/node');
assert.ok(!normalized.includes('/usr/local'));
assert.ok(normalized.includes('<path>'));
})) passed += 1; else failed += 1;
if (await test('normalizeFailureReason handles null and empty values', async () => {
assert.strictEqual(normalizeFailureReason(null), 'unknown');
assert.strictEqual(normalizeFailureReason(''), 'unknown');
assert.strictEqual(normalizeFailureReason(undefined), 'unknown');
})) passed += 1; else failed += 1;
if (await test('groupFailures groups by skillId and normalized reason', async () => {
const runs = [
makeSkillRun({ id: 'r1', skillId: 'skill-a', failureReason: 'timeout' }),
makeSkillRun({ id: 'r2', skillId: 'skill-a', failureReason: 'timeout' }),
makeSkillRun({ id: 'r3', skillId: 'skill-b', failureReason: 'parse error' }),
makeSkillRun({ id: 'r4', skillId: 'skill-a', outcome: 'success' }), // should be excluded
];
const groups = groupFailures(runs);
assert.strictEqual(groups.size, 2);
const skillAGroup = groups.get('skill-a::timeout');
assert.ok(skillAGroup);
assert.strictEqual(skillAGroup.runs.length, 2);
const skillBGroup = groups.get('skill-b::parse error');
assert.ok(skillBGroup);
assert.strictEqual(skillBGroup.runs.length, 1);
})) passed += 1; else failed += 1;
if (await test('groupFailures handles mixed outcome casing', async () => {
const runs = [
makeSkillRun({ id: 'r1', outcome: 'FAILURE', failureReason: 'timeout' }),
makeSkillRun({ id: 'r2', outcome: 'Failed', failureReason: 'timeout' }),
makeSkillRun({ id: 'r3', outcome: 'error', failureReason: 'timeout' }),
];
const groups = groupFailures(runs);
assert.strictEqual(groups.size, 1);
const group = groups.values().next().value;
assert.strictEqual(group.runs.length, 3);
})) passed += 1; else failed += 1;
if (await test('detectPatterns returns empty array when below threshold', async () => {
const runs = [
makeSkillRun({ id: 'r1', failureReason: 'timeout' }),
makeSkillRun({ id: 'r2', failureReason: 'timeout' }),
];
const patterns = detectPatterns(runs, { threshold: 3 });
assert.strictEqual(patterns.length, 0);
})) passed += 1; else failed += 1;
if (await test('detectPatterns detects patterns at or above threshold', async () => {
const runs = [
makeSkillRun({ id: 'r1', failureReason: 'timeout', createdAt: '2026-03-15T08:00:00Z' }),
makeSkillRun({ id: 'r2', failureReason: 'timeout', createdAt: '2026-03-15T08:01:00Z' }),
makeSkillRun({ id: 'r3', failureReason: 'timeout', createdAt: '2026-03-15T08:02:00Z' }),
];
const patterns = detectPatterns(runs, { threshold: 3 });
assert.strictEqual(patterns.length, 1);
assert.strictEqual(patterns[0].count, 3);
assert.strictEqual(patterns[0].skillId, 'test-skill');
assert.strictEqual(patterns[0].normalizedReason, 'timeout');
assert.strictEqual(patterns[0].firstSeen, '2026-03-15T08:00:00Z');
assert.strictEqual(patterns[0].lastSeen, '2026-03-15T08:02:00Z');
assert.strictEqual(patterns[0].runIds.length, 3);
})) passed += 1; else failed += 1;
if (await test('detectPatterns uses default threshold', async () => {
const runs = Array.from({ length: DEFAULT_FAILURE_THRESHOLD }, (_, i) =>
makeSkillRun({ id: `r${i}`, failureReason: 'permission denied' })
);
const patterns = detectPatterns(runs);
assert.strictEqual(patterns.length, 1);
})) passed += 1; else failed += 1;
if (await test('detectPatterns sorts by count descending', async () => {
const runs = [
// 4 timeouts
...Array.from({ length: 4 }, (_, i) =>
makeSkillRun({ id: `t${i}`, skillId: 'skill-a', failureReason: 'timeout' })
),
// 3 parse errors
...Array.from({ length: 3 }, (_, i) =>
makeSkillRun({ id: `p${i}`, skillId: 'skill-b', failureReason: 'parse error' })
),
];
const patterns = detectPatterns(runs, { threshold: 3 });
assert.strictEqual(patterns.length, 2);
assert.strictEqual(patterns[0].count, 4);
assert.strictEqual(patterns[0].skillId, 'skill-a');
assert.strictEqual(patterns[1].count, 3);
assert.strictEqual(patterns[1].skillId, 'skill-b');
})) passed += 1; else failed += 1;
if (await test('detectPatterns groups similar failure reasons with different timestamps', async () => {
const runs = [
makeSkillRun({ id: 'r1', failureReason: 'Error at 2026-03-15T08:00:00Z in /tmp/foo' }),
makeSkillRun({ id: 'r2', failureReason: 'Error at 2026-03-15T09:00:00Z in /tmp/bar' }),
makeSkillRun({ id: 'r3', failureReason: 'Error at 2026-03-15T10:00:00Z in /tmp/baz' }),
];
const patterns = detectPatterns(runs, { threshold: 3 });
assert.strictEqual(patterns.length, 1);
assert.ok(patterns[0].normalizedReason.includes('<timestamp>'));
assert.ok(patterns[0].normalizedReason.includes('<path>'));
})) passed += 1; else failed += 1;
if (await test('detectPatterns tracks unique session IDs and versions', async () => {
const runs = [
makeSkillRun({ id: 'r1', sessionId: 'sess-1', skillVersion: '1.0.0', failureReason: 'err' }),
makeSkillRun({ id: 'r2', sessionId: 'sess-2', skillVersion: '1.0.0', failureReason: 'err' }),
makeSkillRun({ id: 'r3', sessionId: 'sess-1', skillVersion: '1.1.0', failureReason: 'err' }),
];
const patterns = detectPatterns(runs, { threshold: 3 });
assert.strictEqual(patterns.length, 1);
assert.deepStrictEqual(patterns[0].sessionIds.sort(), ['sess-1', 'sess-2']);
assert.deepStrictEqual(patterns[0].versions.sort(), ['1.0.0', '1.1.0']);
})) passed += 1; else failed += 1;
if (await test('generateReport returns clean status with no patterns', async () => {
const report = generateReport([]);
assert.strictEqual(report.status, 'clean');
assert.strictEqual(report.patternCount, 0);
assert.ok(report.summary.includes('No recurring'));
assert.ok(report.generatedAt);
})) passed += 1; else failed += 1;
if (await test('generateReport produces structured report from patterns', async () => {
const runs = [
...Array.from({ length: 3 }, (_, i) =>
makeSkillRun({ id: `r${i}`, skillId: 'my-skill', failureReason: 'timeout' })
),
];
const patterns = detectPatterns(runs, { threshold: 3 });
const report = generateReport(patterns, { generatedAt: '2026-03-15T09:00:00Z' });
assert.strictEqual(report.status, 'attention_needed');
assert.strictEqual(report.patternCount, 1);
assert.strictEqual(report.totalFailures, 3);
assert.deepStrictEqual(report.affectedSkills, ['my-skill']);
assert.strictEqual(report.patterns[0].skillId, 'my-skill');
assert.ok(report.patterns[0].suggestedAction);
assert.strictEqual(report.generatedAt, '2026-03-15T09:00:00Z');
})) passed += 1; else failed += 1;
if (await test('suggestAction returns timeout-specific advice', async () => {
const action = suggestAction({ normalizedReason: 'timeout after 30s', versions: ['1.0.0'] });
assert.ok(action.toLowerCase().includes('timeout'));
})) passed += 1; else failed += 1;
if (await test('suggestAction returns permission-specific advice', async () => {
const action = suggestAction({ normalizedReason: 'permission denied', versions: ['1.0.0'] });
assert.ok(action.toLowerCase().includes('permission'));
})) passed += 1; else failed += 1;
if (await test('suggestAction returns version-span advice when multiple versions affected', async () => {
const action = suggestAction({ normalizedReason: 'something broke', versions: ['1.0.0', '1.1.0'] });
assert.ok(action.toLowerCase().includes('version'));
})) passed += 1; else failed += 1;
console.log(`\nResults: Passed: ${passed}, Failed: ${failed}`);
process.exit(failed > 0 ? 1 : 0);
}
runTests();


@@ -0,0 +1,247 @@
/**
* Tests for scripts/lib/resolve-ecc-root.js
*
* Covers the ECC root resolution fallback chain:
* 1. CLAUDE_PLUGIN_ROOT env var
* 2. Standard install (~/.claude/)
* 3. Plugin cache auto-detection
* 4. Fallback to ~/.claude/
*/
const assert = require('assert');
const fs = require('fs');
const os = require('os');
const path = require('path');
const { resolveEccRoot, INLINE_RESOLVE } = require('../../scripts/lib/resolve-ecc-root');
function test(name, fn) {
try {
fn();
console.log(` \u2713 ${name}`);
return true;
} catch (error) {
console.log(` \u2717 ${name}`);
console.log(` Error: ${error.message}`);
return false;
}
}
function createTempDir() {
return fs.mkdtempSync(path.join(os.tmpdir(), 'ecc-root-test-'));
}
function setupStandardInstall(homeDir) {
const claudeDir = path.join(homeDir, '.claude');
const scriptDir = path.join(claudeDir, 'scripts', 'lib');
fs.mkdirSync(scriptDir, { recursive: true });
fs.writeFileSync(path.join(scriptDir, 'utils.js'), '// stub');
return claudeDir;
}
function setupPluginCache(homeDir, orgName, version) {
const cacheDir = path.join(
homeDir, '.claude', 'plugins', 'cache',
'everything-claude-code', orgName, version
);
const scriptDir = path.join(cacheDir, 'scripts', 'lib');
fs.mkdirSync(scriptDir, { recursive: true });
fs.writeFileSync(path.join(scriptDir, 'utils.js'), '// stub');
return cacheDir;
}
function runTests() {
console.log('\n=== Testing resolve-ecc-root.js ===\n');
let passed = 0;
let failed = 0;
// ─── Env Var Priority ───
if (test('returns CLAUDE_PLUGIN_ROOT when set', () => {
const result = resolveEccRoot({ envRoot: '/custom/plugin/root' });
assert.strictEqual(result, '/custom/plugin/root');
})) passed++; else failed++;
if (test('trims whitespace from CLAUDE_PLUGIN_ROOT', () => {
const result = resolveEccRoot({ envRoot: ' /trimmed/root ' });
assert.strictEqual(result, '/trimmed/root');
})) passed++; else failed++;
if (test('skips empty CLAUDE_PLUGIN_ROOT', () => {
const homeDir = createTempDir();
try {
setupStandardInstall(homeDir);
const result = resolveEccRoot({ envRoot: '', homeDir });
assert.strictEqual(result, path.join(homeDir, '.claude'));
} finally {
fs.rmSync(homeDir, { recursive: true, force: true });
}
})) passed++; else failed++;
if (test('skips whitespace-only CLAUDE_PLUGIN_ROOT', () => {
const homeDir = createTempDir();
try {
setupStandardInstall(homeDir);
const result = resolveEccRoot({ envRoot: ' ', homeDir });
assert.strictEqual(result, path.join(homeDir, '.claude'));
} finally {
fs.rmSync(homeDir, { recursive: true, force: true });
}
})) passed++; else failed++;
// ─── Standard Install ───
if (test('finds standard install at ~/.claude/', () => {
const homeDir = createTempDir();
try {
setupStandardInstall(homeDir);
const result = resolveEccRoot({ envRoot: '', homeDir });
assert.strictEqual(result, path.join(homeDir, '.claude'));
} finally {
fs.rmSync(homeDir, { recursive: true, force: true });
}
})) passed++; else failed++;
// ─── Plugin Cache Auto-Detection ───
if (test('discovers plugin root from cache directory', () => {
const homeDir = createTempDir();
try {
const expected = setupPluginCache(homeDir, 'everything-claude-code', '1.8.0');
const result = resolveEccRoot({ envRoot: '', homeDir });
assert.strictEqual(result, expected);
} finally {
fs.rmSync(homeDir, { recursive: true, force: true });
}
})) passed++; else failed++;
if (test('prefers standard install over plugin cache', () => {
const homeDir = createTempDir();
try {
const claudeDir = setupStandardInstall(homeDir);
setupPluginCache(homeDir, 'everything-claude-code', '1.8.0');
const result = resolveEccRoot({ envRoot: '', homeDir });
assert.strictEqual(result, claudeDir,
'Standard install should take precedence over plugin cache');
} finally {
fs.rmSync(homeDir, { recursive: true, force: true });
}
})) passed++; else failed++;
if (test('handles multiple versions in plugin cache', () => {
const homeDir = createTempDir();
try {
setupPluginCache(homeDir, 'everything-claude-code', '1.7.0');
const expected = setupPluginCache(homeDir, 'everything-claude-code', '1.8.0');
const result = resolveEccRoot({ envRoot: '', homeDir });
// Should find one of them (either is valid)
assert.ok(
result === expected ||
result === path.join(homeDir, '.claude', 'plugins', 'cache', 'everything-claude-code', 'everything-claude-code', '1.7.0'),
'Should resolve to a valid plugin cache directory'
);
} finally {
fs.rmSync(homeDir, { recursive: true, force: true });
}
})) passed++; else failed++;
// ─── Fallback ───
if (test('falls back to ~/.claude/ when nothing is found', () => {
const homeDir = createTempDir();
try {
// Create ~/.claude but don't put scripts there
fs.mkdirSync(path.join(homeDir, '.claude'), { recursive: true });
const result = resolveEccRoot({ envRoot: '', homeDir });
assert.strictEqual(result, path.join(homeDir, '.claude'));
} finally {
fs.rmSync(homeDir, { recursive: true, force: true });
}
})) passed++; else failed++;
if (test('falls back gracefully when ~/.claude/ does not exist', () => {
const homeDir = createTempDir();
try {
const result = resolveEccRoot({ envRoot: '', homeDir });
assert.strictEqual(result, path.join(homeDir, '.claude'));
} finally {
fs.rmSync(homeDir, { recursive: true, force: true });
}
})) passed++; else failed++;
// ─── Custom Probe ───
if (test('supports custom probe path', () => {
const homeDir = createTempDir();
try {
const claudeDir = path.join(homeDir, '.claude');
fs.mkdirSync(path.join(claudeDir, 'custom'), { recursive: true });
fs.writeFileSync(path.join(claudeDir, 'custom', 'marker.js'), '// probe');
const result = resolveEccRoot({
envRoot: '',
homeDir,
probe: path.join('custom', 'marker.js'),
});
assert.strictEqual(result, claudeDir);
} finally {
fs.rmSync(homeDir, { recursive: true, force: true });
}
})) passed++; else failed++;
// ─── INLINE_RESOLVE ───
if (test('INLINE_RESOLVE is a non-empty string', () => {
assert.ok(typeof INLINE_RESOLVE === 'string');
assert.ok(INLINE_RESOLVE.length > 50, 'Should be a substantial inline expression');
})) passed++; else failed++;
if (test('INLINE_RESOLVE returns CLAUDE_PLUGIN_ROOT when set', () => {
const { execFileSync } = require('child_process');
const result = execFileSync('node', [
'-e', `console.log(${INLINE_RESOLVE})`,
], {
env: { ...process.env, CLAUDE_PLUGIN_ROOT: '/inline/test/root' },
encoding: 'utf8',
}).trim();
assert.strictEqual(result, '/inline/test/root');
})) passed++; else failed++;
if (test('INLINE_RESOLVE discovers plugin cache when env var is unset', () => {
const homeDir = createTempDir();
try {
const expected = setupPluginCache(homeDir, 'everything-claude-code', '1.9.0');
const { execFileSync } = require('child_process');
const result = execFileSync('node', [
'-e', `console.log(${INLINE_RESOLVE})`,
], {
env: { PATH: process.env.PATH, HOME: homeDir, USERPROFILE: homeDir },
encoding: 'utf8',
}).trim();
assert.strictEqual(result, expected);
} finally {
fs.rmSync(homeDir, { recursive: true, force: true });
}
})) passed++; else failed++;
if (test('INLINE_RESOLVE falls back to ~/.claude/ when nothing found', () => {
const homeDir = createTempDir();
try {
const { execFileSync } = require('child_process');
const result = execFileSync('node', [
'-e', `console.log(${INLINE_RESOLVE})`,
], {
env: { PATH: process.env.PATH, HOME: homeDir, USERPROFILE: homeDir },
encoding: 'utf8',
}).trim();
assert.strictEqual(result, path.join(homeDir, '.claude'));
} finally {
fs.rmSync(homeDir, { recursive: true, force: true });
}
})) passed++; else failed++;
console.log(`\nResults: Passed: ${passed}, Failed: ${failed}`);
process.exit(failed > 0 ? 1 : 0);
}
runTests();


@@ -0,0 +1,717 @@
/**
* Tests for --with / --without selective install flags (issue #470)
*
* Covers:
* - CLI argument parsing for --with and --without
* - Request normalization with include/exclude component IDs
* - Component-to-module expansion via the manifest catalog
* - End-to-end install plans with --with and --without
* - Validation and error handling for unknown component IDs
* - Combined --profile + --with + --without flows
* - Standalone --with without a profile
* - agent: and skill: component families
*/
const assert = require('assert');
const fs = require('fs');
const os = require('os');
const path = require('path');
const {
parseInstallArgs,
normalizeInstallRequest,
} = require('../../scripts/lib/install/request');
const {
listInstallComponents,
resolveInstallPlan,
} = require('../../scripts/lib/install-manifests');
function test(name, fn) {
try {
fn();
console.log(` \u2713 ${name}`);
return true;
} catch (error) {
console.log(` \u2717 ${name}`);
console.log(` Error: ${error.message}`);
return false;
}
}
function runTests() {
console.log('\n=== Testing --with / --without selective install flags ===\n');
let passed = 0;
let failed = 0;
// ─── CLI Argument Parsing ───
if (test('parses single --with flag', () => {
const parsed = parseInstallArgs([
'node', 'install-apply.js',
'--profile', 'core',
'--with', 'lang:typescript',
]);
assert.deepStrictEqual(parsed.includeComponentIds, ['lang:typescript']);
assert.deepStrictEqual(parsed.excludeComponentIds, []);
})) passed++; else failed++;
if (test('parses single --without flag', () => {
const parsed = parseInstallArgs([
'node', 'install-apply.js',
'--profile', 'developer',
'--without', 'capability:orchestration',
]);
assert.deepStrictEqual(parsed.excludeComponentIds, ['capability:orchestration']);
assert.deepStrictEqual(parsed.includeComponentIds, []);
})) passed++; else failed++;
if (test('parses multiple --with flags', () => {
const parsed = parseInstallArgs([
'node', 'install-apply.js',
'--with', 'lang:typescript',
'--with', 'framework:nextjs',
'--with', 'capability:database',
]);
assert.deepStrictEqual(parsed.includeComponentIds, [
'lang:typescript',
'framework:nextjs',
'capability:database',
]);
})) passed++; else failed++;
if (test('parses multiple --without flags', () => {
const parsed = parseInstallArgs([
'node', 'install-apply.js',
'--profile', 'full',
'--without', 'capability:media',
'--without', 'capability:social',
]);
assert.deepStrictEqual(parsed.excludeComponentIds, [
'capability:media',
'capability:social',
]);
})) passed++; else failed++;
if (test('parses combined --with and --without flags', () => {
const parsed = parseInstallArgs([
'node', 'install-apply.js',
'--profile', 'developer',
'--with', 'lang:typescript',
'--with', 'framework:nextjs',
'--without', 'capability:orchestration',
]);
assert.strictEqual(parsed.profileId, 'developer');
assert.deepStrictEqual(parsed.includeComponentIds, ['lang:typescript', 'framework:nextjs']);
assert.deepStrictEqual(parsed.excludeComponentIds, ['capability:orchestration']);
})) passed++; else failed++;
if (test('ignores empty --with values', () => {
const parsed = parseInstallArgs([
'node', 'install-apply.js',
'--with', '',
'--with', 'lang:python',
]);
assert.deepStrictEqual(parsed.includeComponentIds, ['lang:python']);
})) passed++; else failed++;
if (test('ignores empty --without values', () => {
const parsed = parseInstallArgs([
'node', 'install-apply.js',
'--profile', 'core',
'--without', '',
'--without', 'capability:media',
]);
assert.deepStrictEqual(parsed.excludeComponentIds, ['capability:media']);
})) passed++; else failed++;
// ─── Request Normalization ───
if (test('normalizes --with-only request as manifest mode', () => {
const request = normalizeInstallRequest({
target: 'claude',
profileId: null,
moduleIds: [],
includeComponentIds: ['lang:typescript'],
excludeComponentIds: [],
languages: [],
});
assert.strictEqual(request.mode, 'manifest');
assert.deepStrictEqual(request.includeComponentIds, ['lang:typescript']);
assert.deepStrictEqual(request.excludeComponentIds, []);
})) passed++; else failed++;
if (test('normalizes --profile + --with + --without as manifest mode', () => {
const request = normalizeInstallRequest({
target: 'cursor',
profileId: 'developer',
moduleIds: [],
includeComponentIds: ['lang:typescript', 'framework:nextjs'],
excludeComponentIds: ['capability:orchestration'],
languages: [],
});
assert.strictEqual(request.mode, 'manifest');
assert.strictEqual(request.profileId, 'developer');
assert.deepStrictEqual(request.includeComponentIds, ['lang:typescript', 'framework:nextjs']);
assert.deepStrictEqual(request.excludeComponentIds, ['capability:orchestration']);
})) passed++; else failed++;
if (test('rejects --with combined with legacy language arguments', () => {
assert.throws(
() => normalizeInstallRequest({
target: 'claude',
profileId: null,
moduleIds: [],
includeComponentIds: ['lang:typescript'],
excludeComponentIds: [],
languages: ['python'],
}),
/cannot be combined/
);
})) passed++; else failed++;
if (test('rejects --without combined with legacy language arguments', () => {
assert.throws(
() => normalizeInstallRequest({
target: 'claude',
profileId: null,
moduleIds: [],
includeComponentIds: [],
excludeComponentIds: ['capability:media'],
languages: ['typescript'],
}),
/cannot be combined/
);
})) passed++; else failed++;
if (test('deduplicates repeated --with component IDs', () => {
const request = normalizeInstallRequest({
target: 'claude',
profileId: null,
moduleIds: [],
includeComponentIds: ['lang:typescript', 'lang:typescript', 'lang:python'],
excludeComponentIds: [],
languages: [],
});
assert.deepStrictEqual(request.includeComponentIds, ['lang:typescript', 'lang:python']);
})) passed++; else failed++;
if (test('deduplicates repeated --without component IDs', () => {
const request = normalizeInstallRequest({
target: 'claude',
profileId: 'full',
moduleIds: [],
includeComponentIds: [],
excludeComponentIds: ['capability:media', 'capability:media', 'capability:social'],
languages: [],
});
assert.deepStrictEqual(request.excludeComponentIds, ['capability:media', 'capability:social']);
})) passed++; else failed++;
// ─── Component Catalog Validation ───
if (test('component catalog includes lang: family entries', () => {
const components = listInstallComponents({ family: 'language' });
assert.ok(components.some(c => c.id === 'lang:typescript'), 'Should have lang:typescript');
assert.ok(components.some(c => c.id === 'lang:python'), 'Should have lang:python');
assert.ok(components.some(c => c.id === 'lang:go'), 'Should have lang:go');
assert.ok(components.some(c => c.id === 'lang:java'), 'Should have lang:java');
})) passed++; else failed++;
if (test('component catalog includes framework: family entries', () => {
const components = listInstallComponents({ family: 'framework' });
assert.ok(components.some(c => c.id === 'framework:react'), 'Should have framework:react');
assert.ok(components.some(c => c.id === 'framework:nextjs'), 'Should have framework:nextjs');
assert.ok(components.some(c => c.id === 'framework:django'), 'Should have framework:django');
assert.ok(components.some(c => c.id === 'framework:springboot'), 'Should have framework:springboot');
})) passed++; else failed++;
if (test('component catalog includes capability: family entries', () => {
const components = listInstallComponents({ family: 'capability' });
assert.ok(components.some(c => c.id === 'capability:database'), 'Should have capability:database');
assert.ok(components.some(c => c.id === 'capability:security'), 'Should have capability:security');
assert.ok(components.some(c => c.id === 'capability:orchestration'), 'Should have capability:orchestration');
})) passed++; else failed++;
if (test('component catalog includes agent: family entries', () => {
const components = listInstallComponents({ family: 'agent' });
assert.ok(components.length > 0, 'Should have at least one agent component');
assert.ok(components.some(c => c.id === 'agent:security-reviewer'), 'Should have agent:security-reviewer');
})) passed++; else failed++;
if (test('component catalog includes skill: family entries', () => {
const components = listInstallComponents({ family: 'skill' });
assert.ok(components.length > 0, 'Should have at least one skill component');
assert.ok(components.some(c => c.id === 'skill:continuous-learning'), 'Should have skill:continuous-learning');
})) passed++; else failed++;
// ─── Install Plan Resolution with --with ───
if (test('--with alone resolves component modules and their dependencies', () => {
const plan = resolveInstallPlan({
includeComponentIds: ['lang:typescript'],
target: 'claude',
});
assert.ok(plan.selectedModuleIds.includes('framework-language'),
'Should include the module behind lang:typescript');
assert.ok(plan.selectedModuleIds.includes('rules-core'),
'Should include framework-language dependency rules-core');
assert.ok(plan.selectedModuleIds.includes('platform-configs'),
'Should include framework-language dependency platform-configs');
})) passed++; else failed++;
if (test('--with adds modules on top of a profile', () => {
const plan = resolveInstallPlan({
profileId: 'core',
includeComponentIds: ['capability:security'],
target: 'claude',
});
// core profile modules
assert.ok(plan.selectedModuleIds.includes('rules-core'));
assert.ok(plan.selectedModuleIds.includes('workflow-quality'));
// added by --with
assert.ok(plan.selectedModuleIds.includes('security'),
'Should include security module from --with');
})) passed++; else failed++;
if (test('multiple --with flags union their modules', () => {
const plan = resolveInstallPlan({
includeComponentIds: ['lang:typescript', 'capability:database'],
target: 'claude',
});
assert.ok(plan.selectedModuleIds.includes('framework-language'),
'Should include framework-language from lang:typescript');
assert.ok(plan.selectedModuleIds.includes('database'),
'Should include database from capability:database');
})) passed++; else failed++;
// ─── Install Plan Resolution with --without ───
if (test('--without excludes modules from a profile', () => {
const plan = resolveInstallPlan({
profileId: 'developer',
excludeComponentIds: ['capability:orchestration'],
target: 'claude',
});
assert.ok(!plan.selectedModuleIds.includes('orchestration'),
'Should exclude orchestration module');
assert.ok(plan.excludedModuleIds.includes('orchestration'),
'Should report orchestration as excluded');
// rest of developer profile should remain
assert.ok(plan.selectedModuleIds.includes('rules-core'));
assert.ok(plan.selectedModuleIds.includes('framework-language'));
assert.ok(plan.selectedModuleIds.includes('database'));
})) passed++; else failed++;
if (test('multiple --without flags exclude multiple modules', () => {
const plan = resolveInstallPlan({
profileId: 'full',
excludeComponentIds: ['capability:media', 'capability:social', 'capability:supply-chain'],
target: 'claude',
});
assert.ok(!plan.selectedModuleIds.includes('media-generation'));
assert.ok(!plan.selectedModuleIds.includes('social-distribution'));
assert.ok(!plan.selectedModuleIds.includes('supply-chain-domain'));
assert.ok(plan.excludedModuleIds.includes('media-generation'));
assert.ok(plan.excludedModuleIds.includes('social-distribution'));
assert.ok(plan.excludedModuleIds.includes('supply-chain-domain'));
})) passed++; else failed++;
// ─── Combined --with + --without ───
if (test('--with and --without work together on a profile', () => {
const plan = resolveInstallPlan({
profileId: 'developer',
includeComponentIds: ['capability:security'],
excludeComponentIds: ['capability:orchestration'],
target: 'claude',
});
assert.ok(plan.selectedModuleIds.includes('security'),
'Should include security from --with');
assert.ok(!plan.selectedModuleIds.includes('orchestration'),
'Should exclude orchestration from --without');
assert.ok(plan.selectedModuleIds.includes('rules-core'),
'Should keep profile base modules');
})) passed++; else failed++;
if (test('--without on a dependency of --with raises an error', () => {
assert.throws(
() => resolveInstallPlan({
includeComponentIds: ['capability:social'],
excludeComponentIds: ['capability:content'],
}),
/depends on excluded module/
);
})) passed++; else failed++;
// ─── Validation Errors ───
if (test('throws for unknown component ID in --with', () => {
assert.throws(
() => resolveInstallPlan({
includeComponentIds: ['lang:brainfuck-plus-plus'],
}),
/Unknown install component/
);
})) passed++; else failed++;
if (test('throws for unknown component ID in --without', () => {
assert.throws(
() => resolveInstallPlan({
profileId: 'core',
excludeComponentIds: ['capability:teleportation'],
}),
/Unknown install component/
);
})) passed++; else failed++;
if (test('throws when all modules are excluded', () => {
assert.throws(
() => resolveInstallPlan({
profileId: 'core',
excludeComponentIds: [
'baseline:rules',
'baseline:agents',
'baseline:commands',
'baseline:hooks',
'baseline:platform',
'baseline:workflow',
],
target: 'claude',
}),
/excludes every requested install module/
);
})) passed++; else failed++;
// ─── Target-Specific Behavior ───
if (test('--with respects target compatibility filtering', () => {
const plan = resolveInstallPlan({
includeComponentIds: ['capability:orchestration'],
target: 'cursor',
});
// orchestration module only supports claude, codex, opencode
assert.ok(!plan.selectedModuleIds.includes('orchestration'),
'Should skip orchestration for cursor target');
assert.ok(plan.skippedModuleIds.includes('orchestration'),
'Should report orchestration as skipped for cursor');
})) passed++; else failed++;
if (test('--without with agent: component excludes the agent module', () => {
const plan = resolveInstallPlan({
profileId: 'core',
excludeComponentIds: ['agent:security-reviewer'],
target: 'claude',
});
// agent:security-reviewer maps to agents-core module
// Since core profile includes agents-core and it is excluded, it should be gone
assert.ok(!plan.selectedModuleIds.includes('agents-core'),
'Should exclude agents-core when agent:security-reviewer is excluded');
assert.ok(plan.excludedModuleIds.includes('agents-core'),
'Should report agents-core as excluded');
})) passed++; else failed++;
if (test('--with agent: component includes the agents-core module', () => {
const plan = resolveInstallPlan({
includeComponentIds: ['agent:security-reviewer'],
target: 'claude',
});
assert.ok(plan.selectedModuleIds.includes('agents-core'),
'Should include agents-core module from agent:security-reviewer');
})) passed++; else failed++;
if (test('--with skill: component includes the parent skill module', () => {
const plan = resolveInstallPlan({
includeComponentIds: ['skill:continuous-learning'],
target: 'claude',
});
assert.ok(plan.selectedModuleIds.includes('workflow-quality'),
'Should include workflow-quality module from skill:continuous-learning');
})) passed++; else failed++;
// ─── Help Text ───
if (test('help text documents --with and --without flags', () => {
const { execFileSync } = require('child_process');
const scriptPath = path.join(__dirname, '..', '..', 'scripts', 'install-apply.js');
const result = execFileSync('node', [scriptPath, '--help'], {
encoding: 'utf8',
stdio: ['pipe', 'pipe', 'pipe'],
});
assert.ok(result.includes('--with'), 'Help should mention --with');
assert.ok(result.includes('--without'), 'Help should mention --without');
assert.ok(result.includes('component'), 'Help should describe components');
})) passed++; else failed++;
// ─── End-to-End Dry-Run ───
if (test('end-to-end: --profile developer --with capability:security --without capability:orchestration --dry-run', () => {
const { execFileSync } = require('child_process');
const scriptPath = path.join(__dirname, '..', '..', 'scripts', 'install-apply.js');
const homeDir = fs.mkdtempSync(path.join(os.tmpdir(), 'selective-e2e-'));
const projectDir = fs.mkdtempSync(path.join(os.tmpdir(), 'selective-e2e-project-'));
try {
const result = execFileSync('node', [
scriptPath,
'--profile', 'developer',
'--with', 'capability:security',
'--without', 'capability:orchestration',
'--dry-run',
], {
cwd: projectDir,
env: { ...process.env, HOME: homeDir },
encoding: 'utf8',
stdio: ['pipe', 'pipe', 'pipe'],
});
assert.ok(result.includes('Mode: manifest'), 'Should be manifest mode');
assert.ok(result.includes('Profile: developer'), 'Should show developer profile');
assert.ok(result.includes('capability:security'), 'Should show included component');
assert.ok(result.includes('capability:orchestration'), 'Should show excluded component');
assert.ok(result.includes('security'), 'Selected modules should include security');
} finally {
fs.rmSync(homeDir, { recursive: true, force: true });
fs.rmSync(projectDir, { recursive: true, force: true });
}
})) passed++; else failed++;
if (test('end-to-end: --with lang:python --with agent:security-reviewer --dry-run', () => {
const { execFileSync } = require('child_process');
const scriptPath = path.join(__dirname, '..', '..', 'scripts', 'install-apply.js');
const homeDir = fs.mkdtempSync(path.join(os.tmpdir(), 'selective-e2e-'));
const projectDir = fs.mkdtempSync(path.join(os.tmpdir(), 'selective-e2e-project-'));
try {
const result = execFileSync('node', [
scriptPath,
'--with', 'lang:python',
'--with', 'agent:security-reviewer',
'--dry-run',
], {
cwd: projectDir,
env: { ...process.env, HOME: homeDir },
encoding: 'utf8',
stdio: ['pipe', 'pipe', 'pipe'],
});
assert.ok(result.includes('Mode: manifest'), 'Should be manifest mode');
assert.ok(result.includes('lang:python'), 'Should show lang:python as included');
assert.ok(result.includes('agent:security-reviewer'), 'Should show agent:security-reviewer as included');
} finally {
fs.rmSync(homeDir, { recursive: true, force: true });
fs.rmSync(projectDir, { recursive: true, force: true });
}
})) passed++; else failed++;
if (test('end-to-end: --with with unknown component fails cleanly', () => {
const { execFileSync } = require('child_process');
const scriptPath = path.join(__dirname, '..', '..', 'scripts', 'install-apply.js');
let exitCode = 0;
let stderr = '';
try {
execFileSync('node', [
scriptPath,
'--with', 'lang:nonexistent-language',
'--dry-run',
], {
encoding: 'utf8',
stdio: ['pipe', 'pipe', 'pipe'],
});
} catch (error) {
exitCode = error.status || 1;
stderr = error.stderr || '';
}
assert.strictEqual(exitCode, 1, 'Should exit with error code 1');
assert.ok(stderr.includes('Unknown install component'), 'Should report unknown component');
})) passed++; else failed++;
if (test('end-to-end: --without with unknown component fails cleanly', () => {
const { execFileSync } = require('child_process');
const scriptPath = path.join(__dirname, '..', '..', 'scripts', 'install-apply.js');
let exitCode = 0;
let stderr = '';
try {
execFileSync('node', [
scriptPath,
'--profile', 'core',
'--without', 'capability:nonexistent',
'--dry-run',
], {
encoding: 'utf8',
stdio: ['pipe', 'pipe', 'pipe'],
});
} catch (error) {
exitCode = error.status || 1;
stderr = error.stderr || '';
}
assert.strictEqual(exitCode, 1, 'Should exit with error code 1');
assert.ok(stderr.includes('Unknown install component'), 'Should report unknown component');
})) passed++; else failed++;
// ─── End-to-End Actual Install ───
if (test('end-to-end: installs --profile core --with capability:security and writes state', () => {
const { execFileSync } = require('child_process');
const scriptPath = path.join(__dirname, '..', '..', 'scripts', 'install-apply.js');
const homeDir = fs.mkdtempSync(path.join(os.tmpdir(), 'selective-install-'));
const projectDir = fs.mkdtempSync(path.join(os.tmpdir(), 'selective-install-project-'));
try {
const _result = execFileSync('node', [
scriptPath,
'--profile', 'core',
'--with', 'capability:security',
], {
cwd: projectDir,
env: { ...process.env, HOME: homeDir },
encoding: 'utf8',
stdio: ['pipe', 'pipe', 'pipe'],
});
const claudeRoot = path.join(homeDir, '.claude');
// Security skill should be installed (from --with)
assert.ok(fs.existsSync(path.join(claudeRoot, 'skills', 'security-review', 'SKILL.md')),
'Should install security-review skill from --with');
// Core profile modules should be installed
assert.ok(fs.existsSync(path.join(claudeRoot, 'rules', 'common', 'coding-style.md')),
'Should install core rules');
// Install state should record include/exclude
const statePath = path.join(claudeRoot, 'ecc', 'install-state.json');
const state = JSON.parse(fs.readFileSync(statePath, 'utf8'));
assert.strictEqual(state.request.profile, 'core');
assert.deepStrictEqual(state.request.includeComponents, ['capability:security']);
assert.deepStrictEqual(state.request.excludeComponents, []);
assert.ok(state.resolution.selectedModules.includes('security'));
} finally {
fs.rmSync(homeDir, { recursive: true, force: true });
fs.rmSync(projectDir, { recursive: true, force: true });
}
})) passed++; else failed++;
if (test('end-to-end: installs --profile developer --without capability:orchestration and state reflects exclusion', () => {
const { execFileSync } = require('child_process');
const scriptPath = path.join(__dirname, '..', '..', 'scripts', 'install-apply.js');
const homeDir = fs.mkdtempSync(path.join(os.tmpdir(), 'selective-install-'));
const projectDir = fs.mkdtempSync(path.join(os.tmpdir(), 'selective-install-project-'));
try {
execFileSync('node', [
scriptPath,
'--profile', 'developer',
'--without', 'capability:orchestration',
], {
cwd: projectDir,
env: { ...process.env, HOME: homeDir },
encoding: 'utf8',
stdio: ['pipe', 'pipe', 'pipe'],
});
const claudeRoot = path.join(homeDir, '.claude');
// Orchestration skills should NOT be installed (from --without)
assert.ok(!fs.existsSync(path.join(claudeRoot, 'skills', 'dmux-workflows', 'SKILL.md')),
'Should not install orchestration skills');
// Developer profile base modules should be installed
assert.ok(fs.existsSync(path.join(claudeRoot, 'rules', 'common', 'coding-style.md')),
'Should install core rules');
assert.ok(fs.existsSync(path.join(claudeRoot, 'skills', 'tdd-workflow', 'SKILL.md')),
'Should install workflow skills');
const statePath = path.join(claudeRoot, 'ecc', 'install-state.json');
const state = JSON.parse(fs.readFileSync(statePath, 'utf8'));
assert.strictEqual(state.request.profile, 'developer');
assert.deepStrictEqual(state.request.excludeComponents, ['capability:orchestration']);
assert.ok(!state.resolution.selectedModules.includes('orchestration'));
} finally {
fs.rmSync(homeDir, { recursive: true, force: true });
fs.rmSync(projectDir, { recursive: true, force: true });
}
})) passed++; else failed++;
if (test('end-to-end: --with alone (no profile) installs just the component modules', () => {
const { execFileSync } = require('child_process');
const scriptPath = path.join(__dirname, '..', '..', 'scripts', 'install-apply.js');
const homeDir = fs.mkdtempSync(path.join(os.tmpdir(), 'selective-install-'));
const projectDir = fs.mkdtempSync(path.join(os.tmpdir(), 'selective-install-project-'));
try {
execFileSync('node', [
scriptPath,
'--with', 'lang:typescript',
], {
cwd: projectDir,
env: { ...process.env, HOME: homeDir },
encoding: 'utf8',
stdio: ['pipe', 'pipe', 'pipe'],
});
const claudeRoot = path.join(homeDir, '.claude');
// framework-language skill (from lang:typescript) should be installed
assert.ok(fs.existsSync(path.join(claudeRoot, 'skills', 'coding-standards', 'SKILL.md')),
'Should install framework-language skills');
// Its dependencies should be installed
assert.ok(fs.existsSync(path.join(claudeRoot, 'rules', 'common', 'coding-style.md')),
'Should install dependency rules-core');
const statePath = path.join(claudeRoot, 'ecc', 'install-state.json');
const state = JSON.parse(fs.readFileSync(statePath, 'utf8'));
assert.strictEqual(state.request.profile, null);
assert.deepStrictEqual(state.request.includeComponents, ['lang:typescript']);
assert.ok(state.resolution.selectedModules.includes('framework-language'));
} finally {
fs.rmSync(homeDir, { recursive: true, force: true });
fs.rmSync(projectDir, { recursive: true, force: true });
}
})) passed++; else failed++;
// ─── JSON output mode ───
if (test('end-to-end: --dry-run --json includes component selections in output', () => {
const { execFileSync } = require('child_process');
const scriptPath = path.join(__dirname, '..', '..', 'scripts', 'install-apply.js');
const homeDir = fs.mkdtempSync(path.join(os.tmpdir(), 'selective-e2e-'));
const projectDir = fs.mkdtempSync(path.join(os.tmpdir(), 'selective-e2e-project-'));
try {
const output = execFileSync('node', [
scriptPath,
'--profile', 'core',
'--with', 'capability:database',
'--without', 'baseline:hooks',
'--dry-run',
'--json',
], {
cwd: projectDir,
env: { ...process.env, HOME: homeDir },
encoding: 'utf8',
stdio: ['pipe', 'pipe', 'pipe'],
});
const json = JSON.parse(output);
assert.strictEqual(json.dryRun, true);
assert.ok(json.plan, 'Should include plan object');
assert.ok(
json.plan.includedComponentIds.includes('capability:database'),
'JSON output should include capability:database in included components'
);
assert.ok(
json.plan.excludedComponentIds.includes('baseline:hooks'),
'JSON output should include baseline:hooks in excluded components'
);
} finally {
fs.rmSync(homeDir, { recursive: true, force: true });
fs.rmSync(projectDir, { recursive: true, force: true });
}
})) passed++; else failed++;
console.log(`\nResults: Passed: ${passed}, Failed: ${failed}`);
process.exit(failed > 0 ? 1 : 0);
}
runTests();
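The parsing and normalization tests above define the flag contract: `--with` and `--without` are repeatable, empty values are dropped at parse time, and duplicate component IDs are removed during normalization. A self-contained sketch of that contract (hypothetical helper names; the real logic lives in `scripts/lib/install/request.js`):

```javascript
// Sketch of the repeatable --with / --without flag handling the tests pin down.
// Empty values are skipped at parse time; deduplication happens later,
// mirroring the parse/normalize split in the module under test.
function parseSelectiveFlags(argv) {
  const includeComponentIds = [];
  const excludeComponentIds = [];
  for (let i = 2; i < argv.length; i++) {
    if (argv[i] === '--with' && argv[i + 1]) includeComponentIds.push(argv[++i]);
    else if (argv[i] === '--without' && argv[i + 1]) excludeComponentIds.push(argv[++i]);
  }
  return { includeComponentIds, excludeComponentIds };
}

// Order-preserving dedupe, as asserted by the "deduplicates repeated" tests.
const dedupeIds = (ids) => [...new Set(ids)];
```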


@@ -2424,6 +2424,65 @@ function runTests() {
}
})) passed++; else failed++;
// ─── stripAnsi ───
console.log('\nstripAnsi:');
if (test('strips SGR color codes (\\x1b[...m)', () => {
assert.strictEqual(utils.stripAnsi('\x1b[31mRed text\x1b[0m'), 'Red text');
assert.strictEqual(utils.stripAnsi('\x1b[1;36mBold cyan\x1b[0m'), 'Bold cyan');
})) passed++; else failed++;
if (test('strips cursor movement sequences (\\x1b[H, \\x1b[2J, \\x1b[3J)', () => {
// These are the exact sequences reported in issue #642
assert.strictEqual(utils.stripAnsi('\x1b[H\x1b[2J\x1b[3JHello'), 'Hello');
assert.strictEqual(utils.stripAnsi('before\x1b[Hafter'), 'beforeafter');
})) passed++; else failed++;
if (test('strips cursor position sequences (\\x1b[row;colH)', () => {
assert.strictEqual(utils.stripAnsi('\x1b[5;10Hplaced'), 'placed');
})) passed++; else failed++;
if (test('strips erase line sequences (\\x1b[K, \\x1b[2K)', () => {
assert.strictEqual(utils.stripAnsi('line\x1b[Kend'), 'lineend');
assert.strictEqual(utils.stripAnsi('line\x1b[2Kend'), 'lineend');
})) passed++; else failed++;
if (test('strips OSC sequences (window title, hyperlinks)', () => {
// OSC terminated by BEL (\x07)
assert.strictEqual(utils.stripAnsi('\x1b]0;My Title\x07content'), 'content');
// OSC terminated by ST (\x1b\\)
assert.strictEqual(utils.stripAnsi('\x1b]8;;https://example.com\x1b\\link\x1b]8;;\x1b\\'), 'link');
})) passed++; else failed++;
if (test('strips charset selection (\\x1b(B)', () => {
assert.strictEqual(utils.stripAnsi('\x1b(Bnormal'), 'normal');
})) passed++; else failed++;
if (test('strips bare ESC + letter (\\x1bM reverse index)', () => {
assert.strictEqual(utils.stripAnsi('line\x1bMup'), 'lineup');
})) passed++; else failed++;
if (test('handles mixed ANSI sequences in one string', () => {
const input = '\x1b[H\x1b[2J\x1b[1;36mSession\x1b[0m summary\x1b[K';
assert.strictEqual(utils.stripAnsi(input), 'Session summary');
})) passed++; else failed++;
if (test('returns empty string for non-string input', () => {
assert.strictEqual(utils.stripAnsi(null), '');
assert.strictEqual(utils.stripAnsi(undefined), '');
assert.strictEqual(utils.stripAnsi(42), '');
})) passed++; else failed++;
if (test('preserves string with no ANSI codes', () => {
assert.strictEqual(utils.stripAnsi('plain text'), 'plain text');
assert.strictEqual(utils.stripAnsi(''), '');
})) passed++; else failed++;
if (test('handles CSI with question mark parameter (DEC private modes)', () => {
// e.g. \x1b[?25h (show cursor), \x1b[?25l (hide cursor)
assert.strictEqual(utils.stripAnsi('\x1b[?25hvisible\x1b[?25l'), 'visible');
})) passed++; else failed++;
// Summary
console.log('\n=== Test Results ===');
console.log(`Passed: ${passed}`);
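The stripAnsi cases above enumerate four classes of escape sequences: CSI (colors, cursor movement, erases, DEC private modes), OSC (titles and hyperlinks, terminated by BEL or ST), charset selection, and bare ESC-plus-letter. A sketch of a stripper that satisfies exactly those cases (the shipped `utils.stripAnsi` may use a broader pattern):

```javascript
// Sketch of an ANSI stripper covering the sequence classes tested above:
// - CSI:     ESC [ params letter   (SGR colors, cursor moves, erases, ?-modes)
// - OSC:     ESC ] ... BEL or ESC \  (window titles, hyperlinks)
// - charset: ESC ( char              (e.g. ESC ( B)
// - bare ESC + single letter         (e.g. ESC M reverse index)
function stripAnsiSketch(input) {
  if (typeof input !== 'string') return '';
  return input
    .replace(/\x1b\[[0-9;?]*[A-Za-z]/g, '')          // CSI sequences
    .replace(/\x1b\][^\x07\x1b]*(\x07|\x1b\\)/g, '') // OSC, BEL- or ST-terminated
    .replace(/\x1b\(./g, '')                         // charset selection
    .replace(/\x1b[A-Za-z]/g, '');                   // bare ESC + letter
}
```

The CSI pattern must run before the bare-ESC pattern; otherwise `\x1b[` would be consumed as ESC plus a stray `[` prefix, leaving the parameter bytes behind.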